Dataset columns:
  text               string (length 5 to 631k)
  id                 string (length 14 to 178)
  metadata           dict
  __index_level_0__  int64 (0 to 647)
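The listing above describes the dataset schema; each row below stores one source file as raw text together with its identifier and metadata. As a rough illustration only, the sketch below shows how rows with this schema could be read with the `datasets` library; the dataset id `user/code-dump` is a placeholder, not the real name of this dataset.

```python
# Minimal sketch of iterating rows with the schema above. Assumptions:
# "user/code-dump" is a placeholder dataset id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/code-dump", split="train")

row = ds[0]
print(row["id"])                       # repo-relative path of the source file
print(row["metadata"]["token_count"])  # metadata dict (file_path, repo_id, token_count)
print(row["text"][:200])               # first characters of the raw file contents
print(row["__index_level_0__"])        # int64 row index (typically a leftover pandas index)
```

The first row, shown next, stores the full contents of `configuration_sam_hq.py` in its `text` field.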
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 # This file was automatically generated from src/transformers/models/sam_hq/modular_sam_hq.py. # Do NOT edit this file manually as any edits will be overwritten by the generation of # the file from the modular. If any change should be done, please apply the change to the # modular_sam_hq.py file directly. One of our CI enforces this. # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨 # coding=utf-8 # Copyright 2025 Google Inc. HuggingFace Inc. team. All rights reserved. # # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from ...configuration_utils import PretrainedConfig class SamHQPromptEncoderConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`SamHQPromptEncoderModel`].The [`SamHQPromptEncoderModel`] module is used to encode the input 2D points and bounding boxes. Instantiating a configuration defaults will yield a similar configuration to that of the SAM_HQ model. The configuration is used to store the configuration of the model. [Uminosachi/sam-hq](https://huggingface.co/Uminosachi/sam-hq) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model's output.Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 256): Dimensionality of the hidden states. image_size (`int`, *optional*, defaults to 1024): The expected output resolution of the image. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. mask_input_channels (`int`, *optional*, defaults to 16): The number of channels to be fed to the `MaskDecoder` module. num_point_embeddings (`int`, *optional*, defaults to 4): The number of point embeddings to be used. hidden_act (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function in the encoder and pooler. """ base_config_key = "prompt_encoder_config" def __init__( self, hidden_size=256, image_size=1024, patch_size=16, mask_input_channels=16, num_point_embeddings=4, hidden_act="gelu", layer_norm_eps=1e-6, **kwargs, ): super().__init__(**kwargs) self.hidden_size = hidden_size self.image_size = image_size self.patch_size = patch_size self.image_embedding_size = image_size // patch_size self.mask_input_channels = mask_input_channels self.num_point_embeddings = num_point_embeddings self.hidden_act = hidden_act self.layer_norm_eps = layer_norm_eps class SamHQVisionConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`SamHQVisionModel`]. It is used to instantiate a SAM_HQ vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration defaults will yield a similar configuration to that of the SAM_HQ ViT-h [facebook/sam_hq-vit-huge](https://huggingface.co/facebook/sam_hq-vit-huge) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. 
Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. output_channels (`int`, *optional*, defaults to 256): Dimensionality of the output channels in the Patch Encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): Number of channels in the input image. image_size (`int`, *optional*, defaults to 1024): Expected resolution. Target size of the resized input image. patch_size (`int`, *optional*, defaults to 16): Size of the patches to be extracted from the input image. hidden_act (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. initializer_range (`float`, *optional*, defaults to 1e-10): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. qkv_bias (`bool`, *optional*, defaults to `True`): Whether to add a bias to query, key, value projections. mlp_ratio (`float`, *optional*, defaults to 4.0): Ratio of mlp hidden dim to embedding dim. use_abs_pos (`bool`, *optional*, defaults to `True`): Whether to use absolute position embedding. use_rel_pos (`bool`, *optional*, defaults to `True`): Whether to use relative position embedding. window_size (`int`, *optional*, defaults to 14): Window size for relative position. global_attn_indexes (`list[int]`, *optional*, defaults to `[2, 5, 8, 11]`): The indexes of the global attention layers. num_pos_feats (`int`, *optional*, defaults to 128): The dimensionality of the position embedding. mlp_dim (`int`, *optional*): The dimensionality of the MLP layer in the Transformer encoder. If `None`, defaults to `mlp_ratio * hidden_size`. Example: ```python >>> from transformers import ( ... SamHQVisionConfig, ... SamHQVisionModel, ... 
) >>> # Initializing a SamHQVisionConfig with `"facebook/sam_hq-vit-huge"` style configuration >>> configuration = SamHQVisionConfig() >>> # Initializing a SamHQVisionModel (with random weights) from the `"facebook/sam_hq-vit-huge"` style configuration >>> model = SamHQVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" base_config_key = "vision_config" model_type = "sam_hq_vision_model" def __init__( self, hidden_size=768, output_channels=256, num_hidden_layers=12, num_attention_heads=12, num_channels=3, image_size=1024, patch_size=16, hidden_act="gelu", layer_norm_eps=1e-06, attention_dropout=0.0, initializer_range=1e-10, qkv_bias=True, mlp_ratio=4.0, use_abs_pos=True, use_rel_pos=True, window_size=14, global_attn_indexes=[2, 5, 8, 11], num_pos_feats=128, mlp_dim=None, **kwargs, ): super().__init__(**kwargs) self.hidden_size = hidden_size self.output_channels = output_channels self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_channels = num_channels self.image_size = image_size self.patch_size = patch_size self.hidden_act = hidden_act self.layer_norm_eps = layer_norm_eps self.attention_dropout = attention_dropout self.initializer_range = initializer_range self.qkv_bias = qkv_bias self.mlp_ratio = mlp_ratio self.use_abs_pos = use_abs_pos self.use_rel_pos = use_rel_pos self.window_size = window_size self.global_attn_indexes = global_attn_indexes self.num_pos_feats = num_pos_feats self.mlp_dim = int(hidden_size * mlp_ratio) if mlp_dim is None else mlp_dim class SamHQMaskDecoderConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`SamHQMaskDecoder`]. It is used to instantiate a SAM_HQ mask decoder to the specified arguments, defining the model architecture. Instantiating a configuration defaults will yield a similar configuration to that of the SAM_HQ-vit-h [facebook/sam_hq-vit-huge](https://huggingface.co/facebook/sam_hq-vit-huge) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 256): Dimensionality of the hidden states. hidden_act (`str`, *optional*, defaults to `"relu"`): The non-linear activation function used inside the `SamHQMaskDecoder` module. mlp_dim (`int`, *optional*, defaults to 2048): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 2): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 8): Number of attention heads for each attention layer in the Transformer encoder. attention_downsample_rate (`int`, *optional*, defaults to 2): The downsampling rate of the attention layer. num_multimask_outputs (`int`, *optional*, defaults to 3): The number of outputs from the `SamHQMaskDecoder` module. In the Segment Anything paper, this is set to 3. iou_head_depth (`int`, *optional*, defaults to 3): The number of layers in the IoU head module. iou_head_hidden_dim (`int`, *optional*, defaults to 256): The dimensionality of the hidden states in the IoU head module. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. vit_dim (`int`, *optional*, defaults to 768): Dimensionality of the Vision Transformer (ViT) used in the `SamHQMaskDecoder` module. 
""" base_config_key = "mask_decoder_config" def __init__( self, hidden_size=256, hidden_act="relu", mlp_dim=2048, num_hidden_layers=2, num_attention_heads=8, attention_downsample_rate=2, num_multimask_outputs=3, iou_head_depth=3, iou_head_hidden_dim=256, layer_norm_eps=1e-6, vit_dim=768, **kwargs, ): super().__init__(**kwargs) self.hidden_size = hidden_size self.hidden_act = hidden_act self.mlp_dim = mlp_dim self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.attention_downsample_rate = attention_downsample_rate self.num_multimask_outputs = num_multimask_outputs self.iou_head_depth = iou_head_depth self.iou_head_hidden_dim = iou_head_hidden_dim self.layer_norm_eps = layer_norm_eps self.vit_dim = vit_dim class SamHQConfig(PretrainedConfig): r""" [`SamHQConfig`] is the configuration class to store the configuration of a [`SamHQModel`]. It is used to instantiate a SAM-HQ model according to the specified arguments, defining the vision model, prompt-encoder model and mask decoder configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the SAM-HQ-ViT-H [sushmanth/sam_hq_vit_h](https://huggingface.co/sushmanth/sam_hq_vit_h) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vision_config (Union[`dict`, `SamHQVisionConfig`], *optional*): Dictionary of configuration options used to initialize [`SamHQVisionConfig`]. prompt_encoder_config (Union[`dict`, `SamHQPromptEncoderConfig`], *optional*): Dictionary of configuration options used to initialize [`SamHQPromptEncoderConfig`]. mask_decoder_config (Union[`dict`, `SamHQMaskDecoderConfig`], *optional*): Dictionary of configuration options used to initialize [`SamHQMaskDecoderConfig`]. kwargs (*optional*): Dictionary of keyword arguments. """ model_type = "sam_hq" sub_configs = { "prompt_encoder_config": SamHQPromptEncoderConfig, "mask_decoder_config": SamHQMaskDecoderConfig, "vision_config": SamHQVisionConfig, } def __init__( self, vision_config=None, prompt_encoder_config=None, mask_decoder_config=None, initializer_range=0.02, **kwargs, ): super().__init__(**kwargs) vision_config = vision_config if vision_config is not None else {} prompt_encoder_config = prompt_encoder_config if prompt_encoder_config is not None else {} mask_decoder_config = mask_decoder_config if mask_decoder_config is not None else {} if isinstance(vision_config, SamHQVisionConfig): vision_config = vision_config.to_dict() if isinstance(prompt_encoder_config, SamHQPromptEncoderConfig): prompt_encoder_config = prompt_encoder_config.to_dict() if isinstance(mask_decoder_config, SamHQMaskDecoderConfig): mask_decoder_config = mask_decoder_config.to_dict() self.vision_config = SamHQVisionConfig(**vision_config) self.prompt_encoder_config = SamHQPromptEncoderConfig(**prompt_encoder_config) self.mask_decoder_config = SamHQMaskDecoderConfig(**mask_decoder_config) self.initializer_range = initializer_range __all__ = ["SamHQVisionConfig", "SamHQMaskDecoderConfig", "SamHQPromptEncoderConfig", "SamHQConfig"]
id: transformers/src/transformers/models/sam_hq/configuration_sam_hq.py/0
metadata: { "file_path": "transformers/src/transformers/models/sam_hq/configuration_sam_hq.py", "repo_id": "transformers", "token_count": 5920 }
__index_level_0__: 538
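The SAM-HQ configuration file stored in the row above defines three sub-configurations that compose into `SamHQConfig`. Below is a minimal sketch of that composition, assuming the classes are importable from the top-level `transformers` package as listed in the file's `__all__`; it mirrors, rather than reproduces, the constructor shown in the row.

```python
# Minimal sketch: composing a SamHQConfig from its sub-configurations,
# mirroring the constructor in configuration_sam_hq.py above. Assumes the
# classes are exported from the top-level transformers package.
from transformers import (
    SamHQConfig,
    SamHQMaskDecoderConfig,
    SamHQPromptEncoderConfig,
    SamHQVisionConfig,
)

vision_config = SamHQVisionConfig()                 # ViT image encoder (hidden_size=768, image_size=1024, ...)
prompt_encoder_config = SamHQPromptEncoderConfig()  # point/box/mask prompt encoder (hidden_size=256, ...)
mask_decoder_config = SamHQMaskDecoderConfig()      # HQ mask decoder (num_multimask_outputs=3, vit_dim=768, ...)

# SamHQConfig accepts either config objects or plain dicts; its __init__
# converts objects to dicts and rebuilds the sub-configs, as the file shows.
config = SamHQConfig(
    vision_config=vision_config,
    prompt_encoder_config=prompt_encoder_config,
    mask_decoder_config=mask_decoder_config,
)
print(config.mask_decoder_config.vit_dim)  # 768 by default
```

The second row, which begins below, stores the start of the PyTorch SeamlessM4Tv2 modeling code.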
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch SeamlessM4Tv2 model.""" import copy import math from dataclasses import dataclass from typing import Optional, Union import torch import torch.utils.checkpoint from torch import Tensor, nn from torch.nn import CrossEntropyLoss from ...activations import ACT2FN from ...cache_utils import Cache, DynamicCache, EncoderDecoderCache from ...generation import GenerationMixin from ...integrations.deepspeed import is_deepspeed_zero3_enabled from ...integrations.fsdp import is_fsdp_managed_module from ...modeling_attn_mask_utils import _prepare_4d_attention_mask, _prepare_4d_causal_attention_mask from ...modeling_layers import GradientCheckpointingLayer from ...modeling_outputs import ( BaseModelOutput, BaseModelOutputWithPastAndCrossAttentions, Seq2SeqLMOutput, Seq2SeqModelOutput, Wav2Vec2BaseModelOutput, ) from ...modeling_utils import PreTrainedModel from ...utils import ModelOutput, auto_docstring, logging from ...utils.deprecation import deprecate_kwarg from .configuration_seamless_m4t_v2 import SeamlessM4Tv2Config logger = logging.get_logger(__name__) SEAMLESS_M4T_V2_COMMON_CUSTOM_ARGS = r""" input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_banks)`): Input audio features. This should be returned by the [`SeamlessM4TFeatureExtractor`] class or the [`SeamlessM4TProcessor`] class. See [`SeamlessM4TFeatureExtractor.__call__`] for details. decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are decoder input IDs?](../glossary#decoder-input-ids) Bart uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). For translation and summarization training, `decoder_input_ids` should be provided. If no `decoder_input_ids` is provided, the model will create this tensor by shifting the `input_ids` to the right for denoising pre-training following the paper. decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also be used by default. If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_attention_mask`] and modify to your needs. See diagram 1 in [the paper](https://huggingface.co/papers/1910.13461) for more information on the default strategy. inputs_embeds (`torch.FloatTensor` of shape`(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. """ @dataclass @auto_docstring( custom_intro=""" Class defining the generated outputs from [`SeamlessM4Tv2Model`], [`SeamlessM4Tv2ForTextToText`], [`SeamlessM4Tv2ForTextToSpeech`], [`SeamlessM4Tv2ForSpeechToSpeech`] and [`SeamlessM4Tv2ForTextToSpeech`]. """ ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TGenerationOutput with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2GenerationOutput(ModelOutput): r""" waveform (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): The final audio waveform predicted by the model. waveform_lengths (`torch.IntTensor` of shape `(batch_size,)`, *optional*): The length in samples of each element in the `waveform` batch. sequences (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): The generated translated sequences. This is the output of the text-to-text or the speech-to-text models. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`. unit_sequences (`torch.LongTensor` of shape `(batch_size, unit_sequence_length)`, *optional*): The generated translated unit sequences. This is the output of the text-to-units model. The second dimension (unit_sequence_length) is either equal to `t2u_max_length` or shorter if all batches finished early due to the `t2u_eos_token_id`. """ waveform: Optional[torch.FloatTensor] = None waveform_lengths: Optional[torch.IntTensor] = None sequences: Optional[tuple[torch.FloatTensor]] = None unit_sequences: Optional[tuple[torch.FloatTensor]] = None @dataclass @auto_docstring( custom_intro=""" Class defining the outputs from [`SeamlessM4Tv2TextToUnitDecoder`]. """ ) class SeamlessM4Tv2TextToUnitDecoderOutput(ModelOutput): r""" padding_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Indicates which inputs are to be ignored due to padding, where elements are either 1 for *not masked* or 0 for *masked* """ last_hidden_state: Optional[torch.FloatTensor] = None hidden_states: Optional[tuple[torch.FloatTensor]] = None attentions: Optional[tuple[torch.FloatTensor]] = None padding_mask: Optional[torch.Tensor] = None @dataclass @auto_docstring( custom_intro=""" Class defining the outputs from [`SeamlessM4Tv2TextToUnitForConditionalGeneration`] and [`SeamlessM4Tv2TextToUnitModel`]. """ ) class SeamlessM4Tv2TextToUnitOutput(ModelOutput): r""" last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the decoder of the model. If `past_key_values` is used only the last hidden-state of the sequences of shape `(batch_size, 1, hidden_size)` is output. padding_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Indicates which inputs are to be ignored due to padding, where elements are either 1 for *not masked* or 0 for *masked* loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): Language modeling loss. 
""" last_hidden_state: Optional[torch.FloatTensor] = None padding_mask: Optional[torch.Tensor] = None decoder_hidden_states: Optional[tuple[torch.FloatTensor]] = None decoder_attentions: Optional[tuple[torch.FloatTensor]] = None encoder_last_hidden_state: Optional[torch.FloatTensor] = None encoder_hidden_states: Optional[tuple[torch.FloatTensor]] = None encoder_attentions: Optional[tuple[torch.FloatTensor]] = None loss: Optional[torch.FloatTensor] = None ############ UTILS ################ # Copied from transformers.models.roberta.modeling_roberta.create_position_ids_from_input_ids def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0): """ Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. This is modified from fairseq's `utils.make_positions`. Args: x: torch.Tensor x: Returns: torch.Tensor """ # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA. mask = input_ids.ne(padding_idx).int() incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask return incremental_indices.long() + padding_idx # Copied from transformers.models.bart.modeling_bart.shift_tokens_right def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): """ Shift input ids one token to the right. """ shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = decoder_start_token_id if pad_token_id is None: raise ValueError("self.model.config.pad_token_id has to be defined.") # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids def _compute_new_attention_mask(hidden_states: torch.Tensor, seq_lens: torch.Tensor): """ Computes an attention mask of the form `(batch, seq_len)` with an attention for each element in the batch that stops at the corresponding element in `seq_lens`. Args: hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, *)`): The sequences to mask, where `*` is any number of sequence-specific dimensions including none. seq_lens (`torch.Tensor` of shape `(batch)`: Each element represents the length of the sequence at the same index in `hidden_states` Returns: `torch.FloatTensor`: The float attention mask of shape `(batch, seq_len)` """ batch_size, mask_seq_len = hidden_states.shape[:2] indices = torch.arange(mask_seq_len, device=seq_lens.device).expand(batch_size, -1) bool_mask = indices >= seq_lens.unsqueeze(1).expand(-1, mask_seq_len) mask = hidden_states.new_ones((batch_size, mask_seq_len)) mask = mask.masked_fill(bool_mask, 0) return mask # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.format_speech_generation_kwargs with SeamlessM4T->SeamlessM4Tv2 def format_speech_generation_kwargs(kwargs): """ Format kwargs for SeamlessM4Tv2 models that generate speech, attribute kwargs to either the text generation or the speech generation models. Args: kwargs (`dict`)`: Keyword arguments are of two types: - Without a prefix, they will be entered as `**kwargs` for the `generate` method of each sub-model, except for `decoder_input_ids` which will only be passed through the text components. - With a *text_* or *speech_* prefix, they will be input for the `generate` method of the text model and speech model respectively. It has the priority over the keywords without a prefix. 
This means you can, for example, specify a generation strategy for one generation but not for the other. """ # attribute kwargs to models kwargs_text = {} kwargs_speech = {} for key, value in kwargs.items(): if key.startswith("text_"): key = key[len("text_") :] kwargs_text[key] = value elif key.startswith("speech_"): key = key[len("speech_") :] kwargs_speech[key] = value elif key == "generation_config": kwargs_text[key] = value else: # If the key is already in a specific config, then it's been set with a # submodules specific value and we don't override if key not in kwargs_text: kwargs_text[key] = value if key not in kwargs_speech: kwargs_speech[key] = value return kwargs_text, kwargs_speech ############ SPEECH ENCODER related code ################ class SeamlessM4Tv2ConformerFeatureProjection(nn.Module): # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TConformerFeatureProjection.__init__ def __init__(self, config): super().__init__() self.layer_norm = nn.LayerNorm(config.feature_projection_input_dim, eps=config.layer_norm_eps) self.projection = nn.Linear(config.feature_projection_input_dim, config.hidden_size) self.dropout = nn.Dropout(config.speech_encoder_dropout) def forward(self, hidden_states): # non-projected hidden states are needed for quantization norm_hidden_states = self.layer_norm(hidden_states.to(self.layer_norm.weight.dtype)) hidden_states = self.projection(norm_hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TConformerFeedForward with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2ConformerFeedForward(nn.Module): def __init__(self, config, act_fn=None, dropout=None): super().__init__() dropout = dropout if dropout is not None else config.speech_encoder_dropout act_fn = act_fn if act_fn is not None else config.speech_encoder_hidden_act self.intermediate_dropout = nn.Dropout(dropout) self.intermediate_dense = nn.Linear(config.hidden_size, config.speech_encoder_intermediate_size) self.intermediate_act_fn = ACT2FN[act_fn] if isinstance(act_fn, str) else act_fn self.output_dense = nn.Linear(config.speech_encoder_intermediate_size, config.hidden_size) self.output_dropout = nn.Dropout(dropout) def forward(self, hidden_states): hidden_states = self.intermediate_dense(hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) hidden_states = self.intermediate_dropout(hidden_states) hidden_states = self.output_dense(hidden_states) hidden_states = self.output_dropout(hidden_states) return hidden_states class SeamlessM4Tv2ConformerConvolutionModule(nn.Module): """Convolution block used in the conformer block. 
Uses a causal depthwise convolution similar to that described in Section 2.1 of https://huggingface.co/papers/1609.03499 """ def __init__(self, config): super().__init__() if (config.conv_depthwise_kernel_size - 1) % 2 == 1: raise ValueError("`config.conv_depthwise_kernel_size` should be a odd number for 'SAME' padding") self.layer_norm = nn.LayerNorm(config.hidden_size) self.pointwise_conv1 = nn.Conv1d( config.hidden_size, 2 * config.hidden_size, kernel_size=1, stride=1, padding=0, bias=False, ) self.glu = nn.GLU(dim=1) self.depthwise_conv = nn.Conv1d( config.hidden_size, config.hidden_size, config.conv_depthwise_kernel_size, stride=1, padding=0, groups=config.hidden_size, bias=False, ) self.depthwise_layer_norm = nn.LayerNorm(config.hidden_size) self.activation = ACT2FN[config.speech_encoder_hidden_act] self.pointwise_conv2 = nn.Conv1d( config.hidden_size, config.hidden_size, kernel_size=1, stride=1, padding=0, bias=False, ) self.dropout = nn.Dropout(config.speech_encoder_dropout) def forward(self, hidden_states, attention_mask=None): hidden_states = self.layer_norm(hidden_states) # Ensure that we do not leak padded positions in depthwise convolution. # Put 0 where necessary if attention_mask is not None: hidden_states = hidden_states.masked_fill(~attention_mask.bool().unsqueeze(-1), 0.0) # exchange the temporal dimension and the feature dimension hidden_states = hidden_states.transpose(1, 2) # GLU mechanism # => (batch, 2*channel, dim) hidden_states = self.pointwise_conv1(hidden_states) # => (batch, channel, dim) hidden_states = self.glu(hidden_states) # Pad the sequence entirely on the left because of causal convolution. hidden_states = torch.nn.functional.pad(hidden_states, (self.depthwise_conv.kernel_size[0] - 1, 0)) # 1D Depthwise Conv hidden_states = self.depthwise_conv(hidden_states) hidden_states = self.depthwise_layer_norm(hidden_states.transpose(1, 2)).transpose(1, 2) hidden_states = self.activation(hidden_states) hidden_states = self.pointwise_conv2(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = hidden_states.transpose(1, 2) return hidden_states class SeamlessM4Tv2ConformerSelfAttention(nn.Module): """Construct a SeamlessM4Tv2ConformerSelfAttention object. Can be enhanced with relative position embeddings. 
""" def __init__(self, config, use_position_embeddings=True): super().__init__() self.head_size = config.hidden_size // config.speech_encoder_attention_heads self.num_heads = config.speech_encoder_attention_heads self.position_embeddings_type = config.position_embeddings_type if use_position_embeddings else None self.linear_q = nn.Linear(config.hidden_size, config.hidden_size) self.linear_k = nn.Linear(config.hidden_size, config.hidden_size) self.linear_v = nn.Linear(config.hidden_size, config.hidden_size) self.linear_out = nn.Linear(config.hidden_size, config.hidden_size) self.dropout = nn.Dropout(p=config.speech_encoder_dropout) if self.position_embeddings_type == "relative_key": self.left_max_position_embeddings = config.left_max_position_embeddings self.right_max_position_embeddings = config.right_max_position_embeddings num_positions = self.left_max_position_embeddings + self.right_max_position_embeddings + 1 self.distance_embedding = nn.Embedding(num_positions, self.head_size) def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: # self-attention mechanism batch_size, sequence_length, hidden_size = hidden_states.size() # make sure query/key states can be != value states query_key_states = hidden_states value_states = hidden_states # project query_key_states and value_states query = self.linear_q(query_key_states).view(batch_size, -1, self.num_heads, self.head_size) key = self.linear_k(query_key_states).view(batch_size, -1, self.num_heads, self.head_size) value = self.linear_v(value_states).view(batch_size, -1, self.num_heads, self.head_size) # => (batch, head, time1, d_k) query = query.transpose(1, 2) key = key.transpose(1, 2) value = value.transpose(1, 2) attn_weights = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.head_size) if self.position_embeddings_type == "relative_key": query_length, key_length = query.shape[2], key.shape[2] position_ids_l = torch.arange(query_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) position_ids_r = torch.arange(key_length, dtype=torch.long, device=hidden_states.device).view(1, -1) distance = position_ids_r - position_ids_l distance = torch.clamp(distance, -self.left_max_position_embeddings, self.right_max_position_embeddings) positional_embedding = self.distance_embedding(distance + self.left_max_position_embeddings) positional_embedding = positional_embedding.to(dtype=query.dtype) # fp16 compatibility relative_position_attn_weights = torch.einsum("bhld,lrd->bhlr", query, positional_embedding) attn_weights = attn_weights + (relative_position_attn_weights / math.sqrt(self.head_size)) # apply attention_mask if necessary if attention_mask is not None: attn_weights = attn_weights + attention_mask # => (batch, head, time1, time2) attn_weights = torch.softmax(attn_weights, dim=-1) attn_weights = self.dropout(attn_weights) # => (batch, head, time1, d_k) attn_output = torch.matmul(attn_weights, value) # => (batch, time1, hidden_size) attn_output = attn_output.transpose(1, 2).reshape(batch_size, -1, self.num_heads * self.head_size) attn_output = self.linear_out(attn_output) if not output_attentions: attn_weights = None return attn_output, attn_weights class SeamlessM4Tv2ConformerEncoderLayer(GradientCheckpointingLayer): """Conformer block based on https://huggingface.co/papers/2005.08100.""" # Copied from 
transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerEncoderLayer.__init__ with Wav2Vec2->SeamlessM4Tv2, attention_dropout->speech_encoder_dropout, torch.nn->nn def __init__(self, config): super().__init__() embed_dim = config.hidden_size dropout = config.speech_encoder_dropout # Feed-forward 1 self.ffn1_layer_norm = nn.LayerNorm(embed_dim) self.ffn1 = SeamlessM4Tv2ConformerFeedForward(config) # Self-Attention self.self_attn_layer_norm = nn.LayerNorm(embed_dim) self.self_attn_dropout = nn.Dropout(dropout) self.self_attn = SeamlessM4Tv2ConformerSelfAttention(config) # Conformer Convolution self.conv_module = SeamlessM4Tv2ConformerConvolutionModule(config) # Feed-forward 2 self.ffn2_layer_norm = nn.LayerNorm(embed_dim) self.ffn2 = SeamlessM4Tv2ConformerFeedForward(config) self.final_layer_norm = nn.LayerNorm(embed_dim) def forward( self, hidden_states, attention_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, conv_attention_mask: Optional[torch.Tensor] = None, ): hidden_states = hidden_states # 1. Feed-Forward 1 layer residual = hidden_states hidden_states = self.ffn1_layer_norm(hidden_states) hidden_states = self.ffn1(hidden_states) hidden_states = hidden_states * 0.5 + residual residual = hidden_states # 2. Self-Attention layer hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, ) hidden_states = self.self_attn_dropout(hidden_states) hidden_states = hidden_states + residual # 3. Convolutional Layer residual = hidden_states hidden_states = self.conv_module(hidden_states, attention_mask=conv_attention_mask) hidden_states = residual + hidden_states # 4. Feed-Forward 2 Layer residual = hidden_states hidden_states = self.ffn2_layer_norm(hidden_states) hidden_states = self.ffn2(hidden_states) hidden_states = hidden_states * 0.5 + residual hidden_states = self.final_layer_norm(hidden_states) return hidden_states, attn_weights class SeamlessM4Tv2ConformerEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.dropout = nn.Dropout(config.speech_encoder_dropout) self.layers = nn.ModuleList( [SeamlessM4Tv2ConformerEncoderLayer(config) for _ in range(config.speech_encoder_layers)] ) self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.gradient_checkpointing = False def _apply_chunk_attention(self, attention_mask, hidden_states): """ Creates a chunk attention mask. It creates a mask to prevent attention across chunks, ensuring that each position attends only to positions within its own chunk. If a left chunk overlap is specified (`speech_encoder_chunk_size` in the configuration), the attention mask is adjusted accordingly to allow each position to also attends the `speech_encoder_chunk_size - 1` previous chunks. 
""" sequence_len = hidden_states.shape[1] chunk_indices = torch.arange(sequence_len, device=hidden_states.device) chunk_indices = torch.div(chunk_indices, self.config.speech_encoder_chunk_size).long() start_indices = torch.full_like(chunk_indices, 0) if self.config.speech_encoder_left_chunk_num >= 0: start_indices = (chunk_indices - self.config.speech_encoder_left_chunk_num).clamp_(min=0) start_indices = start_indices * self.config.speech_encoder_chunk_size start_indices = start_indices start_indices = start_indices.unsqueeze(1).expand(-1, sequence_len) end_indices = ((chunk_indices + 1) * self.config.speech_encoder_chunk_size).clamp_(max=sequence_len) end_indices = end_indices.unsqueeze(1).expand(-1, sequence_len) indices = torch.arange(sequence_len, device=hidden_states.device).unsqueeze(0).expand(sequence_len, -1) chunk_mask = (indices < start_indices) | (indices >= end_indices) chunk_mask = chunk_mask.unsqueeze(0).unsqueeze(0) attention_mask = chunk_mask if attention_mask is None else (attention_mask.bool() | chunk_mask) attention_mask = attention_mask.to(dtype=hidden_states.dtype) return attention_mask def forward( self, hidden_states, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None conv_attention_mask = attention_mask if attention_mask is not None: # make sure padded tokens output 0 hidden_states = hidden_states.masked_fill(~attention_mask.bool().unsqueeze(-1), 0.0) # extend attention_mask attention_mask = 1.0 - attention_mask[:, None, None, :].to(dtype=hidden_states.dtype) attention_mask = attention_mask.expand( attention_mask.shape[0], 1, attention_mask.shape[-1], attention_mask.shape[-1] ) if self.config.speech_encoder_chunk_size is not None: attention_mask = self._apply_chunk_attention(attention_mask, hidden_states) if attention_mask is not None: attention_mask = attention_mask * torch.finfo(hidden_states.dtype).min hidden_states = self.dropout(hidden_states) synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for i, layer in enumerate(self.layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) dropout_probability = torch.rand([]) skip_the_layer = self.training and dropout_probability < self.config.speech_encoder_layerdrop if not skip_the_layer or synced_gpus: # under fsdp or deepspeed zero3 all gpus must run in sync layer_outputs = layer( hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, conv_attention_mask=conv_attention_mask, ) hidden_states = layer_outputs[0] if skip_the_layer: layer_outputs = (None, None) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) hidden_states = self.layer_norm(hidden_states) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_self_attentions, ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TConformerAdapterLayer with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2ConformerAdapterLayer(nn.Module): def __init__(self, config): super().__init__() embed_dim = config.hidden_size dropout = config.adaptor_dropout 
self.kernel_size = config.adaptor_kernel_size self.stride = config.adaptor_stride # 1. residual convolution self.residual_layer_norm = nn.LayerNorm(embed_dim) self.residual_conv = nn.Conv1d( embed_dim, 2 * embed_dim, self.kernel_size, stride=self.stride, padding=self.stride // 2, ) self.activation = nn.GLU(dim=1) # Self-Attention self.self_attn_layer_norm = nn.LayerNorm(embed_dim) self.self_attn_conv = nn.Conv1d( embed_dim, 2 * embed_dim, self.kernel_size, stride=self.stride, padding=self.stride // 2, ) self.self_attn = SeamlessM4Tv2ConformerSelfAttention(config, use_position_embeddings=False) self.self_attn_dropout = nn.Dropout(dropout) # Feed-forward self.ffn_layer_norm = nn.LayerNorm(embed_dim) self.ffn = SeamlessM4Tv2ConformerFeedForward(config, act_fn="relu", dropout=dropout) def _compute_sub_sample_lengths_from_attention_mask(self, attention_mask): pad = self.kernel_size // 2 seq_lens = attention_mask.size(1) - (1 - attention_mask.int()).sum(1) seq_lens = ((seq_lens + 2 * pad - self.kernel_size) / self.stride) + 1 return seq_lens.floor() def forward( self, hidden_states, attention_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ): residual = self.residual_layer_norm(hidden_states) # Apply pooling to the residual to match the sequence length of the # multi-head attention output. # (batch, seq_len, feature_dim) -> (batch, feature_dim, seq_len) residual = residual.transpose(1, 2) residual = self.residual_conv(residual) residual = self.activation(residual) # (batch, feature_dim, seq_len) -> (batch, seq_len, feature_dim) residual = residual.transpose(1, 2) hidden_states = self.self_attn_layer_norm(hidden_states) # Apply pooling before feeding to the multihead-attention layer. # (batch, seq_len, feature_dim) -> (batch, feature_dim, seq_len) hidden_states = hidden_states.transpose(1, 2) hidden_states = self.self_attn_conv(hidden_states) hidden_states = self.activation(hidden_states) # (batch, feature_dim, seq_len) -> (batch, seq_len, feature_dim) hidden_states = hidden_states.transpose(1, 2) if attention_mask is not None: sub_sampled_lengths = self._compute_sub_sample_lengths_from_attention_mask(attention_mask).to( hidden_states.device ) attention_mask = _compute_new_attention_mask(hidden_states=hidden_states, seq_lens=sub_sampled_lengths) attention_mask = _prepare_4d_attention_mask( attention_mask, hidden_states.dtype, ) # The rest of the computation is identical to a vanilla Transformer # encoder layer. 
hidden_states, attn_weights = self.self_attn( hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, ) hidden_states = self.self_attn_dropout(hidden_states) hidden_states = hidden_states + residual residual = hidden_states hidden_states = self.ffn_layer_norm(hidden_states) hidden_states = self.ffn(hidden_states) + residual return hidden_states # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TConformerAdapter with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2ConformerAdapter(nn.Module): def __init__(self, config): super().__init__() self.layers = nn.ModuleList( SeamlessM4Tv2ConformerAdapterLayer(config) for _ in range(config.num_adapter_layers) ) def forward(self, hidden_states, attention_mask): # down project hidden_states if necessary for layer in self.layers: hidden_states = layer(hidden_states, attention_mask) return hidden_states ############ TEXT / UNITS related code ################ # Copied from transformers.models.m2m_100.modeling_m2m_100.M2M100ScaledWordEmbedding with M2M100->SeamlessM4Tv2 class SeamlessM4Tv2ScaledWordEmbedding(nn.Embedding): """ This module overrides nn.Embeddings' forward by multiplying with embeddings scale. """ def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int, embed_scale: Optional[float] = 1.0): super().__init__(num_embeddings, embedding_dim, padding_idx) self.embed_scale = embed_scale def forward(self, input_ids: torch.Tensor): return super().forward(input_ids) * self.embed_scale # Copied from transformers.models.m2m_100.modeling_m2m_100.M2M100SinusoidalPositionalEmbedding class SeamlessM4Tv2SinusoidalPositionalEmbedding(nn.Module): """This module produces sinusoidal positional embeddings of any length.""" def __init__(self, num_positions: int, embedding_dim: int, padding_idx: Optional[int] = None): super().__init__() self.offset = 2 self.embedding_dim = embedding_dim self.padding_idx = padding_idx self.make_weights(num_positions + self.offset, embedding_dim, padding_idx) def make_weights(self, num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None): emb_weights = self.get_embedding(num_embeddings, embedding_dim, padding_idx) if hasattr(self, "weights"): # in forward put the weights on the correct dtype and device of the param emb_weights = emb_weights.to(dtype=self.weights.dtype, device=self.weights.device) self.register_buffer("weights", emb_weights, persistent=False) @staticmethod def get_embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None): """ Build sinusoidal embeddings. This matches the implementation in tensor2tensor, but differs slightly from the description in Section 3.5 of "Attention Is All You Need". """ half_dim = embedding_dim // 2 emb = math.log(10000) / (half_dim - 1) emb = torch.exp(torch.arange(half_dim, dtype=torch.int64).float() * -emb) emb = torch.arange(num_embeddings, dtype=torch.int64).float().unsqueeze(1) * emb.unsqueeze(0) emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1) if embedding_dim % 2 == 1: # zero pad emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) if padding_idx is not None: emb[padding_idx, :] = 0 return emb.to(torch.get_default_dtype()) @torch.no_grad() def forward( self, input_ids: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, past_key_values_length: int = 0, ): if input_ids is not None: bsz, seq_len = input_ids.size() # Create the position ids from the input token ids. Any padded tokens remain padded. 
position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to( input_ids.device ) else: bsz, seq_len = inputs_embeds.size()[:-1] position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds, past_key_values_length) # expand embeddings if needed max_pos = self.padding_idx + 1 + seq_len + past_key_values_length if max_pos > self.weights.size(0): self.make_weights(max_pos + self.offset, self.embedding_dim, self.padding_idx) return self.weights.index_select(0, position_ids.view(-1)).view(bsz, seq_len, self.weights.shape[-1]).detach() def create_position_ids_from_inputs_embeds(self, inputs_embeds, past_key_values_length): """ We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. Args: inputs_embeds: torch.Tensor Returns: torch.Tensor """ input_shape = inputs_embeds.size()[:-1] sequence_length = input_shape[1] position_ids = torch.arange( self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device ) return position_ids.unsqueeze(0).expand(input_shape).contiguous() + past_key_values_length class SeamlessM4Tv2Attention(nn.Module): """Multi-headed attention from 'Attention Is All You Need' paper""" # Copied from transformers.models.bart.modeling_bart.BartAttention.__init__ with Bart->SeamlessM4Tv2 def __init__( self, embed_dim: int, num_heads: int, dropout: float = 0.0, is_decoder: bool = False, bias: bool = True, is_causal: bool = False, config: Optional[SeamlessM4Tv2Config] = None, layer_idx: Optional[int] = None, ): super().__init__() self.embed_dim = embed_dim self.num_heads = num_heads self.dropout = dropout self.head_dim = embed_dim // num_heads self.config = config if (self.head_dim * num_heads) != self.embed_dim: raise ValueError( f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}" f" and `num_heads`: {num_heads})." ) self.scaling = self.head_dim**-0.5 self.is_decoder = is_decoder self.is_causal = is_causal self.layer_idx = layer_idx if layer_idx is None and self.is_decoder: logger.warning_once( f"Instantiating a decoder {self.__class__.__name__} without passing `layer_idx` is not recommended and " "will lead to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` " "when creating this class." 
) self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias) self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58") def forward( self, hidden_states: torch.Tensor, encoder_hidden_states: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, attention_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, cache_position: Optional[torch.Tensor] = None, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: """Input shape: Batch x Time x Channel""" is_cross_attention = encoder_hidden_states is not None batch_size, seq_length = hidden_states.shape[:2] if past_key_values is not None: if isinstance(past_key_values, EncoderDecoderCache): is_updated = past_key_values.is_updated.get(self.layer_idx) if is_cross_attention: # after the first generated id, we can subsequently re-use all key/value_states from cache curr_past_key_value = past_key_values.cross_attention_cache else: curr_past_key_value = past_key_values.self_attention_cache else: curr_past_key_value = past_key_values current_states = encoder_hidden_states if is_cross_attention else hidden_states if is_cross_attention and past_key_values is not None and is_updated: # reuse k,v, cross_attentions key_states = curr_past_key_value.layers[self.layer_idx].keys value_states = curr_past_key_value.layers[self.layer_idx].values else: key_states = self.k_proj(current_states) value_states = self.v_proj(current_states) key_states = key_states.view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2) value_states = value_states.view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2) if past_key_values is not None: # save all key/value_states to cache to be re-used for fast auto-regressive generation cache_position = cache_position if not is_cross_attention else None key_states, value_states = curr_past_key_value.update( key_states, value_states, self.layer_idx, {"cache_position": cache_position} ) # set flag that curr layer for cross-attn is already updated so we can re-use in subsequent calls if is_cross_attention: past_key_values.is_updated[self.layer_idx] = True query_states = self.q_proj(hidden_states) query_states = query_states.reshape(batch_size, seq_length, self.num_heads, self.head_dim).transpose(1, 2) query_states = query_states * self.scaling attention_scores = torch.matmul(query_states, key_states.transpose(-1, -2)) if attention_mask is not None: attention_scores = attention_scores + attention_mask # (batch_size, n_heads, seq_length, key_length) attn_weights = nn.functional.softmax(attention_scores.float(), dim=-1).type_as(attention_scores) attn_weights = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training) # attn_output = torch.bmm(attn_probs, value_states) ? context_states = torch.matmul(attn_weights, value_states) # attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim) ? 
context_states = context_states.permute(0, 2, 1, 3).contiguous().view(batch_size, seq_length, -1) attn_output = self.out_proj(context_states) return attn_output, attn_weights # Copied from transformers.models.nllb_moe.modeling_nllb_moe.NllbMoeDenseActDense with NllbMoe->SeamlessM4Tv2,DenseActDense->FeedForwardNetwork, d_model->hidden_size class SeamlessM4Tv2FeedForwardNetwork(nn.Module): def __init__(self, config: SeamlessM4Tv2Config, ffn_dim: int): super().__init__() self.fc1 = nn.Linear(config.hidden_size, ffn_dim) self.fc2 = nn.Linear(ffn_dim, config.hidden_size) self.dropout = nn.Dropout(config.activation_dropout) self.act = ACT2FN[config.activation_function] def forward(self, hidden_states): hidden_states = self.fc1(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.dropout(hidden_states) if ( isinstance(self.fc2.weight, torch.Tensor) and hidden_states.dtype != self.fc2.weight.dtype and (self.fc2.weight.dtype != torch.int8 and self.fc2.weight.dtype != torch.uint8) ): hidden_states = hidden_states.to(self.fc2.weight.dtype) hidden_states = self.fc2(hidden_states) return hidden_states # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TEncoderLayer with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2EncoderLayer(GradientCheckpointingLayer): def __init__(self, config: SeamlessM4Tv2Config, encoder_ffn_dim=None, encoder_attention_heads=None): super().__init__() encoder_ffn_dim = config.encoder_ffn_dim if encoder_ffn_dim is None else encoder_ffn_dim encoder_attention_heads = ( config.encoder_attention_heads if encoder_attention_heads is None else encoder_attention_heads ) self.embed_dim = config.hidden_size self.self_attn = SeamlessM4Tv2Attention( embed_dim=self.embed_dim, num_heads=encoder_attention_heads, dropout=config.attention_dropout, ) self.attn_dropout = nn.Dropout(config.dropout) self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) self.ffn = SeamlessM4Tv2FeedForwardNetwork(config, ffn_dim=encoder_ffn_dim) self.ffn_layer_norm = nn.LayerNorm(config.hidden_size) self.ffn_dropout = nn.Dropout(config.activation_dropout) def forward( self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, output_attentions: bool = False, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. 
""" residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states, attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, ) hidden_states = self.attn_dropout(hidden_states) hidden_states = residual + hidden_states residual = hidden_states hidden_states = self.ffn_layer_norm(hidden_states) hidden_states = self.ffn(hidden_states) hidden_states = self.ffn_dropout(hidden_states) hidden_states = residual + hidden_states outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) return outputs # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TDecoderLayer with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2DecoderLayer(GradientCheckpointingLayer): def __init__( self, config: SeamlessM4Tv2Config, decoder_ffn_dim=None, decoder_attention_heads=None, layer_idx=None ): super().__init__() decoder_ffn_dim = config.decoder_ffn_dim if decoder_ffn_dim is None else decoder_ffn_dim decoder_attention_heads = ( config.decoder_attention_heads if decoder_attention_heads is None else decoder_attention_heads ) self.embed_dim = config.hidden_size self.self_attn = SeamlessM4Tv2Attention( embed_dim=self.embed_dim, num_heads=decoder_attention_heads, dropout=config.attention_dropout, is_decoder=True, layer_idx=layer_idx, ) self.dropout = config.dropout self.activation_fn = ACT2FN[config.activation_function] self.attn_dropout = nn.Dropout(config.dropout) self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) self.cross_attention = SeamlessM4Tv2Attention( self.embed_dim, decoder_attention_heads, config.attention_dropout, is_decoder=True, layer_idx=layer_idx, ) self.cross_attention_layer_norm = nn.LayerNorm(self.embed_dim) self.ffn = SeamlessM4Tv2FeedForwardNetwork(config, ffn_dim=decoder_ffn_dim) self.ffn_layer_norm = nn.LayerNorm(config.hidden_size) self.ffn_dropout = nn.Dropout(config.activation_dropout) @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58") def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, past_key_values: Optional[Cache] = None, output_attentions: Optional[bool] = False, use_cache: Optional[bool] = True, cache_position: Optional[torch.Tensor] = None, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. encoder_hidden_states (`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)` encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. past_key_values (`Tuple(torch.FloatTensor)`): cached past key and value projection states output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
""" residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) # Self Attention hidden_states, self_attn_weights = self.self_attn( hidden_states=hidden_states, past_key_values=past_key_values, attention_mask=attention_mask, output_attentions=output_attentions, cache_position=cache_position, ) hidden_states = self.attn_dropout(hidden_states) hidden_states = residual + hidden_states # Cross-Attention Block cross_attn_weights = None if encoder_hidden_states is not None: residual = hidden_states hidden_states = self.cross_attention_layer_norm(hidden_states) hidden_states, cross_attn_weights = self.cross_attention( hidden_states=hidden_states, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values, attention_mask=encoder_attention_mask, output_attentions=output_attentions, cache_position=cache_position, ) hidden_states = self.attn_dropout(hidden_states) hidden_states = residual + hidden_states # Fully Connected residual = hidden_states hidden_states = self.ffn_layer_norm(hidden_states) hidden_states = self.ffn(hidden_states) hidden_states = self.ffn_dropout(hidden_states) hidden_states = residual + hidden_states return hidden_states, self_attn_weights, cross_attn_weights class SeamlessM4Tv2TextToUnitDecoderLayer(GradientCheckpointingLayer): def __init__(self, config: SeamlessM4Tv2Config, decoder_ffn_dim=None, decoder_attention_heads=None): super().__init__() decoder_ffn_dim = config.decoder_ffn_dim if decoder_ffn_dim is None else decoder_ffn_dim decoder_attention_heads = ( config.decoder_attention_heads if decoder_attention_heads is None else decoder_attention_heads ) self.dropout = config.dropout self.embed_dim = config.hidden_size self.self_attn = SeamlessM4Tv2Attention( embed_dim=self.embed_dim, num_heads=decoder_attention_heads, dropout=config.attention_dropout, is_decoder=True, ) self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim) self.conv1 = nn.Conv1d(self.embed_dim, self.embed_dim, kernel_size=7, stride=1, padding="same") self.activation_fn = ACT2FN[config.activation_function] self.conv2 = nn.Conv1d(self.embed_dim, self.embed_dim, kernel_size=7, stride=1, padding="same") self.conv_layer_norm = nn.LayerNorm(config.hidden_size) self.conv_dropout = nn.Dropout(self.dropout) def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, padding_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = False, ) -> torch.Tensor: """ Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. padding_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Indicates which inputs are to be ignored due to padding, where elements are either 1 for *not masked* or 0 for *masked* output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. 
""" residual = hidden_states # Self Attention hidden_states, self_attn_weights = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, ) hidden_states = residual + hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) # Conv residual = hidden_states # Apply padding mask to avoid leaking padded positions in the convolution layer if padding_mask is not None: hidden_states = hidden_states.masked_fill(~padding_mask.bool().unsqueeze(-1), 0.0) hidden_states = self.conv1(hidden_states.transpose(1, 2)).transpose(1, 2) if padding_mask is not None: hidden_states = hidden_states.masked_fill(~padding_mask.bool().unsqueeze(-1), 0.0) hidden_states = self.activation_fn(hidden_states) hidden_states = self.conv2(hidden_states.transpose(1, 2)).transpose(1, 2) hidden_states = self.conv_dropout(hidden_states) hidden_states = residual + hidden_states hidden_states = self.conv_layer_norm(hidden_states) return hidden_states, self_attn_weights ############ SUB-MODELS related code ################ @auto_docstring class SeamlessM4Tv2PreTrainedModel(PreTrainedModel): config: SeamlessM4Tv2Config base_model_prefix = "seamless_m4t_v2" supports_gradient_checkpointing = True _no_split_modules = [ "SeamlessM4Tv2EncoderLayer", "SeamlessM4Tv2DecoderLayer", "SeamlessM4Tv2ConformerEncoderLayer", "SeamlessM4Tv2TextToUnitDecoderLayer", ] def _init_weights(self, module: nn.Module): """Initialize the weights""" std = self.config.initializer_range if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=std) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=std) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, SeamlessM4Tv2ConformerSelfAttention): if hasattr(module, "pos_bias_u"): nn.init.xavier_uniform_(module.pos_bias_u) if hasattr(module, "pos_bias_v"): nn.init.xavier_uniform_(module.pos_bias_v) elif isinstance(module, SeamlessM4Tv2ConformerFeatureProjection): k = math.sqrt(1 / module.projection.in_features) nn.init.uniform_(module.projection.weight, a=-k, b=k) nn.init.uniform_(module.projection.bias, a=-k, b=k) elif isinstance(module, SeamlessM4Tv2TextToUnitDecoder): module.pos_emb_alpha_char.data.fill_(1) module.pos_emb_alpha.data.fill_(1) elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) elif isinstance(module, (nn.Conv1d, nn.ConvTranspose1d)): nn.init.kaiming_normal_(module.weight) if module.bias is not None: k = math.sqrt(module.groups / (module.in_channels * module.kernel_size[0])) nn.init.uniform_(module.bias, a=-k, b=k) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TPreTrainedModel._compute_sub_sample_lengths_from_attention_mask def _compute_sub_sample_lengths_from_attention_mask(self, attention_mask): kernel_size, stride = self.config.adaptor_kernel_size, self.config.adaptor_stride pad = kernel_size // 2 seq_lens = attention_mask.size(1) - (1 - attention_mask.int()).sum(1) seq_lens = ((seq_lens + 2 * pad - kernel_size) / stride) + 1 return seq_lens.floor() def _indices_to_subwords(self, input_ids): """ Returns the corresponding text string for each input id. """ if not hasattr(self.generation_config, "id_to_text"): raise ValueError( """This model generation config doesn't have a `id_to_text` key which maps token ids to subwords. 
Make sure to load the right generation config.""" ) batch_size, sequence_len = input_ids.shape subwords_batch = [] for batch_id in range(batch_size): subwords = [] for i in range(sequence_len): subword = self.generation_config.id_to_text.get(str(input_ids[batch_id, i].item())) subwords.append(str(subword)) subwords_batch.append(subwords) return subwords_batch def _count_character_length_in_subword( self, input_ids, subwords_batch, merge_space_with_prev_subword=False, pad_token_id=0, unk_token_id=1, space="▁", ): """ Counts the number of characters per text string associated with the input token id. Args: input_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. subwords_batch (`list[list[str]]` of shape `(batch_size, sequence_length)`): Corresponding text string for each input id. merge_space_with_prev_subword (`bool`, *optional*, defaults to `False`): Indicates if the space character is merged with the previous subword. If `False`, it will be merged with the next subword. pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ text token. If it is encountered when calculating the length of a subword sample, the lengths of subsequent subwords will be set to 0. unk_token_id (`int`, *optional*, defaults to 1): The id of the _unknown_ text token. Associated to a subword of length 1. space (`str`, *optional*, defaults to `"▁"`): The space character. """ batch_size, _ = input_ids.shape char_count_per_id = input_ids.new_zeros(input_ids.size()) subword_lens = input_ids.ne(pad_token_id).sum(1) for batch_id in range(batch_size): # We slice out the tensor till the padding index. subword_indices = input_ids[batch_id, : subword_lens[batch_id]] subwords = subwords_batch[batch_id][: subword_lens[batch_id]] is_next_start_with_space = [ len(subwords[i + 1]) > 1 and subwords[i + 1][0] == space if i < len(subwords) - 1 else False for i in range(len(subwords)) ] is_punc = [ len(subwords[i]) == 1 and not subwords[i].isalpha() and not subwords[i].isnumeric() and subwords[i] != space for i in range(len(subwords)) ] for i, (subword_idx, subword) in enumerate(zip(subword_indices, subwords)): if subword_idx == pad_token_id: break if subword_idx == unk_token_id: # We set char_len to 1 for an unk token. char_len = 1 if merge_space_with_prev_subword and is_next_start_with_space[i]: char_len += 1 else: # By default, spaces are merged with the next subword. # char_len includes the space. char_len = len(subword) if merge_space_with_prev_subword: # Add the space for the next subword. if is_next_start_with_space[i]: char_len += 1 # Subtract the space for the current subword. if i > 0 and is_next_start_with_space[i - 1]: char_len -= 1 else: # Merge space with punctuation mark by default. if is_punc[i] and is_next_start_with_space[i]: char_len += 1 # Subtract the space for the subword succeeding the punctuation mark. elif i > 0 and is_punc[i - 1] and is_next_start_with_space[i - 1]: char_len -= 1 char_count_per_id[batch_id, i] = char_len return char_count_per_id def _get_char_input_ids(self, input_ids, subwords_batch, char_count_per_id, pad_token_id=0, unk_token_id=1): """ Returns the corresponding character input id for each character of `subwords_batch`. Args: input_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. subwords_batch (`list[list[str]]` of shape `(batch_size, sequence_length)`): Corresponding text string for each input id. 
char_count_per_id (`torch.Tensor` of shape `(batch_size, sequence_length)`): Number of characters per input id. pad_token_id (`int`, *optional*, defaults to 0): The id of the _padding_ text token. If it is encountered when calculating the length of a subword sample, the lengths of subsequent subwords will be set to 0. unk_token_id (`int`, *optional*, defaults to 1): The id of the _unknown_ text token. Associated to a subword of length 1. Returns: `torch.Tensor`: Tensor of shape `(batch_size, char_sequence_length)` containing the id of each character. """ if not hasattr(self.generation_config, "char_to_id"): raise ValueError( """This model generation config doesn't have a `char_to_id` key which maps characters to character ids. Make sure to load the right generation config.""" ) batch_size = input_ids.shape[0] max_len = int(char_count_per_id.sum(1).max().item()) char_seqs = input_ids.new_zeros((batch_size, max_len)).fill_(pad_token_id) subword_lens = input_ids.ne(pad_token_id).sum(1) for batch_id in range(batch_size): total = 0 subword_indices = input_ids[batch_id, : subword_lens[batch_id]] subwords = subwords_batch[batch_id][: subword_lens[batch_id]] for subword_idx, subword in zip(subword_indices, subwords): if subword_idx == unk_token_id: char_ids = [unk_token_id] else: # Get char token indices corresponding to the subwords. char_ids = [self.generation_config.char_to_id.get(ch, unk_token_id) for ch in list(subword)] char_seq_len = len(char_ids) char_seqs[batch_id, total : total + char_seq_len] = torch.tensor(char_ids).to(char_seqs) total += char_seq_len return char_seqs def _hard_upsample(self, hidden_states, durations): """ Repeats the time dimension of each sample in the batch based on the corresponding duration. Args: hidden_states (`torch.Tensor` of shape `(batch_size, sequence_length, *)`, *optional*): The sequence to repeat, where `*` is any number of sequence-specific dimensions including none. durations (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Indicates how many times to repeat time segments. """ if hidden_states.size(0) == 1: hidden_states = torch.repeat_interleave(hidden_states, durations.view(-1), dim=1) else: # if batched sample, need to interleave per sample, and pad -> loss of parallelism if hidden_states.shape[0] > 1 and self.training: logger.warning_once( """`self.training=True` and you use batching. You lose parallelism during the hifigan forward pass because the samples are interleaved.""" ) hidden_states = [ torch.repeat_interleave(hidden_state, duration, dim=0) for (hidden_state, duration) in zip(hidden_states, durations) ] hidden_states = nn.utils.rnn.pad_sequence(hidden_states, batch_first=True) return hidden_states @auto_docstring( custom_intro=""" Transformer speech encoder consisting of *config.speech_encoder_layers* conformer self attention layers. Each layer is a [`SeamlessM4Tv2ConformerEncoderLayer`]. 
""" ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TSpeechEncoder with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2SpeechEncoder(SeamlessM4Tv2PreTrainedModel): main_input_name = "input_features" def __init__(self, config: SeamlessM4Tv2Config): super().__init__(config) self.feature_projection = SeamlessM4Tv2ConformerFeatureProjection(config) self.encoder = SeamlessM4Tv2ConformerEncoder(config) self.intermediate_ffn = SeamlessM4Tv2ConformerFeedForward(config, act_fn="relu", dropout=0.0) self.adapter = SeamlessM4Tv2ConformerAdapter(config) if config.add_adapter else None self.inner_layer_norm = nn.LayerNorm(config.hidden_size) # Initialize weights and apply final processing self.post_init() def forward( self, input_features: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[tuple, Wav2Vec2BaseModelOutput]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_features is None: raise ValueError( """Both `input_features` and `inputs_embeds` are `None` in `SeamlessM4Tv2SpeechEncoder.forward`. Make sure one of them is not `None`.""" ) hidden_states = self.feature_projection(input_features) encoder_outputs = self.encoder( hidden_states, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = encoder_outputs[0] expanded_hidden_states = self.intermediate_ffn(hidden_states) hidden_states = hidden_states + 0.5 * expanded_hidden_states if self.adapter is not None: hidden_states = self.adapter(hidden_states, attention_mask=attention_mask) hidden_states = self.inner_layer_norm(hidden_states) if not return_dict: return (hidden_states,) + encoder_outputs[1:] return Wav2Vec2BaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) # inspired from MBart and NllbMoe @auto_docstring( custom_intro=""" Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a [`SeamlessM4Tv2EncoderLayer`]. 
""" ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TEncoder with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2Encoder(SeamlessM4Tv2PreTrainedModel): def __init__( self, config: SeamlessM4Tv2Config, embed_tokens: Optional[nn.Embedding] = None, is_t2u_encoder: bool = False, ): r""" embed_tokens (`nn.Embedding`, *optional*): Input embedding is_t2u_encoder (`bool`, *optional*, defaults to `False`): indicates if it belongs to the text-to-units model, in which case it won't have input embeddings """ super().__init__(config) self.dropout = config.dropout self.layerdrop = config.encoder_layerdrop self.padding_idx = config.pad_token_id embed_dim = config.hidden_size self.is_t2u_encoder = is_t2u_encoder self.max_source_positions = config.max_position_embeddings if not self.is_t2u_encoder: embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0 self.embed_tokens = SeamlessM4Tv2ScaledWordEmbedding( config.vocab_size, embed_dim, self.padding_idx, embed_scale=embed_scale ) if embed_tokens is not None: self.embed_tokens.weight = embed_tokens.weight self.embed_positions = SeamlessM4Tv2SinusoidalPositionalEmbedding( self.max_source_positions, embed_dim, self.padding_idx, ) layers = [] for _ in range(config.encoder_layers): layers.append( SeamlessM4Tv2EncoderLayer( config, encoder_attention_heads=config.encoder_attention_heads, encoder_ffn_dim=config.encoder_ffn_dim, ) ) self.layers = nn.ModuleList(layers) self.layer_norm = nn.LayerNorm(config.hidden_size) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init() def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[tuple, BaseModelOutput]: r""" Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and self.is_t2u_encoder: raise ValueError( "You cannot pass input_ids to the encoder of the text_to_units model. Pass inputs_embeds instead." ) # retrieve input_ids and inputs_embeds if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input = input_ids input_shape = input.shape input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input = inputs_embeds[:, :, -1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs_embeds is None: inputs_embeds = self.embed_tokens(input_ids) if not self.is_t2u_encoder: embed_pos = self.embed_positions(input) hidden_states = inputs_embeds + embed_pos.to(inputs_embeds.device) else: hidden_states = inputs_embeds hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) # expand attention_mask if attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] attention_mask = _prepare_4d_attention_mask(attention_mask, inputs_embeds.dtype) encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) to_drop = False if self.training: dropout_probability = torch.rand([]) if dropout_probability < self.layerdrop: # skip the layer to_drop = True if to_drop: layer_outputs = (None, None) else: layer_outputs = encoder_layer( hidden_states, attention_mask, output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) hidden_states = self.layer_norm(hidden_states) if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions ) @auto_docstring( custom_intro=""" Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`SeamlessM4Tv2DecoderLayer`]. 
""" ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TDecoder with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2Decoder(SeamlessM4Tv2PreTrainedModel): def __init__( self, config: SeamlessM4Tv2Config, embed_tokens: Optional[nn.Embedding] = None, ): r""" embed_tokens (`nn.Embedding`, *optional*): Input embedding """ super().__init__(config) self.dropout = config.dropout self.layerdrop = config.decoder_layerdrop self.padding_idx = config.pad_token_id self.vocab_size = config.vocab_size self.max_target_positions = config.max_position_embeddings embed_scale = math.sqrt(config.hidden_size) if config.scale_embedding else 1.0 if embed_tokens is not None: # if embed_tokens defined, use its shape instead self.embed_tokens = SeamlessM4Tv2ScaledWordEmbedding( embed_tokens.num_embeddings, embed_tokens.embedding_dim, self.padding_idx, embed_scale=embed_scale ) self.embed_tokens.weight = embed_tokens.weight else: self.embed_tokens = SeamlessM4Tv2ScaledWordEmbedding( self.vocab_size, config.hidden_size, self.padding_idx, embed_scale=embed_scale ) self.embed_positions = SeamlessM4Tv2SinusoidalPositionalEmbedding( self.max_target_positions, config.hidden_size, padding_idx=self.padding_idx, ) layers = [] for i in range(config.decoder_layers): layers.append( SeamlessM4Tv2DecoderLayer( config, decoder_attention_heads=config.decoder_attention_heads, decoder_ffn_dim=config.decoder_ffn_dim, layer_idx=i, ) ) self.layers = nn.ModuleList(layers) self.layer_norm = nn.LayerNorm(config.hidden_size) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init() @auto_docstring def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.LongTensor] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.Tensor] = None, ) -> Union[tuple, BaseModelOutputWithPastAndCrossAttentions]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict # retrieve input_ids and inputs_embeds if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") elif input_ids is not None: input = input_ids input_shape = input.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] input = inputs_embeds[:, :, -1] else: raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") if inputs_embeds is None: inputs_embeds = self.embed_tokens(input_ids) if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing`. Setting `use_cache=False`..." 
) use_cache = False # initialize `past_key_values` if use_cache and past_key_values is None: past_key_values = EncoderDecoderCache(DynamicCache(), DynamicCache()) if use_cache and isinstance(past_key_values, tuple): logger.warning_once( "Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.58.0. " "You should pass an instance of `EncoderDecoderCache` instead, e.g. " "`past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`." ) past_key_values = EncoderDecoderCache.from_legacy_cache(past_key_values) past_key_values_length = past_key_values.get_seq_length() if past_key_values is not None else 0 attention_mask = _prepare_4d_causal_attention_mask( attention_mask, input_shape, inputs_embeds, past_key_values_length ) # expand encoder attention mask if encoder_hidden_states is not None and encoder_attention_mask is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask( encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1] ) # embed positions positions = self.embed_positions(input, past_key_values_length=past_key_values_length) hidden_states = inputs_embeds + positions.to(inputs_embeds.device) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) # decoder layers all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: all_hidden_states += (hidden_states,) if self.training: dropout_probability = torch.rand([]) if dropout_probability < self.layerdrop: continue layer_outputs = decoder_layer( hidden_states, attention_mask, encoder_hidden_states, # as a positional argument for gradient checkpointing encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, output_attentions=output_attentions, use_cache=use_cache, cache_position=cache_position, ) hidden_states = layer_outputs[0] if output_attentions: all_self_attns += (layer_outputs[1],) if encoder_hidden_states is not None: all_cross_attentions += (layer_outputs[2],) hidden_states = self.layer_norm(hidden_states) # add hidden states from the last decoder layer if output_hidden_states: all_hidden_states += (hidden_states,) if not return_dict: return tuple( v for v in [hidden_states, past_key_values, all_hidden_states, all_self_attns, all_cross_attentions] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=past_key_values, hidden_states=all_hidden_states, attentions=all_self_attns, cross_attentions=all_cross_attentions, ) @auto_docstring( custom_intro=""" Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`SeamlessM4Tv2DecoderLayer`]. 
""" ) class SeamlessM4Tv2TextToUnitDecoder(SeamlessM4Tv2PreTrainedModel): def __init__( self, config: SeamlessM4Tv2Config, embed_tokens: Optional[nn.Embedding] = None, ): r""" embed_tokens (`nn.Embedding`, *optional*): Input embedding """ super().__init__(config) self.dropout = config.dropout self.layerdrop = config.decoder_layerdrop self.padding_idx = config.pad_token_id self.vocab_size = config.vocab_size self.max_target_positions = config.max_position_embeddings self.embed_scale = math.sqrt(config.hidden_size) if config.scale_embedding else 1.0 if embed_tokens is not None: # if embed_tokens defined, use its shape instead self.embed_tokens = nn.Embedding(embed_tokens.num_embeddings, embed_tokens.embedding_dim, self.padding_idx) self.embed_tokens.weight = embed_tokens.weight else: self.embed_tokens = nn.Embedding(self.vocab_size, config.hidden_size, self.padding_idx) self.embed_char = nn.Embedding(config.char_vocab_size, config.hidden_size) self.embed_char_positions = SeamlessM4Tv2SinusoidalPositionalEmbedding( self.max_target_positions, config.hidden_size, padding_idx=self.padding_idx, ) self.pos_emb_alpha_char = nn.Parameter(torch.ones(1)) self.pos_emb_alpha = nn.Parameter(torch.ones(1)) self.duration_predictor = SeamlessM4Tv2VariancePredictor( config.variance_predictor_embed_dim, config.variance_predictor_hidden_dim, config.variance_predictor_kernel_size, config.variance_pred_dropout, ) self.embed_positions = SeamlessM4Tv2SinusoidalPositionalEmbedding( self.max_target_positions, config.hidden_size, padding_idx=self.padding_idx, ) layers = [] for _ in range(config.decoder_layers): layers.append( SeamlessM4Tv2TextToUnitDecoderLayer( config, decoder_attention_heads=config.decoder_attention_heads, decoder_ffn_dim=config.decoder_ffn_dim, ) ) self.layers = nn.ModuleList(layers) self.layer_norm = nn.LayerNorm(config.hidden_size) self.gradient_checkpointing = False # Initialize weights and apply final processing self.post_init() def forward( self, char_input_ids: Optional[torch.LongTensor] = None, char_count_per_id: Optional[torch.LongTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[tuple, SeamlessM4Tv2TextToUnitDecoderOutput]: r""" Args: char_input_ids (`torch.LongTensor` of shape `(batch_size, char_sequence_length)`): Character indices. The correspondence between characters and indices can be found in `char_to_id`, a dictionary in the generation configuration. char_count_per_id (`torch.Tensor` of shape `(batch_size, encoder_sequence_length)`): Number of characters per text input id. encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict # create padding mask for character lengths char_padding_mask = _compute_new_attention_mask(char_input_ids, char_count_per_id.sum(1)) # upsample hidden states according to characters sequence lengths char_hidden_states = self._hard_upsample(encoder_hidden_states, char_count_per_id) # embed char positions char_positions = self.pos_emb_alpha_char * self.embed_char_positions(inputs_embeds=char_hidden_states) # update char hidden states with positions and char embeddings char_hidden_states = self.embed_char(char_input_ids) * self.embed_scale + char_positions + char_hidden_states # predict duration log_dur_pred = self.duration_predictor(char_hidden_states, padding_mask=char_padding_mask) dur_out = torch.clamp(torch.round(torch.expm1(log_dur_pred)).long(), min=1) dur_out = dur_out.masked_fill(~char_padding_mask.bool(), 0.0) # upsample char hidden states according to predicted duration char_hidden_states = self._hard_upsample(char_hidden_states, dur_out) positions = self.pos_emb_alpha * self.embed_positions(inputs_embeds=char_hidden_states) hidden_states = char_hidden_states + positions padding_mask = _compute_new_attention_mask(hidden_states, dur_out.sum(1)) attention_mask = _prepare_4d_attention_mask(padding_mask, hidden_states.dtype) hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) # decoder layers all_hidden_states = () if output_hidden_states else None all_self_attns = () if output_attentions else None for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) if output_hidden_states: all_hidden_states += (hidden_states,) if self.training: dropout_probability = torch.rand([]) if dropout_probability < self.layerdrop: continue layer_outputs = decoder_layer( hidden_states, attention_mask=attention_mask, padding_mask=padding_mask, output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_self_attns += (layer_outputs[2],) hidden_states = self.layer_norm(hidden_states) # add hidden states from the last decoder layer if output_hidden_states: all_hidden_states += (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_self_attns, padding_mask] if v is not None) return SeamlessM4Tv2TextToUnitDecoderOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_self_attns, padding_mask=padding_mask, ) @auto_docstring( custom_intro=""" Transformer bare text-to-unit encoder-decoder. The encoder is a [`SeamlessM4Tv2Encoder`] without embeddings and the decoder is a [`SeamlessM4Tv2TextToUnitDecoder`]. """ ) class SeamlessM4Tv2TextToUnitModel(SeamlessM4Tv2PreTrainedModel): # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitModel.__init__ with SeamlessM4T->SeamlessM4Tv2, Decoder->TextToUnitDecoder def __init__( self, config: SeamlessM4Tv2Config, embed_tokens_decoder: Optional[nn.Embedding] = None, ): r""" embed_tokens_decoder (`nn.Embedding`, *optional*): input embedding of the decoder. 
""" super().__init__(config) self.encoder = SeamlessM4Tv2Encoder(config, is_t2u_encoder=True) self.decoder = SeamlessM4Tv2TextToUnitDecoder(config, embed_tokens_decoder) # Initialize weights and apply final processing self.post_init() def forward( self, input_ids: Optional[torch.LongTensor] = None, char_input_ids: Optional[torch.LongTensor] = None, char_count_per_id: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[tuple[torch.Tensor], Seq2SeqModelOutput]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) # decoder outputs consists of (dec_features, dec_hidden, dec_attn, padding_mask) decoder_outputs = self.decoder( char_input_ids=char_input_ids, char_count_per_id=char_count_per_id, encoder_hidden_states=encoder_outputs[0], output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if not return_dict: return decoder_outputs + encoder_outputs return SeamlessM4Tv2TextToUnitOutput( last_hidden_state=decoder_outputs.last_hidden_state, padding_mask=decoder_outputs.padding_mask, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) @auto_docstring( custom_intro=""" Transformer text-to-unit encoder-decoder with a language model head. The base encoder-decoder model is a [`SeamlessM4Tv2TextToUnitModel`]. """ ) class SeamlessM4Tv2TextToUnitForConditionalGeneration(SeamlessM4Tv2PreTrainedModel, GenerationMixin): _keys_to_ignore_on_load_missing = [ "vocoder", "speech_encoder", "text_encoder", "text_decoder", ] _tied_weights_keys = ["lm_head.weight"] # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitForConditionalGeneration.__init__ with SeamlessM4T->SeamlessM4Tv2 def __init__( self, config: SeamlessM4Tv2Config, embed_tokens_decoder: Optional[nn.Embedding] = None, ): r""" embed_tokens_decoder (`nn.Embedding`, *optional*): input embedding of the decoder. """ # update config - used principality for bos_token_id etc. 
config = copy.deepcopy(config) for param, val in config.to_dict().items(): if param.startswith("t2u_"): config.__setattr__(param[4:], val) super().__init__(config) self.model = SeamlessM4Tv2TextToUnitModel(config, embed_tokens_decoder) self.lm_head = nn.Linear(config.hidden_size, config.t2u_vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitForConditionalGeneration.get_encoder def get_encoder(self): return self.model.encoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitForConditionalGeneration.get_decoder def get_decoder(self): return self.model.decoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitForConditionalGeneration.get_input_embeddings def get_input_embeddings(self): return self.model.decoder.embed_tokens # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitForConditionalGeneration.set_input_embeddings def set_input_embeddings(self, value): self.model.decoder.embed_tokens = value @auto_docstring def forward( self, input_ids: Optional[torch.LongTensor] = None, char_input_ids: Optional[torch.LongTensor] = None, char_count_per_id: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[Seq2SeqLMOutput, tuple[torch.FloatTensor]]: r""" char_input_ids (`torch.LongTensor` of shape `(batch_size, char_sequence_length)`): Character indices. The correspondence between characters and indices can be found in `char_to_id`, a dictionary in the generation configuration. char_count_per_id (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Number of characters per input id. inputs_embeds (`torch.FloatTensor` of shape`(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. 
Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.model( input_ids, char_input_ids=char_input_ids, char_count_per_id=char_count_per_id, attention_mask=attention_mask, encoder_outputs=encoder_outputs, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) lm_logits = self.lm_head(outputs[0]) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() labels = labels.to(lm_logits.device) masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: output = (lm_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return SeamlessM4Tv2TextToUnitOutput( last_hidden_state=lm_logits, padding_mask=outputs.padding_mask, decoder_hidden_states=outputs.decoder_hidden_states, decoder_attentions=outputs.decoder_attentions, encoder_last_hidden_state=outputs.encoder_last_hidden_state, encoder_hidden_states=outputs.encoder_hidden_states, encoder_attentions=outputs.encoder_attentions, loss=masked_lm_loss, ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TTextToUnitForConditionalGeneration._tie_weights def _tie_weights(self) -> None: if getattr(self.config, "tie_word_embeddings", True): output_embeddings = self.get_output_embeddings() if output_embeddings is not None: self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings()) ############ VOCODER related code ################ # Copied from transformers.models.speecht5.modeling_speecht5.HifiGanResidualBlock class HifiGanResidualBlock(nn.Module): def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5), leaky_relu_slope=0.1): super().__init__() self.leaky_relu_slope = leaky_relu_slope self.convs1 = nn.ModuleList( [ nn.Conv1d( channels, channels, kernel_size, stride=1, dilation=dilation[i], padding=self.get_padding(kernel_size, dilation[i]), ) for i in range(len(dilation)) ] ) self.convs2 = nn.ModuleList( [ nn.Conv1d( channels, channels, kernel_size, stride=1, dilation=1, padding=self.get_padding(kernel_size, 1), ) for _ in range(len(dilation)) ] ) def get_padding(self, kernel_size, dilation=1): return (kernel_size * dilation - dilation) // 2 def apply_weight_norm(self): weight_norm = nn.utils.weight_norm if hasattr(nn.utils.parametrizations, "weight_norm"): weight_norm = nn.utils.parametrizations.weight_norm for layer in self.convs1: weight_norm(layer) for layer in self.convs2: weight_norm(layer) def remove_weight_norm(self): for layer in self.convs1: nn.utils.remove_weight_norm(layer) for layer in self.convs2: nn.utils.remove_weight_norm(layer) def forward(self, hidden_states): for conv1, conv2 in zip(self.convs1, self.convs2): residual = hidden_states hidden_states = nn.functional.leaky_relu(hidden_states, self.leaky_relu_slope) hidden_states = conv1(hidden_states) hidden_states = nn.functional.leaky_relu(hidden_states, self.leaky_relu_slope) hidden_states = conv2(hidden_states) hidden_states = hidden_states + residual return hidden_states class SeamlessM4Tv2VariancePredictor(nn.Module): def __init__(self, embed_dim, hidden_dim, kernel_size, var_pred_dropout): super().__init__() self.conv1 = nn.Conv1d( embed_dim, hidden_dim, 
kernel_size=kernel_size, padding="same", ) self.activation_fuction = nn.ReLU() self.ln1 = nn.LayerNorm(hidden_dim) self.dropout_module = nn.Dropout(p=var_pred_dropout) self.conv2 = nn.Conv1d( hidden_dim, hidden_dim, kernel_size=kernel_size, padding="same", ) self.ln2 = nn.LayerNorm(hidden_dim) self.proj = nn.Linear(hidden_dim, 1) def forward(self, hidden_states: Tensor, padding_mask: Optional[Tensor] = None) -> Tensor: # Input: B x T x C; Output: B x T if padding_mask is not None: hidden_states = hidden_states.masked_fill(~padding_mask.bool().unsqueeze(-1), 0.0) hidden_states = self.conv1(hidden_states.transpose(1, 2)) hidden_states = self.activation_fuction(hidden_states).transpose(1, 2) hidden_states = self.dropout_module(self.ln1(hidden_states)) if padding_mask is not None: hidden_states = hidden_states.masked_fill(~padding_mask.bool().unsqueeze(-1), 0.0) hidden_states = self.conv2(hidden_states.transpose(1, 2)) hidden_states = self.activation_fuction(hidden_states).transpose(1, 2) hidden_states = self.dropout_module(self.ln2(hidden_states)) return self.proj(hidden_states).squeeze(dim=2) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4THifiGan with SeamlessM4T->SeamlessM4Tv2 class SeamlessM4Tv2HifiGan(nn.Module): def __init__(self, config: SeamlessM4Tv2Config): super().__init__() model_in_dim = config.unit_embed_dim + config.lang_embed_dim + config.spkr_embed_dim self.leaky_relu_slope = config.leaky_relu_slope self.num_kernels = len(config.resblock_kernel_sizes) self.num_upsamples = len(config.upsample_rates) self.conv_pre = nn.Conv1d( model_in_dim, config.upsample_initial_channel, kernel_size=7, stride=1, padding=3, ) self.upsampler = nn.ModuleList() for i, (upsample_rate, kernel_size) in enumerate(zip(config.upsample_rates, config.upsample_kernel_sizes)): self.upsampler.append( nn.ConvTranspose1d( config.upsample_initial_channel // (2**i), config.upsample_initial_channel // (2 ** (i + 1)), kernel_size=kernel_size, stride=upsample_rate, padding=(kernel_size - upsample_rate) // 2, ) ) self.resblocks = nn.ModuleList() for i in range(len(self.upsampler)): channels = config.upsample_initial_channel // (2 ** (i + 1)) for kernel_size, dilation in zip(config.resblock_kernel_sizes, config.resblock_dilation_sizes): self.resblocks.append(HifiGanResidualBlock(channels, kernel_size, dilation, config.leaky_relu_slope)) self.conv_post = nn.Conv1d(channels, 1, kernel_size=7, stride=1, padding=3) def forward(self, input_embeds: torch.FloatTensor) -> torch.FloatTensor: r""" Converts a log-mel spectrogram into a speech waveform. Passing a batch of log-mel spectrograms returns a batch of speech waveforms. Passing a single, un-batched log-mel spectrogram returns a single, un-batched speech waveform. Args: spectrogram (`torch.FloatTensor`): Tensor containing the log-mel spectrograms. Can be batched and of shape `(batch_size, sequence_length, model_in_dim)`, or un-batched and of shape `(sequence_length, model_in_dim)`. Note that `model_in_dim` is the sum of `config.unit_embed_dim`, `config.lang_embed_dim` and `config.spkr_embed_dim`. Returns: `torch.FloatTensor`: Tensor containing the speech waveform. If the input spectrogram is batched, will be of shape `(batch_size, num_frames,)`. If un-batched, will be of shape `(num_frames,)`. 
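        Example:

        Rough length bookkeeping, under the assumption that `kernel_size - stride` is even for every
        upsampling stage (frame count and rates below are illustrative, not checkpoint values): each
        transposed convolution scales the time axis by its stride, so the waveform length is roughly the
        number of input frames times the product of `config.upsample_rates`:

        ```python
        >>> 50 * 5 * 4 * 4 * 2 * 2  # 50 frames with upsample_rates=(5, 4, 4, 2, 2)
        16000
        ```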
""" hidden_states = self.conv_pre(input_embeds) for i in range(self.num_upsamples): hidden_states = nn.functional.leaky_relu(hidden_states, self.leaky_relu_slope) hidden_states = self.upsampler[i](hidden_states) res_state = self.resblocks[i * self.num_kernels](hidden_states) for j in range(1, self.num_kernels): res_state += self.resblocks[i * self.num_kernels + j](hidden_states) hidden_states = res_state / self.num_kernels hidden_states = nn.functional.leaky_relu(hidden_states) hidden_states = self.conv_post(hidden_states) hidden_states = torch.tanh(hidden_states) # remove seq-len dim since this collapses to 1 waveform = hidden_states.squeeze(1) return waveform @auto_docstring( custom_intro=""" Code HiFi-GAN vocoder as described in this [repository](https://github.com/facebookresearch/speech-resynthesis). """ ) class SeamlessM4Tv2CodeHifiGan(PreTrainedModel): config: SeamlessM4Tv2Config main_input_name = "input_embeds" _no_split_modules = [] def __init__(self, config): super().__init__(config) self.pad_token_id = config.t2u_pad_token_id embed_dim = config.unit_embed_dim kernel_size = config.variance_predictor_kernel_size var_pred_dropout = config.var_pred_dropout self.dur_predictor = SeamlessM4Tv2VariancePredictor(embed_dim, embed_dim, kernel_size, var_pred_dropout) self.unit_embedding = nn.Embedding(config.unit_hifi_gan_vocab_size, config.unit_embed_dim) self.speaker_embedding = nn.Embedding(config.vocoder_num_spkrs, config.spkr_embed_dim) self.language_embedding = nn.Embedding(config.vocoder_num_langs, config.lang_embed_dim) self.hifi_gan = SeamlessM4Tv2HifiGan(config) # Initialize weights and apply final processing self.post_init() # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TCodeHifiGan._get_dur_output_lengths def _get_dur_output_lengths(self, input_ids, dur_out): """ Computes the output length after the duration layer. 
""" unit_lengths = (input_ids != self.pad_token_id).sum(1) # take care of edge cases where no padding or too many padding unit_lengths = torch.clamp(unit_lengths, 0, dur_out.shape[1] - 1) cumulative_dur_out = torch.cumsum(dur_out, dim=1) unit_lengths = cumulative_dur_out.gather(dim=1, index=unit_lengths.unsqueeze(1)).squeeze() return unit_lengths # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TCodeHifiGan._get_output_hifigan_lengths def _get_output_hifigan_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the hifigan convolutional layers """ def _conv_out_length(input_length, kernel_size, stride, pad, dilation=1): # 1D convolutional layer output length formula taken # from https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html return ( torch.div(input_length + 2 * pad - dilation * (kernel_size - 1) - 1, stride, rounding_mode="floor") + 1 ) def _transpose_conv_out_length(input_length, kernel_size, stride, pad, dilation=1): return (input_length - 1) * stride - 2 * pad + dilation * (kernel_size - 1) + 1 # conv_pre input_lengths = _conv_out_length(input_lengths, 7, 1, 3) # upsampler for i, (upsample_rate, kernel_size) in enumerate( zip(self.config.upsample_rates, self.config.upsample_kernel_sizes) ): input_lengths = _transpose_conv_out_length( input_lengths, kernel_size, upsample_rate, (kernel_size - upsample_rate) // 2 ) # resblock for i in range(len(self.config.upsample_rates)): for kernel_size, dilation in zip(self.config.resblock_kernel_sizes, self.config.resblock_dilation_sizes): for dil in dilation: input_lengths = _conv_out_length( input_lengths, kernel_size, 1, (kernel_size - 1) * dil // 2, dilation=dil ) for dil in dilation: input_lengths = _conv_out_length(input_lengths, kernel_size, 1, (kernel_size - 1) // 2, dilation=1) # conv_post input_lengths = _conv_out_length(input_lengths, 7, 1, 3) return input_lengths # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TCodeHifiGan.forward with SeamlessM4T->SeamlessM4Tv2, spkr_id->speaker_id def forward( self, input_ids: torch.LongTensor, speaker_id: torch.Tensor, lang_id: torch.Tensor ) -> tuple[torch.Tensor]: """ Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`SeamlessM4Tv2TextToUnitForConditionalGeneration`]. [What are input IDs?](../glossary#input-ids) speaker_id (`int`, *optional*): The id of the speaker used for speech synthesis. Must be lower than `config.vocoder_num_spkrs`. tgt_lang (`str`, *optional*): The language id to use as target language for translation. """ hidden_states = self.unit_embedding(input_ids).transpose(1, 2) spkr = self.speaker_embedding(speaker_id).transpose(1, 2) lang = self.language_embedding(lang_id).transpose(1, 2) log_dur_pred = self.dur_predictor(hidden_states.transpose(1, 2)) dur_out = torch.clamp(torch.round(torch.expm1(log_dur_pred)).long(), min=1) # B x C x T if hidden_states.size(0) == 1: hidden_states = torch.repeat_interleave(hidden_states, dur_out.view(-1), dim=2) else: # if batched sample, need to interleave per sample, and pad -> loss of parallelism if hidden_states.shape[0] > 1 and self.training: logger.warning( """`self.training=True` and you use batching. 
You lose parallelism during the hifigan forward pass because the samples are interleaved.""" ) hidden_states = [ torch.repeat_interleave(hidden_state, duration, dim=-1).transpose(0, 1) for (hidden_state, duration) in zip(hidden_states, dur_out) ] hidden_states = nn.utils.rnn.pad_sequence(hidden_states, batch_first=True).transpose(1, 2) spkr = spkr.repeat(1, 1, hidden_states.shape[-1]) lang = lang.repeat(1, 1, hidden_states.shape[-1]) hidden_states = torch.cat([lang, hidden_states, spkr], dim=1) hidden_states = self.hifi_gan(hidden_states) unit_lengths = self._get_dur_output_lengths(input_ids, dur_out) lengths = self._get_output_hifigan_lengths(unit_lengths) return hidden_states, lengths # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TCodeHifiGan._init_weights def _init_weights(self, module: nn.Module): """Initialize the weights.""" std = self.config.initializer_range if isinstance(module, (nn.Linear, nn.Conv1d, nn.ConvTranspose1d)): module.weight.data.normal_(mean=0.0, std=std) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=std) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.weight.data.fill_(1.0) module.bias.data.zero_() # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TCodeHifiGan.apply_weight_norm def apply_weight_norm(self): weight_norm = nn.utils.weight_norm if hasattr(nn.utils.parametrizations, "weight_norm"): weight_norm = nn.utils.parametrizations.weight_norm weight_norm(self.hifi_gan.conv_pre) for layer in self.hifi_gan.upsampler: weight_norm(layer) for layer in self.hifi_gan.resblocks: layer.apply_weight_norm() weight_norm(self.hifi_gan.conv_post) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TCodeHifiGan.remove_weight_norm def remove_weight_norm(self): nn.utils.remove_weight_norm(self.hifi_gan.conv_pre) for layer in self.hifi_gan.upsampler: nn.utils.remove_weight_norm(layer) for layer in self.hifi_gan.resblocks: layer.remove_weight_norm() nn.utils.remove_weight_norm(self.hifi_gan.conv_post) ############ WHOLE MODEL related code ################ @auto_docstring( custom_intro=""" The text-to-text SeamlessM4Tv2 Model transformer which can be used for T2TT. 
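    Example (the checkpoint name is only illustrative; any SeamlessM4Tv2 checkpoint with a matching
    processor should work the same way):

    ```python
    >>> from transformers import AutoProcessor, SeamlessM4Tv2ForTextToText

    >>> processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
    >>> model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large")

    >>> inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
    >>> output_tokens = model.generate(**inputs, tgt_lang="fra")
    >>> translated_text = processor.decode(output_tokens[0], skip_special_tokens=True)
    ```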
""" ) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToText with SeamlessM4T->SeamlessM4Tv2,SeamlessM4Tv2Tokenizer->SeamlessM4TTokenizer, SeamlessM4Tv2Processor->SeamlessM4TProcessor, SEAMLESS_M4T->SEAMLESS_M4T_V2 class SeamlessM4Tv2ForTextToText(SeamlessM4Tv2PreTrainedModel, GenerationMixin): _keys_to_ignore_on_load_missing = ["speech_encoder", "t2u_model", "vocoder"] main_input_name = "input_ids" _tied_weights_keys = [ "lm_head.weight", "text_encoder.embed_tokens.weight", "text_decoder.embed_tokens.weight", ] def __init__(self, config: SeamlessM4Tv2Config): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id) self.text_encoder = SeamlessM4Tv2Encoder(config, self.shared) self.text_decoder = SeamlessM4Tv2Decoder(config, self.shared) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() def get_encoder(self): return self.text_encoder def get_decoder(self): return self.text_decoder def get_input_embeddings(self): return self.text_decoder.embed_tokens def set_input_embeddings(self, value): self.text_encoder.embed_tokens = value self.text_decoder.embed_tokens = value self.shared = value def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.text_encoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.text_decoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.lm_head, self.shared) @auto_docstring(custom_args=SEAMLESS_M4T_V2_COMMON_CUSTOM_ARGS) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[Seq2SeqLMOutput, tuple[torch.FloatTensor]]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. 
Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ if labels is not None: if use_cache: logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") use_cache = False if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: encoder_outputs = self.text_encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) encoder_attention_mask = attention_mask # decoder outputs consists of (dec_features, past_key_values, dec_hidden, dec_attn) decoder_outputs = self.text_decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) lm_logits = self.lm_head(decoder_outputs[0]) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() labels = labels.to(lm_logits.device) masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: outputs = decoder_outputs + encoder_outputs output = (lm_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return Seq2SeqLMOutput( loss=masked_lm_loss, logits=lm_logits, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) def generate( self, input_ids=None, tgt_lang=None, generation_config=None, logits_processor=None, stopping_criteria=None, prefix_allowed_tokens_fn=None, synced_gpus=False, **kwargs, ): """ Generates sequences of token ids. <Tip warning={true}> Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the model's default generation configuration. You can override any `generation_config` by passing the corresponding parameters to generate(), e.g. `.generate(inputs, num_beams=4, do_sample=True)`. 
For an overview of generation strategies and code examples, check out the [following
        guide](./generation_strategies).

        </Tip>

        Parameters:
            input_ids (`torch.Tensor` of varying shape depending on the modality, *optional*):
                Indices of input sequence tokens in the vocabulary.

                Indices can be obtained using [`SeamlessM4TTokenizer`] or [`SeamlessM4TProcessor`]. See
                [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details.

                [What are input IDs?](../glossary#input-ids)
            tgt_lang (`str`, *optional*):
                The language to use as target language for translation.
            generation_config (`~generation.GenerationConfig`, *optional*):
                The generation configuration to be used as base parametrization for the generation call. `**kwargs`
                passed to generate matching the attributes of `generation_config` will override them. If
                `generation_config` is not provided, the default will be used, which has the following loading
                priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model
                configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s
                default values, whose documentation should be checked to parameterize generation.
            logits_processor (`LogitsProcessorList`, *optional*):
                Custom logits processors that complement the default logits processors built from arguments and
                generation config. If a logits processor is passed that is already created with the arguments or a
                generation config, an error is thrown. This feature is intended for advanced users.
            stopping_criteria (`StoppingCriteriaList`, *optional*):
                Custom stopping criteria that complement the default stopping criteria built from arguments and a
                generation config. If a stopping criterion is passed that is already created with the arguments or a
                generation config, an error is thrown. This feature is intended for advanced users.
            prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], list[int]]`, *optional*):
                If provided, this function constrains the beam search to allowed tokens only at each step. If not
                provided, no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and
                `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned
                on the batch ID `batch_id` and the previously generated tokens `input_ids`. This argument is useful for
                constrained generation conditioned on the prefix, as described in [Autoregressive Entity
                Retrieval](https://huggingface.co/papers/2010.00904).
            synced_gpus (`bool`, *optional*, defaults to `False`):
                Whether to continue running the while loop until max_length (needed to avoid deadlocking with
                `FullyShardedDataParallel` and DeepSpeed ZeRO Stage 3).
            kwargs (`dict[str, Any]`, *optional*):
                Ad hoc parametrization of `generation_config` and/or additional model-specific kwargs that will be
                forwarded to the `forward` function of the model.

        Return:
            [`~utils.ModelOutput`] or `torch.LongTensor`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True`
            or when `config.return_dict_in_generate=True`) or a `torch.FloatTensor`. The possible
            [`~utils.ModelOutput`] types are:
                - [`~generation.GenerateEncoderDecoderOutput`],
                - [`~generation.GenerateBeamEncoderDecoderOutput`]
        """
        # prepare text_decoder_input_ids
        text_decoder_input_ids = kwargs.pop("decoder_input_ids", None)
        # overwrite text_decoder_input_ids if tgt_lang is passed. The latter gets priority over decoder_input_ids.
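        # Editor's note (illustrative only, not upstream logic): when `tgt_lang` is given, the block below
        # resolves it to a single language-code token id and forces it as the first decoder token, e.g. for a
        # batch of 2 inputs and `tgt_lang="fra"` one would roughly get
        #     lang_id = self.generation_config.text_decoder_lang_to_code_id["fra"]
        #     text_decoder_input_ids = torch.tensor([[lang_id], [lang_id]], device=self.device)
        # The concrete id values depend entirely on the loaded generation config.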
if tgt_lang is not None: batch_size = len(input_ids) if input_ids is not None else len(kwargs.get("inputs_embeds")) if hasattr(self.generation_config, "text_decoder_lang_to_code_id"): # also accept __xxx__ tgt_lang = tgt_lang.replace("__", "") if tgt_lang not in self.generation_config.text_decoder_lang_to_code_id: raise ValueError( f"""`tgt_lang={tgt_lang}` is not supported by this model. Please specify a `tgt_lang` in {", ".join(self.generation_config.text_decoder_lang_to_code_id.keys())}""" ) # tgt_lang gets priority over decoder input ids text_tgt_lang_id = self.generation_config.text_decoder_lang_to_code_id.get(tgt_lang) text_decoder_input_ids = torch.tensor([[text_tgt_lang_id]] * batch_size, device=self.device) else: raise ValueError( """This model generation config doesn't have a `text_decoder_lang_to_code_id` key which maps the target language to the right token id. Make sure to load the right generation config.""" ) else: # only a warning, otherwise errors appear in the tests logger.warning( """You must either specify a `tgt_lang` or pass a correct `text_decoder_input_ids` to get a correct generation, otherwise the generation will probably make no sense.""" ) return super().generate( input_ids, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, decoder_input_ids=text_decoder_input_ids, **kwargs, ) @auto_docstring( custom_intro=""" The speech-to-text SeamlessM4Tv2 Model transformer which can be used for S2TT. """ ) class SeamlessM4Tv2ForSpeechToText(SeamlessM4Tv2PreTrainedModel, GenerationMixin): _keys_to_ignore_on_load_missing = ["text_encoder", "t2u_model", "vocoder"] main_input_name = "input_features" _tied_weights_keys = [ "lm_head.weight", "text_decoder.embed_tokens.weight", ] # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.__init__ with SeamlessM4T->SeamlessM4Tv2 def __init__(self, config: SeamlessM4Tv2Config): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id) self.speech_encoder = SeamlessM4Tv2SpeechEncoder(config) self.text_decoder = SeamlessM4Tv2Decoder(config, self.shared) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.get_encoder def get_encoder(self): return self.speech_encoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.get_decoder def get_decoder(self): return self.text_decoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.get_input_embeddings def get_input_embeddings(self): return self.text_decoder.embed_tokens # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.set_input_embeddings def set_input_embeddings(self, value): self.text_decoder.embed_tokens = value # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText._tie_weights def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.text_decoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.lm_head, self.shared) @auto_docstring(custom_args=SEAMLESS_M4T_V2_COMMON_CUSTOM_ARGS) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.forward def forward( self, input_features: Optional[torch.LongTensor] = None, 
attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[Seq2SeqLMOutput, tuple[torch.FloatTensor]]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ if labels is not None: if use_cache: logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") use_cache = False if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: encoder_outputs = self.speech_encoder( input_features=input_features, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) encoder_attention_mask = attention_mask if attention_mask is not None: sub_sampled_lengths = self._compute_sub_sample_lengths_from_attention_mask(attention_mask).to( encoder_outputs[0].device ) encoder_attention_mask = _compute_new_attention_mask( hidden_states=encoder_outputs[0], seq_lens=sub_sampled_lengths ) # decoder outputs consists of (dec_features, past_key_values, dec_hidden, dec_attn) decoder_outputs = self.text_decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) lm_logits = self.lm_head(decoder_outputs[0]) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() labels = labels.to(lm_logits.device) masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: outputs = decoder_outputs + encoder_outputs output = (lm_logits,) + outputs[1:] return 
((masked_lm_loss,) + output) if masked_lm_loss is not None else output

        return Seq2SeqLMOutput(
            loss=masked_lm_loss,
            logits=lm_logits,
            past_key_values=decoder_outputs.past_key_values,
            decoder_hidden_states=decoder_outputs.hidden_states,
            decoder_attentions=decoder_outputs.attentions,
            cross_attentions=decoder_outputs.cross_attentions,
            encoder_last_hidden_state=encoder_outputs.last_hidden_state,
            encoder_hidden_states=encoder_outputs.hidden_states,
            encoder_attentions=encoder_outputs.attentions,
        )

    # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToText.generate
    def generate(
        self,
        input_features=None,
        tgt_lang=None,
        generation_config=None,
        logits_processor=None,
        stopping_criteria=None,
        prefix_allowed_tokens_fn=None,
        synced_gpus=False,
        **kwargs,
    ):
        """
        Generates sequences of token ids.

        <Tip warning={true}>

        Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
        model's default generation configuration. You can override any `generation_config` by passing the corresponding
        parameters to generate(), e.g. `.generate(inputs, num_beams=4, do_sample=True)`.

        For an overview of generation strategies and code examples, check out the [following
        guide](./generation_strategies).

        </Tip>

        Parameters:
            input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_banks)`):
                Input audio features. This should be returned by the [`SeamlessM4TFeatureExtractor`] class or the
                [`SeamlessM4TProcessor`] class. See [`SeamlessM4TFeatureExtractor.__call__`] for details.
            tgt_lang (`str`, *optional*):
                The language to use as target language for translation.
            generation_config (`~generation.GenerationConfig`, *optional*):
                The generation configuration to be used as base parametrization for the generation call. `**kwargs`
                passed to generate matching the attributes of `generation_config` will override them. If
                `generation_config` is not provided, the default will be used, which has the following loading
                priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model
                configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s
                default values, whose documentation should be checked to parameterize generation.
            logits_processor (`LogitsProcessorList`, *optional*):
                Custom logits processors that complement the default logits processors built from arguments and
                generation config. If a logits processor is passed that is already created with the arguments or a
                generation config, an error is thrown. This feature is intended for advanced users.
            stopping_criteria (`StoppingCriteriaList`, *optional*):
                Custom stopping criteria that complement the default stopping criteria built from arguments and a
                generation config. If a stopping criterion is passed that is already created with the arguments or a
                generation config, an error is thrown. This feature is intended for advanced users.
            prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], list[int]]`, *optional*):
                If provided, this function constrains the beam search to allowed tokens only at each step. If not
                provided, no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and
                `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned
                on the batch ID `batch_id` and the previously generated tokens `input_ids`.
This argument is useful for constrained generation conditioned on the prefix, as described in [Autoregressive Entity Retrieval](https://huggingface.co/papers/2010.00904). synced_gpus (`bool`, *optional*, defaults to `False`): Whether to continue running the while loop until max_length (needed to avoid deadlocking with `FullyShardedDataParallel` and DeepSpeed ZeRO Stage 3). kwargs (`dict[str, Any]`, *optional*): Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be forwarded to the `forward` function of the model. Return: [`~utils.ModelOutput`] or `torch.LongTensor`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True` or when `config.return_dict_in_generate=True`) or a `torch.FloatTensor`. The possible [`~utils.ModelOutput`] types are: - [`~generation.GenerateEncoderDecoderOutput`], - [`~generation.GenerateBeamEncoderDecoderOutput`] """ text_decoder_input_ids = kwargs.pop("decoder_input_ids", None) # overwrite text_decoder_input_ids if tgt_lang is passed. The latter gets priority over decoder_input_ids. input_features = input_features if input_features is not None else kwargs.pop("inputs") if tgt_lang is not None: inputs = kwargs.get("input_embeds") if input_features is None else input_features inputs = ( inputs if inputs is not None else kwargs.get("encoder_outputs", {"last_hidden_state": None})["last_hidden_state"] ) batch_size = len(inputs) if hasattr(self.generation_config, "text_decoder_lang_to_code_id"): # also accept __xxx__ tgt_lang = tgt_lang.replace("__", "") if tgt_lang not in self.generation_config.text_decoder_lang_to_code_id: raise ValueError( f"""`tgt_lang={tgt_lang}` is not supported by this model. Please specify a `tgt_lang` in {", ".join(self.generation_config.text_decoder_lang_to_code_id.keys())}""" ) # tgt_lang gets priority over decoder input ids text_tgt_lang_id = self.generation_config.text_decoder_lang_to_code_id.get(tgt_lang) text_decoder_input_ids = torch.tensor([[text_tgt_lang_id]] * batch_size, device=self.device) else: raise ValueError( """This model generation config doesn't have a `text_decoder_lang_to_code_id` key which maps the target language to the right token id. Make sure to load the right generation config.""" ) else: # only a warning, otherwise errors appear in the tests logger.warning( """You must either specify a `tgt_lang` or pass a correct `text_decoder_input_ids` to get a correct generation, otherwise the generation will probably make no sense.""" ) return super().generate( input_features, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, decoder_input_ids=text_decoder_input_ids, **kwargs, ) @auto_docstring( custom_intro=""" The text-to-speech SeamlessM4Tv2 Model transformer which can be used for T2ST. 
""" ) class SeamlessM4Tv2ForTextToSpeech(SeamlessM4Tv2PreTrainedModel, GenerationMixin): _keys_to_ignore_on_load_missing = ["speech_encoder"] main_input_name = "input_ids" _tied_weights_keys = [ "lm_head.weight", "text_encoder.embed_tokens.weight", "text_decoder.embed_tokens.weight", ] # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech.__init__ with SeamlessM4T->SeamlessM4Tv2 def __init__(self, config: SeamlessM4Tv2Config): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id) self.text_encoder = SeamlessM4Tv2Encoder(config, self.shared) self.text_decoder = SeamlessM4Tv2Decoder(config, self.shared) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() self.t2u_model = SeamlessM4Tv2TextToUnitForConditionalGeneration(config) self.vocoder = SeamlessM4Tv2CodeHifiGan(config) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech.get_encoder def get_encoder(self): return self.text_encoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech.get_decoder def get_decoder(self): return self.text_decoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech.get_input_embeddings def get_input_embeddings(self): return self.text_decoder.embed_tokens # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech.set_input_embeddings def set_input_embeddings(self, value): self.text_encoder.embed_tokens = value self.text_decoder.embed_tokens = value self.shared = value # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech._tie_weights def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.text_encoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.text_decoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.lm_head, self.shared) @auto_docstring(custom_args=SEAMLESS_M4T_V2_COMMON_CUSTOM_ARGS) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForTextToSpeech.forward with SeamlessM4T->SeamlessM4Tv2 def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, cache_position: Optional[torch.Tensor] = None, ) -> Union[Seq2SeqLMOutput, tuple[torch.FloatTensor]]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. 
Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ if labels is not None: if use_cache: logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") use_cache = False if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: # if encoder_outputs is not None, it's probably used within a .generate method so no need to warn logger.warning( "This is the same forward method as `SeamlessM4Tv2ForTextToText`." "It doesn't use the text-to-unit model `SeamlessM4Tv2TextToUnitForConditionalGeneration`." "If you want to generate speech, use the `.generate` method." ) encoder_outputs = self.text_encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) encoder_attention_mask = attention_mask # decoder outputs consists of (dec_features, past_key_values, dec_hidden, dec_attn) decoder_outputs = self.text_decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, cache_position=cache_position, ) lm_logits = self.lm_head(decoder_outputs[0]) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() labels = labels.to(lm_logits.device) masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: outputs = decoder_outputs + encoder_outputs output = (lm_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return Seq2SeqLMOutput( loss=masked_lm_loss, logits=lm_logits, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) @torch.no_grad() def generate( self, input_ids: Optional[torch.Tensor] = None, return_intermediate_token_ids: Optional[bool] = None, tgt_lang: Optional[str] = None, speaker_id: Optional[int] = 0, **kwargs, ) -> Union[torch.Tensor, 
SeamlessM4Tv2GenerationOutput]: """ Generates translated audio waveforms. <Tip> This method successively calls the `.generate` function of two different sub-models. You can specify keyword arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments that will be passed to one of them. For example, calling `.generate(input_ids, num_beams=4, speech_do_sample=True)` will successively perform beam-search decoding on the text model, and multinomial beam-search sampling on the speech model. For an overview of generation strategies and code examples, check out the [following guide](./generation_strategies). </Tip> Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`SeamlessM4TTokenizer`] or [`SeamlessM4TProcessor`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) return_intermediate_token_ids (`bool`, *optional*): If `True`, also returns the intermediate generated text and unit tokens. Set to `True` if you also want to get translated text alongside the audio. tgt_lang (`str`, *optional*): The language to use as target language for translation. speaker_id (`int`, *optional*, defaults to 0): The id of the speaker used for speech synthesis. Must be lower than `config.vocoder_num_spkrs`. kwargs (*optional*): Remaining dictionary of keyword arguments that will be passed to [`GenerationMixin.generate`]. Keyword arguments are of two types: - Without a prefix, they will be entered as `**kwargs` for the `generate` method of each sub-model, except for `decoder_input_ids` which will only be passed through the text components. - With a *text_* or *speech_* prefix, they will be input for the `generate` method of the text model and speech model respectively. It has the priority over the keywords without a prefix. This means you can, for example, specify a generation strategy for one generation but not for the other. Returns: `Union[SeamlessM4Tv2GenerationOutput, tuple[Tensor]]`: - If `return_intermediate_token_ids`, returns [`SeamlessM4Tv2GenerationOutput`]. - If not `return_intermediate_token_ids`, returns a tuple composed of waveforms of shape `(batch_size, sequence_length)` and `waveform_lengths` which gives the length of each sample. """ batch_size = len(input_ids) if input_ids is not None else len(kwargs.get("inputs_embeds")) if tgt_lang is None: raise ValueError("You must specify a `tgt_lang` to generate translated speech.") else: # also accept __xxx__ tgt_lang = tgt_lang.replace("__", "") for key in ["text_decoder_lang_to_code_id", "t2u_lang_code_to_id", "vocoder_lang_code_to_id"]: lang_code_to_id = getattr(self.generation_config, key, None) if lang_code_to_id is None: raise ValueError( f"""This model generation config doesn't have a `{key}` key which maps the target language to the right token id. Make sure to load the right generation config.""" ) elif tgt_lang not in lang_code_to_id: raise ValueError( f"""`tgt_lang={tgt_lang}` is not supported by this model. Please specify a `tgt_lang` in {",".join(lang_code_to_id.keys())}. 
Note that SeamlessM4Tv2 supports more languages for text translation than for speech synthesis.""" ) kwargs_text, kwargs_speech = format_speech_generation_kwargs(kwargs) kwargs_text["output_hidden_states"] = True kwargs_text["return_dict_in_generate"] = True kwargs_text["output_scores"] = True text_decoder_input_ids = kwargs_text.get("decoder_input_ids") # overwrite text_decoder_input_ids if tgt_lang is passed. The latter gets priority over decoder_input_ids. text_tgt_lang_id = self.generation_config.text_decoder_lang_to_code_id.get(tgt_lang) text_decoder_input_ids = torch.tensor([[text_tgt_lang_id]] * batch_size, device=self.device) kwargs_text["decoder_input_ids"] = text_decoder_input_ids # first generation text_generation_output = super().generate(input_ids, **kwargs_text) sequences = text_generation_output.sequences # prepare second generation num_return_sequences = len(sequences) // batch_size attention_mask = kwargs_speech.get("attention_mask", kwargs_text.get("attention_mask", None)) if attention_mask is not None: # repeat attention mask alongside batch dimension attention_mask = torch.repeat_interleave(attention_mask, num_return_sequences, dim=0) encoder_hidden_states = text_generation_output.encoder_hidden_states[-1] # repeat attention mask alongside batch dimension encoder_hidden_states = torch.repeat_interleave(encoder_hidden_states, num_return_sequences, dim=0) # get decoder last hidden state - must do a pass through the text decoder t2u_input_embeds = self.text_decoder( input_ids=sequences[:, :-1], # Manually trim the final EOS token encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=attention_mask, ).last_hidden_state pad_token_id = self.generation_config.pad_token_id # Compute new attention mask seq_lens = (sequences[:, :-1] != pad_token_id).int().sum(1) t2u_model_attention_mask = _compute_new_attention_mask(t2u_input_embeds, seq_lens) kwargs_speech["attention_mask"] = t2u_model_attention_mask # REMOVE EOS and lang_id t2u_input_ids = sequences[:, 2:-1] # replace every other EOS t2u_input_ids = torch.masked_fill( t2u_input_ids, t2u_input_ids == self.generation_config.eos_token_id, pad_token_id ) # compute t2u_char_input_ids t2u_subwords = self._indices_to_subwords(t2u_input_ids) t2u_char_count_per_id = self._count_character_length_in_subword( t2u_input_ids, t2u_subwords, pad_token_id=pad_token_id ) # Add pads for lang, EOS tokens as per NLLB "source" tokenizer mode. pad_zero = t2u_char_count_per_id.new_zeros((t2u_char_count_per_id.shape[0], 1)) t2u_char_count_per_id = torch.cat([pad_zero, t2u_char_count_per_id, pad_zero], dim=1) t2u_char_input_ids = self._get_char_input_ids( t2u_input_ids, t2u_subwords, t2u_char_count_per_id, pad_token_id=pad_token_id ) # second pass t2u_output = self.t2u_model( inputs_embeds=t2u_input_embeds, char_input_ids=t2u_char_input_ids, char_count_per_id=t2u_char_count_per_id, **kwargs_speech, ) t2u_logits = t2u_output[0] padding_mask = t2u_output[1].bool() # The text-to-unit model is non auto-regressive. 
We keep the ability to use sampling with temperature temperature = kwargs_speech.get("temperature", None) if (temperature is None or temperature == 1.0) or not kwargs_speech.get("do_sample", False): unit_ids = t2u_logits.argmax(dim=-1) else: t2u_logits = t2u_logits / temperature # apply softmax probs = nn.functional.softmax(t2u_logits, dim=-1) # reshape to 2D: (batch_size, seq_len, t2u_vocab_size) -> (batch_size*seq_len, t2u_vocab_size) probs = probs.reshape((-1, probs.shape[2])) # multinomial then reshape : (batch_size*seq_len)-> (batch_size,seq_len) unit_ids = torch.multinomial(probs, num_samples=1).view(t2u_logits.shape[0], -1) output_unit_ids = unit_ids.detach().clone() replace_mask = (unit_ids == self.config.t2u_eos_token_id) | (~padding_mask) # replace eos per pad unit_ids = unit_ids.masked_fill(replace_mask, self.config.t2u_pad_token_id) # offset of control symbols unit_ids = torch.where( unit_ids == self.config.t2u_pad_token_id, unit_ids, unit_ids - self.config.vocoder_offset ) vocoder_tgt_lang_id = self.generation_config.vocoder_lang_code_to_id.get(tgt_lang) vocoder_tgt_lang_id = torch.tensor([[vocoder_tgt_lang_id]] * len(unit_ids), device=self.device) speaker_id = torch.tensor([[speaker_id]] * len(unit_ids), device=self.device) waveform, waveform_lengths = self.vocoder( input_ids=unit_ids, speaker_id=speaker_id, lang_id=vocoder_tgt_lang_id ) if return_intermediate_token_ids: return SeamlessM4Tv2GenerationOutput( waveform=waveform, waveform_lengths=waveform_lengths, sequences=sequences, unit_sequences=output_unit_ids, ) return waveform, waveform_lengths @auto_docstring( custom_intro=""" The speech-to-speech SeamlessM4Tv2 Model transformer which can be used for S2ST. """ ) class SeamlessM4Tv2ForSpeechToSpeech(SeamlessM4Tv2PreTrainedModel, GenerationMixin): _keys_to_ignore_on_load_missing = ["text_encoder"] main_input_name = "input_features" _tied_weights_keys = [ "lm_head.weight", "text_decoder.embed_tokens.weight", ] # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech.__init__ with SeamlessM4T->SeamlessM4Tv2 def __init__(self, config): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id) self.speech_encoder = SeamlessM4Tv2SpeechEncoder(config) self.text_decoder = SeamlessM4Tv2Decoder(config, self.shared) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() self.t2u_model = SeamlessM4Tv2TextToUnitForConditionalGeneration(config) self.vocoder = SeamlessM4Tv2CodeHifiGan(config) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech.get_encoder def get_encoder(self): return self.speech_encoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech.get_decoder def get_decoder(self): return self.text_decoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech.get_input_embeddings def get_input_embeddings(self): return self.text_decoder.embed_tokens # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech.set_input_embeddings def set_input_embeddings(self, value): self.text_decoder.embed_tokens = value # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech._tie_weights def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.text_decoder.embed_tokens, 
self.shared) self._tie_or_clone_weights(self.lm_head, self.shared) @auto_docstring(custom_args=SEAMLESS_M4T_V2_COMMON_CUSTOM_ARGS) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TForSpeechToSpeech.forward with SeamlessM4T->SeamlessM4Tv2 def forward( self, input_features: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[Seq2SeqLMOutput, tuple[torch.FloatTensor]]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ if labels is not None: if use_cache: logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") use_cache = False if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if encoder_outputs is None: # if encoder_outputs is not None, it's probably used within a .generate method so no need to warn logger.warning( "This is the same forward method as `SeamlessM4Tv2ForSpeechToText`. It doesn't use `self.t2u_model`." "If you want to generate speech, use the `generate` method." 
) encoder_outputs = self.speech_encoder( input_features=input_features, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) encoder_attention_mask = attention_mask if attention_mask is not None: sub_sampled_lengths = self._compute_sub_sample_lengths_from_attention_mask(attention_mask).to( encoder_outputs[0].device ) encoder_attention_mask = _compute_new_attention_mask( hidden_states=encoder_outputs[0], seq_lens=sub_sampled_lengths ) # decoder outputs consists of (dec_features, past_key_values, dec_hidden, dec_attn) decoder_outputs = self.text_decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) lm_logits = self.lm_head(decoder_outputs[0]) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() labels = labels.to(lm_logits.device) masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: outputs = decoder_outputs + encoder_outputs output = (lm_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return Seq2SeqLMOutput( loss=masked_lm_loss, logits=lm_logits, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) @torch.no_grad() def generate( self, input_features: Optional[torch.Tensor] = None, return_intermediate_token_ids: Optional[bool] = None, tgt_lang: Optional[str] = None, speaker_id: Optional[int] = 0, **kwargs, ) -> Union[torch.Tensor, SeamlessM4Tv2GenerationOutput]: """ Generates translated audio waveforms. <Tip> This method successively calls the `.generate` function of two different sub-models. You can specify keyword arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments that will be passed to one of them. For example, calling `.generate(input_features, num_beams=4, speech_do_sample=True)` will successively perform beam-search decoding on the text model, and multinomial beam-search sampling on the speech model. For an overview of generation strategies and code examples, check out the [following guide](./generation_strategies). </Tip> Args: input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_banks)`): Input audio features. This should be returned by the [`SeamlessM4TFeatureExtractor`] class or the [`SeamlessM4TProcessor`] class. See [`SeamlessM4TFeatureExtractor.__call__`] for details. 
return_intermediate_token_ids (`bool`, *optional*): If `True`, also returns the intermediate generated text and unit tokens. Set to `True` if you also want to get translated text alongside the audio. tgt_lang (`str`, *optional*): The language to use as target language for translation. speaker_id (`int`, *optional*, defaults to 0): The id of the speaker used for speech synthesis. Must be lower than `config.vocoder_num_spkrs`. kwargs (*optional*): Remaining dictionary of keyword arguments that will be passed to [`GenerationMixin.generate`]. Keyword arguments are of two types: - Without a prefix, they will be entered as `**kwargs` for the `generate` method of each sub-model, except for `decoder_input_ids` which will only be passed through the text components. - With a *text_* or *speech_* prefix, they will be input for the `generate` method of the text model and speech model respectively. It has the priority over the keywords without a prefix. This means you can, for example, specify a generation strategy for one generation but not for the other. Returns: `Union[SeamlessM4Tv2GenerationOutput, tuple[Tensor]]`: - If `return_intermediate_token_ids`, returns [`SeamlessM4Tv2GenerationOutput`]. - If not `return_intermediate_token_ids`, returns a tuple composed of waveforms of shape `(batch_size, sequence_length)` and `waveform_lengths` which gives the length of each sample. """ batch_size = len(input_features) if input_features is not None else len(kwargs.get("inputs_embeds")) if tgt_lang is None: raise ValueError("You must specify a `tgt_lang` to generate translated speech.") else: # also accept __xxx__ tgt_lang = tgt_lang.replace("__", "") for key in ["text_decoder_lang_to_code_id", "t2u_lang_code_to_id", "vocoder_lang_code_to_id"]: lang_code_to_id = getattr(self.generation_config, key, None) if lang_code_to_id is None: raise ValueError( f"""This model generation config doesn't have a `{key}` key which maps the target language to the right token id. Make sure to load the right generation config.""" ) elif tgt_lang not in lang_code_to_id: raise ValueError( f"""`tgt_lang={tgt_lang}` is not supported by this model. Please specify a `tgt_lang` in {",".join(lang_code_to_id.keys())}. Note that SeamlessM4Tv2 supports more languages for text translation than for speech synthesis.""" ) kwargs_text, kwargs_speech = format_speech_generation_kwargs(kwargs) kwargs_text["output_hidden_states"] = True kwargs_text["return_dict_in_generate"] = True kwargs_text["output_scores"] = True text_decoder_input_ids = kwargs_text.get("decoder_input_ids") # overwrite text_decoder_input_ids if tgt_lang is passed. The latter gets priority over decoder_input_ids. 
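        # Editor's note (descriptive summary, not upstream code): the rest of this method implements the
        # speech-to-speech pipeline in three stages:
        #   1. `super().generate(...)` translates the input audio into text tokens (speech encoder + text decoder);
        #   2. a pass through `self.text_decoder` turns those tokens into embeddings that `self.t2u_model`
        #      maps to discrete acoustic units;
        #   3. `self.vocoder` (code HiFi-GAN) converts the unit sequence into the output waveform.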
text_tgt_lang_id = self.generation_config.text_decoder_lang_to_code_id.get(tgt_lang) text_decoder_input_ids = torch.tensor([[text_tgt_lang_id]] * batch_size, device=self.device) kwargs_text["decoder_input_ids"] = text_decoder_input_ids # first generation text_generation_output = super().generate(input_features, **kwargs_text) sequences = text_generation_output.sequences # prepare second generation num_return_sequences = len(sequences) // batch_size attention_mask = kwargs_speech.get("attention_mask", kwargs_text.get("attention_mask", None)) # get last_hidden_state from encoder encoder_hidden_states = self.speech_encoder(input_features=input_features, attention_mask=attention_mask)[0] # input modality = speech so new attention mask for the decoder if attention_mask is not None: sub_sampled_lengths = self._compute_sub_sample_lengths_from_attention_mask(attention_mask).to( encoder_hidden_states.device ) attention_mask = _compute_new_attention_mask( hidden_states=encoder_hidden_states, seq_lens=sub_sampled_lengths ) # repeat attention mask alongside batch dimension attention_mask = torch.repeat_interleave(attention_mask, num_return_sequences, dim=0) # repeat attention mask alongside batch dimension encoder_hidden_states = torch.repeat_interleave(encoder_hidden_states, num_return_sequences, dim=0) # get decoder last hidden state - must do a pass through the text decoder t2u_input_embeds = self.text_decoder( input_ids=sequences[:, :-1], # Manually trim the final EOS token encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=attention_mask, ).last_hidden_state pad_token_id = self.generation_config.pad_token_id # Compute new attention mask seq_lens = (sequences[:, :-1] != pad_token_id).int().sum(1) t2u_model_attention_mask = _compute_new_attention_mask(t2u_input_embeds, seq_lens) kwargs_speech["attention_mask"] = t2u_model_attention_mask # REMOVE EOS and lang_id t2u_input_ids = sequences[:, 2:-1] # replace every other EOS t2u_input_ids = torch.masked_fill( t2u_input_ids, t2u_input_ids == self.generation_config.eos_token_id, pad_token_id ) # compute t2u_char_input_ids t2u_subwords = self._indices_to_subwords(t2u_input_ids) t2u_char_count_per_id = self._count_character_length_in_subword( t2u_input_ids, t2u_subwords, pad_token_id=pad_token_id ) # Add pads for lang, EOS tokens as per NLLB "source" tokenizer mode. pad_zero = t2u_char_count_per_id.new_zeros((t2u_char_count_per_id.shape[0], 1)) t2u_char_count_per_id = torch.cat([pad_zero, t2u_char_count_per_id, pad_zero], dim=1) t2u_char_input_ids = self._get_char_input_ids( t2u_input_ids, t2u_subwords, t2u_char_count_per_id, pad_token_id=pad_token_id ) # second pass t2u_output = self.t2u_model( inputs_embeds=t2u_input_embeds, char_input_ids=t2u_char_input_ids, char_count_per_id=t2u_char_count_per_id, **kwargs_speech, ) t2u_logits = t2u_output[0] padding_mask = t2u_output[1].bool() # The text-to-unit model is non auto-regressive. 
We keep the ability to use sampling with temperature temperature = kwargs_speech.get("temperature", None) if (temperature is None or temperature == 1.0) or not kwargs_speech.get("do_sample", False): unit_ids = t2u_logits.argmax(dim=-1) else: t2u_logits = t2u_logits / temperature # apply softmax probs = nn.functional.softmax(t2u_logits, dim=-1) # reshape to 2D: (batch_size, seq_len, t2u_vocab_size) -> (batch_size*seq_len, t2u_vocab_size) probs = probs.reshape((-1, probs.shape[2])) # multinomial then reshape : (batch_size*seq_len)-> (batch_size,seq_len) unit_ids = torch.multinomial(probs, num_samples=1).view(t2u_logits.shape[0], -1) output_unit_ids = unit_ids.detach().clone() replace_mask = (unit_ids == self.config.t2u_eos_token_id) | (~padding_mask) # replace eos per pad unit_ids = unit_ids.masked_fill(replace_mask, self.config.t2u_pad_token_id) # offset of control symbols unit_ids = torch.where( unit_ids == self.config.t2u_pad_token_id, unit_ids, unit_ids - self.config.vocoder_offset ) vocoder_tgt_lang_id = self.generation_config.vocoder_lang_code_to_id.get(tgt_lang) vocoder_tgt_lang_id = torch.tensor([[vocoder_tgt_lang_id]] * len(unit_ids), device=self.device) speaker_id = torch.tensor([[speaker_id]] * len(unit_ids), device=self.device) waveform, waveform_lengths = self.vocoder( input_ids=unit_ids, speaker_id=speaker_id, lang_id=vocoder_tgt_lang_id ) if return_intermediate_token_ids: return SeamlessM4Tv2GenerationOutput( waveform=waveform, waveform_lengths=waveform_lengths, sequences=sequences, unit_sequences=output_unit_ids, ) return waveform, waveform_lengths @auto_docstring( custom_intro=""" The original SeamlessM4Tv2 Model transformer which can be used for every tasks available (S2ST, S2TT, T2TT, T2ST). """ ) class SeamlessM4Tv2Model(SeamlessM4Tv2PreTrainedModel, GenerationMixin): _tied_weights_keys = [ "lm_head.weight", "text_encoder.embed_tokens.weight", "text_decoder.embed_tokens.weight", ] # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel.__init__ with SeamlessM4T->SeamlessM4Tv2 def __init__(self, config, current_modality="text"): r""" current_modality (`str`, *optional*, defaults to `"text"`): Default modality. Used to initialize the model. """ super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id) self.text_encoder = SeamlessM4Tv2Encoder(config, self.shared) self.speech_encoder = SeamlessM4Tv2SpeechEncoder(config) self.text_decoder = SeamlessM4Tv2Decoder(config, self.shared) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) # Initialize weights and apply final processing self.post_init() self.current_modality = current_modality if current_modality == "speech": self.main_input_name = "input_features" # these models already call post_init in their initialization self.t2u_model = SeamlessM4Tv2TextToUnitForConditionalGeneration(config) self.vocoder = SeamlessM4Tv2CodeHifiGan(config) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel.set_modality def set_modality(self, modality="text"): if modality == "text": self.main_input_name = "input_ids" self.current_modality = "text" elif modality == "speech": self.main_input_name = "input_features" self.current_modality = "speech" else: raise ValueError(f"`modality={modality}` is not a valid modality. 
It must be `text` or `speech`.") # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel.get_encoder def get_encoder(self): if self.current_modality == "text": return self.text_encoder else: return self.speech_encoder # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel.get_input_embeddings def get_input_embeddings(self): return self.text_decoder.embed_tokens # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel.set_input_embeddings def set_input_embeddings(self, value): self.text_encoder.embed_tokens = value self.text_decoder.embed_tokens = value self.shared = value # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel._tie_weights def _tie_weights(self): if self.config.tie_word_embeddings: self._tie_or_clone_weights(self.text_encoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.text_decoder.embed_tokens, self.shared) self._tie_or_clone_weights(self.lm_head, self.shared) @auto_docstring(custom_args=SEAMLESS_M4T_V2_COMMON_CUSTOM_ARGS) # Copied from transformers.models.seamless_m4t.modeling_seamless_m4t.SeamlessM4TModel.forward with SeamlessM4T->SeamlessM4Tv2 def forward( self, input_ids: Optional[torch.LongTensor] = None, input_features: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, encoder_outputs: Optional[tuple[tuple[torch.FloatTensor]]] = None, past_key_values: Optional[tuple[tuple[torch.FloatTensor]]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, decoder_inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, **kwargs, ) -> Union[Seq2SeqLMOutput, tuple[torch.FloatTensor]]: r""" labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict if labels is not None: if use_cache: logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") use_cache = False if decoder_input_ids is None and decoder_inputs_embeds is None: decoder_input_ids = shift_tokens_right( labels, self.config.pad_token_id, self.config.decoder_start_token_id ) if input_ids is None and input_features is None and inputs_embeds is None and encoder_outputs is None: raise ValueError( "`input_ids`,`input_features`, `inputs_embeds` and `encoder_outputs` are all empty. Make sure at least one of them is not." ) elif input_features is not None: if input_ids is not None: logger.warning( "`input_ids` is not `None` but `input_features` has been given." "`input_features` will be used in priority through the `speech_encoder`. 
" "Make sure that `input_features` and `input_ids` are mutually exclusive." ) if inputs_embeds is not None: logger.warning( "`inputs_embeds` is not `None` but `input_features` has been given." "`input_features` will be used in priority through `speech_encoder`. " "`inputs_embeds` will be ignored." ) # if encoder_outputs is not None, it's probably used within a .generate method so no need to warn logger.warning( "This calls the same method `forward` as `SeamlessM4Tv2ForTextToText` and `SeamlessM4Tv2ForSpeechToText`" "depending on the input modality. If you want to generate speech, use the `generate` method." ) self.set_modality("speech") encoder_outputs = self.speech_encoder( input_features=input_features, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) elif input_ids is not None or inputs_embeds is not None: # if encoder_outputs is not None, it's probably used within a .generate method so no need to warn logger.warning( "This calls the same method `forward` as `SeamlessM4Tv2ForTextToText` and `SeamlessM4Tv2ForSpeechToText`" "depending on the input modality. If you want to generate speech, use the `generate` method." ) self.set_modality("text") encoder_outputs = self.text_encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) encoder_attention_mask = attention_mask # input modality = speech so new attention mask if self.current_modality == "speech" and attention_mask is not None: sub_sampled_lengths = self._compute_sub_sample_lengths_from_attention_mask(attention_mask).to( encoder_outputs[0].device ) encoder_attention_mask = _compute_new_attention_mask( hidden_states=encoder_outputs[0], seq_lens=sub_sampled_lengths ) # decoder outputs consists of (dec_features, past_key_values, dec_hidden, dec_attn) decoder_outputs = self.text_decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=encoder_attention_mask, past_key_values=past_key_values, inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) lm_logits = self.lm_head(decoder_outputs[0]) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() labels = labels.to(lm_logits.device) masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: outputs = decoder_outputs + encoder_outputs output = (lm_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return Seq2SeqLMOutput( loss=masked_lm_loss, logits=lm_logits, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, 
encoder_attentions=encoder_outputs.attentions,
        )

    @torch.no_grad()
    def generate(
        self,
        input_ids: Optional[torch.Tensor] = None,
        input_features: Optional[torch.Tensor] = None,
        return_intermediate_token_ids: Optional[bool] = None,
        tgt_lang: Optional[str] = None,
        speaker_id: Optional[int] = 0,
        generate_speech: Optional[bool] = True,
        **kwargs,
    ) -> Union[torch.Tensor, SeamlessM4Tv2GenerationOutput]:
        """
        Generates translated token ids and/or translated audio waveforms.

        <Tip>

        This method successively calls the `.generate` function of two different sub-models. You can specify keyword
        arguments at two different levels: general arguments that will be passed to both models, or prefixed arguments
        that will be passed to one of them.

        For example, calling `.generate(input_ids=input_ids, num_beams=4, speech_do_sample=True)` will successively
        perform beam-search decoding on the text model, and multinomial beam-search sampling on the speech model.

        For an overview of generation strategies and code examples, check out the [following
        guide](./generation_strategies).

        </Tip>

        Args:
            input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
                Indices of input sequence tokens in the vocabulary.

                Indices can be obtained using [`SeamlessM4TTokenizer`] or [`SeamlessM4TProcessor`]. See
                [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details.

                [What are input IDs?](../glossary#input-ids)
            input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length, num_banks)`, *optional*):
                Input audio features. This should be returned by the [`SeamlessM4TFeatureExtractor`] class or the
                [`SeamlessM4TProcessor`] class. See [`SeamlessM4TFeatureExtractor.__call__`] for details.
            return_intermediate_token_ids (`bool`, *optional*):
                If `True`, also returns the intermediate generated text and unit tokens. Set to `True` if you also want
                to get translated text alongside the audio. Note that if `generate_speech=False`, this parameter will
                be ignored.
            tgt_lang (`str`, *optional*):
                The language to use as target language for translation.
            speaker_id (`int`, *optional*, defaults to 0):
                The id of the speaker used for speech synthesis. Must be lower than `config.vocoder_num_spkrs`.
            generate_speech (`bool`, *optional*, defaults to `True`):
                If `False`, will only return the text tokens and won't generate speech.
            kwargs (*optional*):
                Remaining dictionary of keyword arguments that will be passed to [`GenerationMixin.generate`]. Keyword
                arguments are of two types:

                    - Without a prefix, they will be entered as `**kwargs` for the `generate` method of each sub-model,
                    except for `decoder_input_ids` which will only be passed through the text components.
                    - With a *text_* or *speech_* prefix, they will be input for the `generate` method of the
                    text model and speech model respectively. It takes priority over the keywords without a prefix.

                    This means you can, for example, specify a generation strategy for one generation but not for the
                    other.

        Returns:
            `Union[SeamlessM4Tv2GenerationOutput, tuple[Tensor], ModelOutput]`:
            - If `generate_speech` and `return_intermediate_token_ids`, returns [`SeamlessM4Tv2GenerationOutput`].
            - If `generate_speech` and not `return_intermediate_token_ids`, returns a tuple composed of waveforms of
              shape `(batch_size, sequence_length)` and `waveform_lengths` which gives the length of each sample.
            - If `generate_speech=False`, it will return `ModelOutput`.
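        Example (illustrative sketch added during editing, not part of the upstream docstring; the checkpoint
        name and language codes are assumptions):

        ```python
        >>> from transformers import AutoProcessor, SeamlessM4Tv2Model

        >>> processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
        >>> model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

        >>> # T2ST: translate English text and synthesize French speech
        >>> text_inputs = processor(text="Hello, my dog is cute.", src_lang="eng", return_tensors="pt")
        >>> waveform, waveform_lengths = model.generate(**text_inputs, tgt_lang="fra")

        >>> # T2TT only: skip speech synthesis and return the text generation output
        >>> text_only = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
        ```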
""" if input_ids is None and input_features is None and kwargs.get("inputs_embeds") is None: raise ValueError( "`input_ids`,`input_features` and `inputs_embeds` are all empty. Make sure at least one of them is not." ) if generate_speech and tgt_lang is None: raise ValueError("You must specify a `tgt_lang` to generate translated speech.") if tgt_lang is not None: # also accept __xxx__ tgt_lang = tgt_lang.replace("__", "") if generate_speech: keys_to_check = ["text_decoder_lang_to_code_id", "t2u_lang_code_to_id", "vocoder_lang_code_to_id"] else: keys_to_check = ["text_decoder_lang_to_code_id"] for key in keys_to_check: lang_code_to_id = getattr(self.generation_config, key, None) if lang_code_to_id is None: raise ValueError( f"""This model generation config doesn't have a `{key}` key which maps the target language to the right token id. Make sure to load the right generation config.""" ) elif tgt_lang not in lang_code_to_id: raise ValueError( f"""`tgt_lang={tgt_lang}` is not supported by this model. Please specify a `tgt_lang` in {",".join(lang_code_to_id.keys())}. Note that SeamlessM4Tv2 supports more languages for text translation than for speech synthesis.""" ) batch_size = ( len(input_features) if input_features is not None else (len(input_ids) if input_ids is not None else len(kwargs.get("inputs_embeds"))) ) kwargs_text, kwargs_speech = format_speech_generation_kwargs(kwargs) kwargs_text["output_hidden_states"] = True kwargs_text["return_dict_in_generate"] = True kwargs_text["output_scores"] = True text_decoder_input_ids = kwargs_text.get("decoder_input_ids") # overwrite text_decoder_input_ids if tgt_lang is passed. The latter gets priority over decoder_input_ids. if tgt_lang is not None: # tgt_lang gets priority over decoder input ids text_tgt_lang_id = self.generation_config.text_decoder_lang_to_code_id.get(tgt_lang) text_decoder_input_ids = torch.tensor([[text_tgt_lang_id]] * batch_size, device=self.device) kwargs_text["decoder_input_ids"] = text_decoder_input_ids # first generation if input_features is not None: self.set_modality("speech") if input_ids is not None: logger.warning( "`input_features` and `input_ids` are both non empty. `input_features` will be used in priority " "through the speech encoder. Make sure `input_features=None` if you want to use the text encoder." 
) text_generation_output = super().generate(input_features=input_features, **kwargs_text) else: self.set_modality("text") text_generation_output = super().generate(input_ids=input_ids, input_features=None, **kwargs_text) sequences = text_generation_output.sequences if not generate_speech: return text_generation_output # prepare second generation num_return_sequences = len(sequences) // batch_size attention_mask = kwargs_speech.get("attention_mask", kwargs_text.get("attention_mask", None)) # get encoder last hidden states if self.current_modality == "speech": # get last_hidden_state from encoder - must do a pass through the speech encoder encoder_hidden_states = self.speech_encoder( input_features=input_features, attention_mask=attention_mask ).last_hidden_state # input modality = speech so new attention mask for the decoder if attention_mask is not None: sub_sampled_lengths = self._compute_sub_sample_lengths_from_attention_mask(attention_mask).to( encoder_hidden_states.device ) attention_mask = _compute_new_attention_mask( hidden_states=encoder_hidden_states, seq_lens=sub_sampled_lengths ) else: encoder_hidden_states = text_generation_output.encoder_hidden_states[-1] if attention_mask is not None: # repeat attention mask alongside batch dimension attention_mask = torch.repeat_interleave(attention_mask, num_return_sequences, dim=0) # repeat attention mask alongside batch dimension encoder_hidden_states = torch.repeat_interleave(encoder_hidden_states, num_return_sequences, dim=0) # get decoder last hidden state - must do a pass through the text decoder t2u_input_embeds = self.text_decoder( input_ids=sequences[:, :-1], # Manually trim the final EOS token encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=attention_mask, ).last_hidden_state pad_token_id = self.generation_config.pad_token_id # Compute new attention mask seq_lens = (sequences[:, :-1] != pad_token_id).int().sum(1) t2u_model_attention_mask = _compute_new_attention_mask(t2u_input_embeds, seq_lens) kwargs_speech["attention_mask"] = t2u_model_attention_mask # REMOVE EOS and lang_id t2u_input_ids = sequences[:, 2:-1] # replace every other EOS t2u_input_ids = torch.masked_fill( t2u_input_ids, t2u_input_ids == self.generation_config.eos_token_id, pad_token_id ) # compute t2u_char_input_ids t2u_subwords = self._indices_to_subwords(t2u_input_ids) t2u_char_count_per_id = self._count_character_length_in_subword( t2u_input_ids, t2u_subwords, pad_token_id=pad_token_id ) # Add pads for lang, EOS tokens as per NLLB "source" tokenizer mode. pad_zero = t2u_char_count_per_id.new_zeros((t2u_char_count_per_id.shape[0], 1)) t2u_char_count_per_id = torch.cat([pad_zero, t2u_char_count_per_id, pad_zero], dim=1) t2u_char_input_ids = self._get_char_input_ids( t2u_input_ids, t2u_subwords, t2u_char_count_per_id, pad_token_id=pad_token_id ) # second pass t2u_output = self.t2u_model( inputs_embeds=t2u_input_embeds, char_input_ids=t2u_char_input_ids, char_count_per_id=t2u_char_count_per_id, **kwargs_speech, ) t2u_logits = t2u_output[0] padding_mask = t2u_output[1].bool() # The text-to-unit model is non auto-regressive. 
We keep the ability to use sampling with temperature temperature = kwargs_speech.get("temperature", None) if (temperature is None or temperature == 1.0) or not kwargs_speech.get("do_sample", False): unit_ids = t2u_logits.argmax(dim=-1) else: t2u_logits = t2u_logits / temperature # apply softmax probs = nn.functional.softmax(t2u_logits, dim=-1) # reshape to 2D: (batch_size, seq_len, t2u_vocab_size) -> (batch_size*seq_len, t2u_vocab_size) probs = probs.reshape((-1, probs.shape[2])) # multinomial then reshape : (batch_size*seq_len)-> (batch_size,seq_len) unit_ids = torch.multinomial(probs, num_samples=1).view(t2u_logits.shape[0], -1) output_unit_ids = unit_ids.detach().clone() replace_mask = (unit_ids == self.config.t2u_eos_token_id) | (~padding_mask) # replace eos per pad unit_ids = unit_ids.masked_fill(replace_mask, self.config.t2u_pad_token_id) # offset of control symbols unit_ids = torch.where( unit_ids == self.config.t2u_pad_token_id, unit_ids, unit_ids - self.config.vocoder_offset ) vocoder_tgt_lang_id = self.generation_config.vocoder_lang_code_to_id.get(tgt_lang) vocoder_tgt_lang_id = torch.tensor([[vocoder_tgt_lang_id]] * len(unit_ids), device=self.device) speaker_id = torch.tensor([[speaker_id]] * len(unit_ids), device=self.device) waveform, waveform_lengths = self.vocoder( input_ids=unit_ids, speaker_id=speaker_id, lang_id=vocoder_tgt_lang_id ) if return_intermediate_token_ids: return SeamlessM4Tv2GenerationOutput( waveform=waveform, waveform_lengths=waveform_lengths, sequences=sequences, unit_sequences=output_unit_ids, ) return waveform, waveform_lengths __all__ = [ "SeamlessM4Tv2ForTextToSpeech", "SeamlessM4Tv2ForSpeechToSpeech", "SeamlessM4Tv2ForTextToText", "SeamlessM4Tv2ForSpeechToText", "SeamlessM4Tv2Model", "SeamlessM4Tv2PreTrainedModel", ]
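To make the two-stage `generate` flow above concrete, here is a minimal usage sketch. The checkpoint id `facebook/seamless-m4t-v2-large`, the language codes, and the specific prefixed kwargs (`text_num_beams`, `speech_do_sample`, `speech_temperature`) are illustrative choices; the two-level kwargs routing and the `generate_speech=False` shortcut follow the logic documented in `generate` above.

```python
import torch

from transformers import AutoProcessor, SeamlessM4Tv2Model

checkpoint = "facebook/seamless-m4t-v2-large"  # illustrative checkpoint id
processor = AutoProcessor.from_pretrained(checkpoint)
model = SeamlessM4Tv2Model.from_pretrained(checkpoint)

# Un-prefixed kwargs go to both sub-generations; `text_*` / `speech_*` kwargs go to only one of them.
text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
with torch.no_grad():
    waveform, waveform_lengths = model.generate(
        **text_inputs,
        tgt_lang="fra",
        speaker_id=0,
        text_num_beams=4,        # beam search for the text-decoding stage only
        speech_do_sample=True,   # multinomial sampling for the non-autoregressive t2u stage...
        speech_temperature=0.6,  # ...which only applies when a temperature != 1.0 is set (see above)
    )

# Text-only translation: skips the t2u model and the vocoder entirely.
text_out = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(text_out.sequences[0].tolist(), skip_special_tokens=True))
```

Because prefixed keys take priority, `text_num_beams=4` only affects the first (text) generation while the speech stage keeps its own settings.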
transformers/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py/0
{ "file_path": "transformers/src/transformers/models/seamless_m4t_v2/modeling_seamless_m4t_v2.py", "repo_id": "transformers", "token_count": 91631 }
539
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert SegGPT checkpoints from the original repository. URL: https://github.com/baaivision/Painter/tree/main/SegGPT """ import argparse import requests import torch from PIL import Image from transformers import SegGptConfig, SegGptForImageSegmentation, SegGptImageProcessor from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(__name__) # here we list all keys to be renamed (original name on the left, our name on the right) def create_rename_keys(config): rename_keys = [] # fmt: off # rename embedding and its parameters rename_keys.append(("patch_embed.proj.weight", "model.embeddings.patch_embeddings.projection.weight")) rename_keys.append(("patch_embed.proj.bias", "model.embeddings.patch_embeddings.projection.bias")) rename_keys.append(("mask_token", "model.embeddings.mask_token")) rename_keys.append(("segment_token_x", "model.embeddings.segment_token_input")) rename_keys.append(("segment_token_y", "model.embeddings.segment_token_prompt")) rename_keys.append(("type_token_cls", "model.embeddings.type_token_semantic")) rename_keys.append(("type_token_ins", "model.embeddings.type_token_instance")) rename_keys.append(("pos_embed", "model.embeddings.position_embeddings")) # rename decoder and other rename_keys.append(("norm.weight", "model.encoder.layernorm.weight")) rename_keys.append(("norm.bias", "model.encoder.layernorm.bias")) rename_keys.append(("decoder_embed.weight", "decoder.decoder_embed.weight")) rename_keys.append(("decoder_embed.bias", "decoder.decoder_embed.bias")) rename_keys.append(("decoder_pred.0.weight", "decoder.decoder_pred.conv.weight")) rename_keys.append(("decoder_pred.0.bias", "decoder.decoder_pred.conv.bias")) rename_keys.append(("decoder_pred.1.weight", "decoder.decoder_pred.layernorm.weight")) rename_keys.append(("decoder_pred.1.bias", "decoder.decoder_pred.layernorm.bias")) rename_keys.append(("decoder_pred.3.weight", "decoder.decoder_pred.head.weight")) rename_keys.append(("decoder_pred.3.bias", "decoder.decoder_pred.head.bias")) # rename blocks for i in range(config.num_hidden_layers): rename_keys.append((f"blocks.{i}.attn.qkv.weight", f"model.encoder.layers.{i}.attention.qkv.weight")) rename_keys.append((f"blocks.{i}.attn.qkv.bias", f"model.encoder.layers.{i}.attention.qkv.bias")) rename_keys.append((f"blocks.{i}.attn.proj.weight", f"model.encoder.layers.{i}.attention.proj.weight")) rename_keys.append((f"blocks.{i}.attn.proj.bias", f"model.encoder.layers.{i}.attention.proj.bias")) rename_keys.append((f"blocks.{i}.attn.rel_pos_h", f"model.encoder.layers.{i}.attention.rel_pos_h")) rename_keys.append((f"blocks.{i}.attn.rel_pos_w", f"model.encoder.layers.{i}.attention.rel_pos_w")) rename_keys.append((f"blocks.{i}.mlp.fc1.weight", f"model.encoder.layers.{i}.mlp.lin1.weight")) rename_keys.append((f"blocks.{i}.mlp.fc1.bias", f"model.encoder.layers.{i}.mlp.lin1.bias")) rename_keys.append((f"blocks.{i}.mlp.fc2.weight", 
f"model.encoder.layers.{i}.mlp.lin2.weight")) rename_keys.append((f"blocks.{i}.mlp.fc2.bias", f"model.encoder.layers.{i}.mlp.lin2.bias")) rename_keys.append((f"blocks.{i}.norm1.weight", f"model.encoder.layers.{i}.layernorm_before.weight")) rename_keys.append((f"blocks.{i}.norm1.bias", f"model.encoder.layers.{i}.layernorm_before.bias")) rename_keys.append((f"blocks.{i}.norm2.weight", f"model.encoder.layers.{i}.layernorm_after.weight")) rename_keys.append((f"blocks.{i}.norm2.bias", f"model.encoder.layers.{i}.layernorm_after.bias")) # fmt: on return rename_keys def rename_key(dct, old, new): val = dct.pop(old) dct[new] = val # We will verify our results on spongebob images def prepare_input(): image_input_url = ( "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_2.jpg" ) image_prompt_url = ( "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1.jpg" ) mask_prompt_url = ( "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples/hmbb_1_target.png" ) image_input = Image.open(requests.get(image_input_url, stream=True).raw) image_prompt = Image.open(requests.get(image_prompt_url, stream=True).raw) mask_prompt = Image.open(requests.get(mask_prompt_url, stream=True).raw) return image_input, image_prompt, mask_prompt @torch.no_grad() def convert_seggpt_checkpoint(args): model_name = args.model_name pytorch_dump_folder_path = args.pytorch_dump_folder_path verify_logits = args.verify_logits push_to_hub = args.push_to_hub # Define default GroundingDINO configuration config = SegGptConfig() # Load original checkpoint checkpoint_url = "https://huggingface.co/BAAI/SegGpt/blob/main/seggpt_vit_large.pth" original_state_dict = torch.hub.load_state_dict_from_url(checkpoint_url, map_location="cpu")["model"] # # Rename keys new_state_dict = original_state_dict.copy() rename_keys = create_rename_keys(config) for src, dest in rename_keys: rename_key(new_state_dict, src, dest) # Load HF model model = SegGptForImageSegmentation(config) model.eval() missing_keys, unexpected_keys = model.load_state_dict(new_state_dict, strict=False) print("Missing keys:", missing_keys) print("Unexpected keys:", unexpected_keys) input_img, prompt_img, prompt_mask = prepare_input() image_processor = SegGptImageProcessor() inputs = image_processor(images=input_img, prompt_images=prompt_img, prompt_masks=prompt_mask, return_tensors="pt") expected_prompt_pixel_values = torch.tensor( [ [[-0.6965, -0.6965, -0.6965], [-0.6965, -0.6965, -0.6965], [-0.6965, -0.6965, -0.6965]], [[1.6583, 1.6583, 1.6583], [1.6583, 1.6583, 1.6583], [1.6583, 1.6583, 1.6583]], [[2.3088, 2.3088, 2.3088], [2.3088, 2.3088, 2.3088], [2.3088, 2.3088, 2.3088]], ] ) expected_pixel_values = torch.tensor( [ [[1.6324, 1.6153, 1.5810], [1.6153, 1.5982, 1.5810], [1.5810, 1.5639, 1.5639]], [[1.2731, 1.2556, 1.2206], [1.2556, 1.2381, 1.2031], [1.2206, 1.2031, 1.1681]], [[1.6465, 1.6465, 1.6465], [1.6465, 1.6465, 1.6465], [1.6291, 1.6291, 1.6291]], ] ) expected_prompt_masks = torch.tensor( [ [[-2.1179, -2.1179, -2.1179], [-2.1179, -2.1179, -2.1179], [-2.1179, -2.1179, -2.1179]], [[-2.0357, -2.0357, -2.0357], [-2.0357, -2.0357, -2.0357], [-2.0357, -2.0357, -2.0357]], [[-1.8044, -1.8044, -1.8044], [-1.8044, -1.8044, -1.8044], [-1.8044, -1.8044, -1.8044]], ] ) assert torch.allclose(inputs.pixel_values[0, :, :3, :3], expected_pixel_values, atol=1e-4) assert torch.allclose(inputs.prompt_pixel_values[0, :, :3, :3], expected_prompt_pixel_values, atol=1e-4) assert 
torch.allclose(inputs.prompt_masks[0, :, :3, :3], expected_prompt_masks, atol=1e-4) torch.manual_seed(2) outputs = model(**inputs) print(outputs) if verify_logits: expected_output = torch.tensor( [ [[-2.1208, -2.1190, -2.1198], [-2.1237, -2.1228, -2.1227], [-2.1232, -2.1226, -2.1228]], [[-2.0405, -2.0396, -2.0403], [-2.0434, -2.0434, -2.0433], [-2.0428, -2.0432, -2.0434]], [[-1.8102, -1.8088, -1.8099], [-1.8131, -1.8126, -1.8129], [-1.8130, -1.8128, -1.8131]], ] ) assert torch.allclose(outputs.pred_masks[0, :, :3, :3], expected_output, atol=1e-4) print("Looks good!") else: print("Converted without verifying logits") if pytorch_dump_folder_path is not None: print(f"Saving model and processor for {model_name} to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) image_processor.save_pretrained(pytorch_dump_folder_path) if push_to_hub: print(f"Pushing model and processor for {model_name} to hub") model.push_to_hub(f"EduardoPacheco/{model_name}") image_processor.push_to_hub(f"EduardoPacheco/{model_name}") if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--model_name", default="seggpt-vit-large", type=str, choices=["seggpt-vit-large"], help="Name of the SegGpt model you'd like to convert.", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model directory." ) parser.add_argument( "--verify_logits", action="store_false", help="Whether or not to verify the logits against the original implementation.", ) parser.add_argument( "--push_to_hub", action="store_true", help="Whether or not to push the converted model to the 🤗 hub." ) args = parser.parse_args() convert_seggpt_checkpoint(args)
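For a quick sanity check outside the conversion flow, the converted weights can be exercised the same way `prepare_input()` and the logit checks above do. This is a sketch, not part of the script: the Hub id `BAAI/seggpt-vit-large` (a local `--pytorch_dump_folder_path` works the same way) and the post-processing call are assumptions about the surrounding API.

```python
import requests
import torch
from PIL import Image

from transformers import SegGptForImageSegmentation, SegGptImageProcessor

checkpoint = "BAAI/seggpt-vit-large"  # assumed Hub id for the converted checkpoint
image_processor = SegGptImageProcessor.from_pretrained(checkpoint)
model = SegGptForImageSegmentation.from_pretrained(checkpoint)

# Same in-context pairing as prepare_input(): segment the input image the way the prompt mask segments the prompt image.
base = "https://raw.githubusercontent.com/baaivision/Painter/main/SegGPT/SegGPT_inference/examples"
image_input = Image.open(requests.get(f"{base}/hmbb_2.jpg", stream=True).raw)
image_prompt = Image.open(requests.get(f"{base}/hmbb_1.jpg", stream=True).raw)
mask_prompt = Image.open(requests.get(f"{base}/hmbb_1_target.png", stream=True).raw)

inputs = image_processor(images=image_input, prompt_images=image_prompt, prompt_masks=mask_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map the predicted masks back to the input resolution; sizes are (height, width).
target_sizes = [image_input.size[::-1]]
mask = image_processor.post_process_semantic_segmentation(outputs, target_sizes=target_sizes)[0]
print(mask.shape)
```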
transformers/src/transformers/models/seggpt/convert_seggpt_to_hf.py/0
{ "file_path": "transformers/src/transformers/models/seggpt/convert_seggpt_to_hf.py", "repo_id": "transformers", "token_count": 4275 }
540
# coding=utf-8 # Copyright 2025 Google Inc. HuggingFace Inc. team. All rights reserved. # # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass from typing import Optional, Union import torch import torch.utils.checkpoint from ...cache_utils import Cache from ...modeling_outputs import ImageClassifierOutputWithNoAttention from ...modeling_utils import PreTrainedModel from ...utils import ( auto_docstring, logging, ) from ..auto import AutoModelForImageTextToText from .configuration_shieldgemma2 import ShieldGemma2Config logger = logging.get_logger(__name__) @dataclass class ShieldGemma2ImageClassifierOutputWithNoAttention(ImageClassifierOutputWithNoAttention): """ShieldGemma2 classifies imags as violative or not relative to a specific policy Args: """ probabilities: Optional[torch.Tensor] = None @auto_docstring class ShieldGemma2ForImageClassification(PreTrainedModel): config: ShieldGemma2Config _checkpoint_conversion_mapping = { "model.language_model.model": "model.model.language_model", "model.vision_tower": "model.model.vision_tower", "model.multi_modal_projector": "model.model.multi_modal_projector", "model.language_model.lm_head": "model.lm_head", } def __init__(self, config: ShieldGemma2Config): super().__init__(config=config) self.yes_token_index = getattr(config, "yes_token_index", 10_784) self.no_token_index = getattr(config, "no_token_index", 3771) self.model = AutoModelForImageTextToText.from_config(config=config) def get_input_embeddings(self): return self.model.language_model.get_input_embeddings() def set_input_embeddings(self, value): self.model.language_model.set_input_embeddings(value) def get_output_embeddings(self): return self.model.language_model.get_output_embeddings() def set_output_embeddings(self, new_embeddings): self.model.language_model.set_output_embeddings(new_embeddings) def set_decoder(self, decoder): self.model.language_model.set_decoder(decoder) def get_decoder(self): return self.model.language_model.get_decoder() def tie_weights(self): return self.model.language_model.tie_weights() @auto_docstring def forward( self, input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[Union[list[torch.FloatTensor], Cache]] = None, token_type_ids: Optional[torch.LongTensor] = None, cache_position: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, logits_to_keep: Union[int, torch.Tensor] = 0, **lm_kwargs, ) -> ShieldGemma2ImageClassifierOutputWithNoAttention: r""" Returns: A `ShieldGemma2ImageClassifierOutputWithNoAttention` instance containing the logits and probabilities associated with the model predicting the `Yes` or `No` token as the response to that prompt, 
captured in the following properties.

                *   `logits` (`torch.Tensor` of shape `(batch_size, 2)`):
                    The first position along dim=1 is the logits for the `Yes` token and the second position along
                    dim=1 is the logits for the `No` token.
                *   `probabilities` (`torch.Tensor` of shape `(batch_size, 2)`):
                    The first position along dim=1 is the probability of predicting the `Yes` token and the second
                    position along dim=1 is the probability of predicting the `No` token.

                ShieldGemma prompts are constructed such that predicting the `Yes` token means the content *does
                violate* the policy as described. If you are only interested in the violative condition, use
                `violated = outputs.probabilities[:, 0]` to extract that slice from the output tensors (index 0 is the
                `Yes` column, matching the `[yes_token_index, no_token_index]` ordering used below).

                When used with the `ShieldGemma2Processor`, the `batch_size` will be equal to
                `len(images) * len(policies)`, and the order within the batch will be img1_policy1, ... img1_policyN,
                ... imgM_policyN.
        """
        outputs = self.model(
            input_ids=input_ids,
            pixel_values=pixel_values,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            token_type_ids=token_type_ids,
            cache_position=cache_position,
            inputs_embeds=inputs_embeds,
            labels=labels,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            logits_to_keep=logits_to_keep,
            **lm_kwargs,
        )
        logits = outputs.logits
        selected_logits = logits[:, -1, [self.yes_token_index, self.no_token_index]]
        probabilities = torch.softmax(selected_logits, dim=-1)

        return ShieldGemma2ImageClassifierOutputWithNoAttention(
            logits=selected_logits,
            probabilities=probabilities,
        )


__all__ = [
    "ShieldGemma2ForImageClassification",
]
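A minimal end-to-end sketch of the classifier above. The checkpoint id `google/shieldgemma-2-4b-it` and the bare `processor(images=...)` call (relying on the processor's built-in policies) are assumptions; the `[Yes, No]` column ordering of `probabilities` comes straight from `forward`.

```python
import requests
import torch
from PIL import Image

from transformers import AutoProcessor, ShieldGemma2ForImageClassification

model_id = "google/shieldgemma-2-4b-it"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = ShieldGemma2ForImageClassification.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The processor expands the batch to len(images) * len(policies) (see the forward docstring above).
inputs = processor(images=[image], return_tensors="pt")

with torch.no_grad():
    output = model(**inputs)

# Column 0 is P(`Yes`) = P(content violates the policy); column 1 is P(`No`).
print(output.probabilities[:, 0])
```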
transformers/src/transformers/models/shieldgemma2/modeling_shieldgemma2.py/0
{ "file_path": "transformers/src/transformers/models/shieldgemma2/modeling_shieldgemma2.py", "repo_id": "transformers", "token_count": 2402 }
541
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Optional import torch import torch.nn as nn import torch.nn.functional as F from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss from transformers.models.siglip.configuration_siglip import SiglipConfig, SiglipTextConfig, SiglipVisionConfig from transformers.models.siglip.modeling_siglip import ( BaseModelOutput, BaseModelOutputWithPooling, ImageClassifierOutput, SiglipForImageClassification, SiglipModel, SiglipMultiheadAttentionPoolingHead, SiglipOutput, SiglipPreTrainedModel, SiglipTextModel, SiglipTextModelOutput, SiglipVisionModel, SiglipVisionModelOutput, SiglipVisionTransformer, ) from ...modeling_attn_mask_utils import _prepare_4d_attention_mask class Siglip2TextConfig(SiglipTextConfig): pass class Siglip2VisionConfig(SiglipVisionConfig): r""" This is the configuration class to store the configuration of a [`Siglip2VisionModel`]. It is used to instantiate a Siglip2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the vision encoder of the Siglip2 [google/siglip2-base-patch16-naflex](https://huggingface.co/google/siglip2-base-patch16-naflex) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. num_channels (`int`, *optional*, defaults to 3): Number of channels in the input images. num_patches (`int`, *optional*, defaults to 256): The number of patches in the image with the size of (`patch_size`, `patch_size`). The image is resized to fill maximum of this number of patches, and to preserve the aspect ratio. In case the resulted number of patches is lower, the image is padded in "patch" dimension. patch_size (`int`, *optional*, defaults to 16): The size (resolution) of each patch. hidden_act (`str` or `function`, *optional*, defaults to `"gelu_pytorch_tanh"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` `"quick_gelu"` are supported. layer_norm_eps (`float`, *optional*, defaults to 1e-06): The epsilon used by the layer normalization layers. attention_dropout (`float`, *optional*, defaults to 0.0): The dropout ratio for the attention probabilities. 
Example: ```python >>> from transformers import Siglip2VisionConfig, Siglip2VisionModel >>> # Initializing a Siglip2VisionConfig with google/siglip2-base-patch16-naflex style configuration >>> configuration = Siglip2VisionConfig() >>> # Initializing a Siglip2VisionModel (with random weights) from the google/siglip2-base-patch16-naflex style configuration >>> model = Siglip2VisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" def __init__( self, hidden_size=768, intermediate_size=3072, num_hidden_layers=12, num_attention_heads=12, num_channels=3, num_patches=256, patch_size=16, hidden_act="gelu_pytorch_tanh", layer_norm_eps=1e-6, attention_dropout=0.0, **kwargs, ): super().__init__(**kwargs) self.num_patches = num_patches del self.image_size class Siglip2Config(SiglipConfig): pass class Siglip2VisionOutput(SiglipVisionModelOutput): pass class Siglip2TextOutput(SiglipTextModelOutput): pass class Siglip2Output(SiglipOutput): pass class Siglip2VisionEmbeddings(nn.Module): def __init__(self, config: Siglip2VisionConfig): super().__init__() self.config = config self.embed_dim = config.hidden_size self.patch_size = config.patch_size self.patch_embedding = nn.Linear( in_features=config.num_channels * self.patch_size * self.patch_size, out_features=self.embed_dim, ) self.num_patches = config.num_patches self.position_embedding_size = int(self.num_patches**0.5) self.position_embedding = nn.Embedding(self.num_patches, self.embed_dim) @staticmethod def resize_positional_embeddings( positional_embeddings: torch.Tensor, spatial_shapes: torch.LongTensor, max_length: int, ) -> torch.Tensor: """ Resize positional embeddings to image-specific size and pad to a fixed size. Args: positional_embeddings (`torch.Tensor`): Position embeddings of shape (height, width, embed_dim) spatial_shapes (`torch.LongTensor`): Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to max_length (`int`): Maximum length of the positional embeddings to pad resized positional embeddings to Returns: `torch.Tensor`: Embeddings of shape (batch_size, max_length, embed_dim) """ batch_size = spatial_shapes.shape[0] embed_dim = positional_embeddings.shape[-1] source_dtype = positional_embeddings.dtype resulted_positional_embeddings = torch.empty( (batch_size, max_length, embed_dim), device=positional_embeddings.device, dtype=source_dtype, ) # (height, width, embed_dim) -> (1, embed_dim, height, width) for interpolation positional_embeddings = positional_embeddings.permute(2, 0, 1).unsqueeze(0) # Upcast to float32 on CPU because antialias is not supported for bfloat16/float16 on CPU if positional_embeddings.device.type == "cpu": positional_embeddings = positional_embeddings.to(torch.float32) for i in range(batch_size): # (1, dim, height, width) -> (1, dim, target_height, target_width) height, width = spatial_shapes[i] resized_embeddings = F.interpolate( positional_embeddings, size=(height, width), mode="bilinear", align_corners=False, antialias=True, ) # (1, dim, target_height, target_width) -> (target_height * target_width, dim) resized_embeddings = resized_embeddings.reshape(embed_dim, height * width).transpose(0, 1) # Cast to original dtype resized_embeddings = resized_embeddings.to(source_dtype) resulted_positional_embeddings[i, : height * width] = resized_embeddings resulted_positional_embeddings[i, height * width :] = resized_embeddings[0] return resulted_positional_embeddings def forward(self, pixel_values: torch.FloatTensor, spatial_shapes: 
torch.LongTensor) -> torch.Tensor: """ Args: pixel_values (`torch.FloatTensor`): Pixel values of shape (batch_size, max_num_patches, num_channels * patch_size * patch_size) spatial_shapes (`list[tuple[int, int]]`): Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to """ # Apply patch embeddings to already patchified pixel values target_dtype = self.patch_embedding.weight.dtype patch_embeds = self.patch_embedding(pixel_values.to(dtype=target_dtype)) # Get positional resized and padded positional embeddings positional_embeddings = self.position_embedding.weight.reshape( self.position_embedding_size, self.position_embedding_size, -1 ) resized_positional_embeddings = self.resize_positional_embeddings( positional_embeddings, spatial_shapes, max_length=pixel_values.shape[1] ) # Add positional embeddings to patch embeddings embeddings = patch_embeds + resized_positional_embeddings return embeddings class Siglip2VisionTransformer(SiglipVisionTransformer): def __init__(self, config: Siglip2VisionConfig): super().__init__() self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2" # Update: add `spatial_shapes` and `attention_mask` def forward( self, pixel_values: torch.FloatTensor, attention_mask: torch.Tensor, spatial_shapes: torch.LongTensor, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, ) -> BaseModelOutputWithPooling: r""" spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`): Tensor containing the spatial dimensions (height, width) of the input images. """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) hidden_states = self.embeddings(pixel_values, spatial_shapes) if attention_mask is not None and not self._use_flash_attention_2: # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len] encoder_attention_mask = _prepare_4d_attention_mask(attention_mask, hidden_states.dtype) else: encoder_attention_mask = attention_mask encoder_outputs: BaseModelOutput = self.encoder( inputs_embeds=hidden_states, attention_mask=encoder_attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) last_hidden_state = encoder_outputs.last_hidden_state last_hidden_state = self.post_layernorm(last_hidden_state) pooler_output = self.head(last_hidden_state, attention_mask) if self.use_head else None return BaseModelOutputWithPooling( last_hidden_state=last_hidden_state, pooler_output=pooler_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class Siglip2PreTrainedModel(SiglipPreTrainedModel): pass class Siglip2TextModel(SiglipTextModel): pass class Siglip2MultiheadAttentionPoolingHead(SiglipMultiheadAttentionPoolingHead): def __init__(self, config: Siglip2VisionConfig): super().__init__(config) self.num_heads = config.num_attention_heads def forward(self, hidden_state: torch.Tensor, attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor: batch_size = hidden_state.shape[0] probe = self.probe.repeat(batch_size, 1, 1) if attention_mask is not None: target_len, source_len = probe.shape[1], hidden_state.shape[1] attention_mask = _prepare_4d_attention_mask(attention_mask, hidden_state.dtype, target_len) attention_mask = attention_mask.repeat(1, self.num_heads, target_len, 1) attention_mask = attention_mask.reshape(-1, target_len, source_len) 
hidden_state = self.attention(probe, hidden_state, hidden_state, attn_mask=attention_mask)[0] residual = hidden_state hidden_state = self.layernorm(hidden_state) hidden_state = residual + self.mlp(hidden_state) return hidden_state[:, 0] class Siglip2VisionModel(SiglipVisionModel): # Update: add `spatial_shapes` and `pixel_attention_mask` def forward( self, pixel_values: torch.FloatTensor, pixel_attention_mask: torch.Tensor, spatial_shapes: torch.LongTensor, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, ) -> BaseModelOutputWithPooling: r""" pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*): Mask to avoid performing attention on padding pixel indices. spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`): Tensor containing the spatial dimensions (height, width) of the input images. Examples: ```python >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, Siglip2VisionModel >>> model = Siglip2VisionModel.from_pretrained("google/siglip2-base-patch16-224") >>> processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output # pooled features ```""" return self.vision_model( pixel_values=pixel_values, attention_mask=pixel_attention_mask, spatial_shapes=spatial_shapes, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) class Siglip2Model(SiglipModel): # Update: add `spatial_shapes` and `pixel_attention_mask` def get_image_features( self, pixel_values: Optional[torch.FloatTensor] = None, pixel_attention_mask: Optional[torch.Tensor] = None, spatial_shapes: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, ) -> torch.FloatTensor: r""" pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*): Mask to avoid performing attention on padding pixel indices. spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`): Tensor containing the spatial dimensions (height, width) of the input images. Returns: image_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The image embeddings obtained by applying the projection layer to the pooled output of [`Siglip2VisionModel`]. Examples: ```python >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, AutoModel >>> import torch >>> model = AutoModel.from_pretrained("google/siglip2-base-patch16-224") >>> processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> with torch.no_grad(): ... image_features = model.get_image_features(**inputs) ``` """ # Use Siglip2Model's config for some fields (if specified) instead of those of vision & text components. 
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) vision_outputs: BaseModelOutputWithPooling = self.vision_model( pixel_values=pixel_values, attention_mask=pixel_attention_mask, spatial_shapes=spatial_shapes, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) pooled_output = vision_outputs.pooler_output return pooled_output # Update: add `spatial_shapes` and `pixel_attention_mask` def forward( self, input_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_attention_mask: Optional[torch.Tensor] = None, spatial_shapes: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, return_loss: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, ) -> Siglip2Output: r""" pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*): Mask to avoid performing attention on padding pixel indices. spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`): Tensor containing the spatial dimensions (height, width) of the input images. return_loss (`bool`, *optional*): Whether or not to return the contrastive loss. Examples: ```python >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, AutoModel >>> import torch >>> model = AutoModel.from_pretrained("google/siglip2-base-patch16-224") >>> processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["a photo of 2 cats", "a photo of 2 dogs"] >>> # important: we pass `padding=max_length` since the model was trained with this >>> inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = torch.sigmoid(logits_per_image) # these are the probabilities >>> print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'") 31.9% that image 0 is 'a photo of 2 cats' ``` """ # Use Siglip2 model's config for some fields (if specified) instead of those of vision & text components. 
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) vision_outputs: BaseModelOutputWithPooling = self.vision_model( pixel_values=pixel_values, attention_mask=pixel_attention_mask, spatial_shapes=spatial_shapes, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) text_outputs: BaseModelOutputWithPooling = self.text_model( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) image_embeds = vision_outputs.pooler_output text_embeds = text_outputs.pooler_output # normalized features image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True) text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True) # cosine similarity as logits logits_per_text = torch.matmul(text_embeds, image_embeds.t().to(text_embeds.device)) logit_scale, logit_bias = self.logit_scale.to(text_embeds.device), self.logit_bias.to(text_embeds.device) logits_per_text = logits_per_text * logit_scale.exp() + logit_bias logits_per_image = logits_per_text.t() loss = None if return_loss: # Adapted from https://github.com/google-research/big_vision/blob/01edb81a4716f93a48be43b3a4af14e29cdb3a7f/big_vision/trainers/proj/image_text/siglip2.py#L287 eye = torch.eye(logits_per_text.size(0), device=logits_per_text.device) m1_diag1 = -torch.ones_like(logits_per_text) + 2 * eye loglik = torch.nn.functional.logsigmoid(m1_diag1 * logits_per_text) nll = -torch.sum(loglik, dim=-1) loss = nll.mean() return Siglip2Output( loss=loss, logits_per_image=logits_per_image, logits_per_text=logits_per_text, text_embeds=text_embeds, image_embeds=image_embeds, text_model_output=text_outputs, vision_model_output=vision_outputs, ) class Siglip2ForImageClassification(SiglipForImageClassification): # Update: add `spatial_shapes` and `pixel_attention_mask` def forward( self, pixel_values: Optional[torch.Tensor] = None, pixel_attention_mask: Optional[torch.Tensor] = None, spatial_shapes: Optional[torch.LongTensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, ) -> ImageClassifierOutput: r""" pixel_attention_mask (`torch.Tensor` of shape `(batch_size, image_size, image_size)`, *optional*): Mask to avoid performing attention on padding pixel indices. spatial_shapes (`torch.LongTensor` of shape `(batch_size, 2)`): Tensor containing the spatial dimensions (height, width) of the input images. labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the image classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). Examples: ```python >>> from transformers import AutoImageProcessor, Siglip2ForImageClassification >>> import torch >>> from PIL import Image >>> import requests >>> torch.manual_seed(3) # doctest: +IGNORE_RESULT >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> # note: we are loading a `Siglip2Model` from the hub here, >>> # so the head will be randomly initialized, hence the predictions will be random if seed is not set above. 
>>> image_processor = AutoImageProcessor.from_pretrained("google/siglip2-base-patch16-224") >>> model = Siglip2ForImageClassification.from_pretrained("google/siglip2-base-patch16-224") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> # model predicts one of the two classes >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) Predicted class: LABEL_1 ``` """ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) outputs: BaseModelOutputWithPooling = self.vision_model( pixel_values, attention_mask=pixel_attention_mask, spatial_shapes=spatial_shapes, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) sequence_output = outputs.last_hidden_state # average pool the patch tokens if pixel_attention_mask is not None: pool_mask = pixel_attention_mask[..., None].to(sequence_output.device) sequence_output = torch.sum(sequence_output * pool_mask, dim=1) / torch.sum(pool_mask, dim=1) else: sequence_output = torch.mean(sequence_output, dim=1) # apply classifier logits = self.classifier(sequence_output) loss = None if labels is not None: # move labels to correct device to enable model parallelism labels = labels.to(logits.device) if self.config.problem_type is None: if self.num_labels == 1: self.config.problem_type = "regression" elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): self.config.problem_type = "single_label_classification" else: self.config.problem_type = "multi_label_classification" if self.config.problem_type == "regression": loss_fct = MSELoss() if self.num_labels == 1: loss = loss_fct(logits.squeeze(), labels.squeeze()) else: loss = loss_fct(logits, labels) elif self.config.problem_type == "single_label_classification": loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) elif self.config.problem_type == "multi_label_classification": loss_fct = BCEWithLogitsLoss() loss = loss_fct(logits, labels) return ImageClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) __all__ = [ "Siglip2Config", "Siglip2TextConfig", "Siglip2VisionConfig", "Siglip2Model", "Siglip2PreTrainedModel", "Siglip2TextModel", "Siglip2VisionModel", "Siglip2ForImageClassification", ]
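What distinguishes this modular file from plain SigLIP is the NaFlex input format: pre-patchified `pixel_values` plus `pixel_attention_mask` and `spatial_shapes`. The sketch below shows those tensors coming out of the processor and going into `Siglip2Model`; the checkpoint id mirrors the one in the config docstring, and the commented shapes are indicative rather than guaranteed.

```python
import requests
import torch
from PIL import Image

from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip2-base-patch16-naflex"  # NaFlex checkpoint referenced in the config docstring
processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of 2 cats", "a photo of 2 dogs"]

# NaFlex inputs: pixel_values is already patchified to (batch, max_num_patches, channels * patch_size**2),
# spatial_shapes holds the per-image (height, width) in patches, and pixel_attention_mask masks padding patches.
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
print(inputs["pixel_values"].shape)
print(inputs["spatial_shapes"])

with torch.no_grad():
    outputs = model(**inputs)

probs = torch.sigmoid(outputs.logits_per_image)  # SigLIP/SigLIP2 score pairs with a sigmoid, not a softmax
print(probs)
```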
transformers/src/transformers/models/siglip2/modular_siglip2.py/0
{ "file_path": "transformers/src/transformers/models/siglip2/modular_siglip2.py", "repo_id": "transformers", "token_count": 11166 }
542
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert Wav2Vec2 checkpoint.""" import argparse import fairseq import torch from torch import nn from transformers import ( MBart50Tokenizer, MBartConfig, MBartForCausalLM, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel, Wav2Vec2Config, Wav2Vec2FeatureExtractor, Wav2Vec2Model, logging, ) logging.set_verbosity_info() logger = logging.get_logger(__name__) MAPPING = { "post_extract_proj": "feature_projection.projection", "encoder.pos_conv.0": "encoder.pos_conv_embed.conv", "self_attn.k_proj": "encoder.layers.*.attention.k_proj", "self_attn.v_proj": "encoder.layers.*.attention.v_proj", "self_attn.q_proj": "encoder.layers.*.attention.q_proj", "self_attn.out_proj": "encoder.layers.*.attention.out_proj", "self_attn_layer_norm": "encoder.layers.*.layer_norm", "fc1": "encoder.layers.*.feed_forward.intermediate_dense", "fc2": "encoder.layers.*.feed_forward.output_dense", "final_layer_norm": "encoder.layers.*.final_layer_norm", "encoder.layer_norm": "encoder.layer_norm", "w2v_model.layer_norm": "feature_projection.layer_norm", "quantizer.weight_proj": "quantizer.weight_proj", "quantizer.vars": "quantizer.codevectors", "project_q": "project_q", "final_proj": "project_hid", "w2v_encoder.proj": "lm_head", "mask_emb": "masked_spec_embed", } TOP_LEVEL_KEYS = [ "lm_head", "quantizer.weight_proj", "quantizer.codevectors", "project_q", "project_hid", ] def set_recursively(hf_pointer, key, value, full_name, weight_type): for attribute in key.split("."): hf_pointer = getattr(hf_pointer, attribute) if weight_type is not None: hf_shape = getattr(hf_pointer, weight_type).shape else: hf_shape = hf_pointer.shape assert hf_shape == value.shape, ( f"Shape of hf {key + '.' + weight_type if weight_type is not None else ''} is {hf_shape}, but should be" f" {value.shape} for {full_name}" ) if weight_type == "weight": hf_pointer.weight.data = value elif weight_type == "weight_g": hf_pointer.weight_g.data = value elif weight_type == "weight_v": hf_pointer.weight_v.data = value elif weight_type == "bias": hf_pointer.bias.data = value else: hf_pointer.data = value logger.info(f"{key + '.' 
+ weight_type if weight_type is not None else ''} was initialized from {full_name}.") def recursively_load_weights_wav2vec2(fairseq_model, hf_model): unused_weights = [] fairseq_dict = fairseq_model.state_dict() feature_extractor = hf_model.feature_extractor adapter = hf_model.adapter for name, value in fairseq_dict.items(): is_used = False if "conv_layers" in name: load_conv_layer( name, value, feature_extractor, unused_weights, hf_model.config.feat_extract_norm == "group", ) is_used = True elif any(x in name for x in ["adaptor", "w2v_encoder.proj.", "w2v_proj_ln."]): load_adapter(name, value, adapter, unused_weights) is_used = True else: for key, mapped_key in MAPPING.items(): if key in name or key.split("w2v_model.")[-1] == name.split(".")[0]: is_used = True if "*" in mapped_key: layer_index = name.split(key)[0].split(".")[-2] mapped_key = mapped_key.replace("*", layer_index) if "weight_g" in name: weight_type = "weight_g" elif "weight_v" in name: weight_type = "weight_v" elif "bias" in name: weight_type = "bias" elif "weight" in name: weight_type = "weight" else: weight_type = None set_recursively(hf_model, mapped_key, value, name, weight_type) continue if not is_used: unused_weights.append(name) logger.warning(f"Unused weights: {unused_weights}") def load_conv_layer(full_name, value, feature_extractor, unused_weights, use_group_norm): name = full_name.split("conv_layers.")[-1] items = name.split(".") layer_id = int(items[0]) type_id = int(items[1]) if type_id == 0: if "bias" in name: assert value.shape == feature_extractor.conv_layers[layer_id].conv.bias.data.shape, ( f"{full_name} has size {value.shape}, but" f" {feature_extractor.conv_layers[layer_id].conv.bias.data.shape} was found." ) feature_extractor.conv_layers[layer_id].conv.bias.data = value logger.info(f"Feat extract conv layer {layer_id} was initialized from {full_name}.") elif "weight" in name: assert value.shape == feature_extractor.conv_layers[layer_id].conv.weight.data.shape, ( f"{full_name} has size {value.shape}, but" f" {feature_extractor.conv_layers[layer_id].conv.weight.data.shape} was found." ) feature_extractor.conv_layers[layer_id].conv.weight.data = value logger.info(f"Feat extract conv layer {layer_id} was initialized from {full_name}.") elif (type_id == 2 and not use_group_norm) or (type_id == 2 and layer_id == 0 and use_group_norm): if "bias" in name: assert value.shape == feature_extractor.conv_layers[layer_id].layer_norm.bias.data.shape, ( f"{full_name} has size {value.shape}, but {feature_extractor[layer_id].layer_norm.bias.data.shape} was" " found." ) feature_extractor.conv_layers[layer_id].layer_norm.bias.data = value logger.info(f"Feat extract layer norm weight of layer {layer_id} was initialized from {full_name}.") elif "weight" in name: assert value.shape == feature_extractor.conv_layers[layer_id].layer_norm.weight.data.shape, ( f"{full_name} has size {value.shape}, but" f" {feature_extractor[layer_id].layer_norm.weight.data.shape} was found." 
) feature_extractor.conv_layers[layer_id].layer_norm.weight.data = value logger.info(f"Feat extract layer norm weight of layer {layer_id} was initialized from {full_name}.") else: unused_weights.append(full_name) def load_adapter(full_name, value, adapter, unused_weights): name = full_name.split("adaptor.")[-1] items = name.split(".") if items[1].isdigit(): layer_id = int(items[1]) else: layer_id = None if "adaptor" not in full_name: if "proj_ln" in full_name: # has to be layer norm if "bias" in name: assert value.shape == adapter.proj_layer_norm.bias.data.shape, ( f"{full_name} has size {value.shape}, but {adapter.proj_layer_norm.bias.data.shape} was found." ) adapter.proj_layer_norm.bias.data = value logger.info(f"Adapter proj layer norm bias was initialized from {full_name}.") if "weight" in name: assert value.shape == adapter.proj_layer_norm.weight.data.shape, ( f"{full_name} has size {value.shape}, but {adapter.proj_layer_norm.weight.data.shape} was found." ) adapter.proj_layer_norm.weight.data = value else: # has to be projection layer if "bias" in name: assert value.shape == adapter.proj.bias.data.shape, ( f"{full_name} has size {value.shape}, but {adapter.proj.bias.data.shape} was found." ) adapter.proj.bias.data = value logger.info(f"Adapter proj layer bias was initialized from {full_name}.") if "weight" in name: assert value.shape == adapter.proj.weight.data.shape, ( f"{full_name} has size {value.shape}, but {adapter.proj.weight.data.shape} was found." ) adapter.proj.weight.data = value logger.info(f"Adapter proj layer weight was initialized from {full_name}.") elif isinstance(layer_id, int): if "bias" in name: assert value.shape == adapter.layers[layer_id].conv.bias.data.shape, ( f"{full_name} has size {value.shape}, but {adapter.layers[layer_id].conv.bias.data.shape} was found." ) adapter.layers[layer_id].conv.bias.data = value logger.info(f"Adapter layer {layer_id} bias was initialized from {full_name}.") elif "weight" in name: assert value.shape == adapter.layers[layer_id].conv.weight.data.shape, ( f"{full_name} has size {value.shape}, but {adapter.layers[layer_id].conv.weight.data.shape} was found." ) adapter.layers[layer_id].conv.weight.data = value logger.info(f"Adapter layer {layer_id} bias was initialized from {full_name}.") else: unused_weights.append(full_name) def make_linear_from_emb(emb): vocab_size, emb_size = emb.weight.shape lin_layer = nn.Linear(vocab_size, emb_size, bias=False) lin_layer.weight.data = emb.weight.data return lin_layer @torch.no_grad() def convert_wav2vec2_checkpoint( checkpoint_path, pytorch_dump_folder_path, dict_path, config_yaml_path, encoder_config_path, decoder_config_path, add_adapter, adapter_kernel_size, adapter_stride, decoder_start_token_id, encoder_output_dim, ): """ Copy/paste/tweak model's weights to transformers design. 
""" # load configs encoder_config = Wav2Vec2Config.from_pretrained( encoder_config_path, add_adapter=True, adapter_stride=adapter_stride, adapter_kernel_size=adapter_kernel_size, token_token=True, output_hidden_size=encoder_output_dim, ) decoder_config = MBartConfig.from_pretrained(decoder_config_path) # load model model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task( [checkpoint_path], arg_overrides={ "config_yaml": config_yaml_path, "data": "/".join(dict_path.split("/")[:-1]), "w2v_path": checkpoint_path, "load_pretrained_decoder_from": None, }, ) model = model[0].eval() # load feature extractor feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(encoder_config_path, token_token=True) # set weights for wav2vec2 encoder hf_encoder = Wav2Vec2Model(encoder_config) recursively_load_weights_wav2vec2(model.encoder, hf_encoder) # load decoder weights hf_decoder = MBartForCausalLM(decoder_config) missing_keys, unexpected_keys = hf_decoder.model.decoder.load_state_dict(model.decoder.state_dict(), strict=False) logger.warning(f"The following keys are missing when loading the decoder weights: {missing_keys}") logger.warning(f"The following keys are unexpected when loading the decoder weights: {unexpected_keys}") hf_wav2vec = SpeechEncoderDecoderModel(encoder=hf_encoder, decoder=hf_decoder) hf_wav2vec.config.tie_word_embeddings = False tokenizer = MBart50Tokenizer(dict_path) tokenizer.save_pretrained(pytorch_dump_folder_path) config = hf_wav2vec.config.to_dict() config["pad_token_id"] = tokenizer.pad_token_id config["bos_token_id"] = tokenizer.bos_token_id config["eos_token_id"] = tokenizer.eos_token_id config["tokenizer_class"] = "mbart50" config["feature_extractor_type"] = "wav2vec2" config["decoder_start_token_id"] = tokenizer.eos_token_id config["forced_bos_token_id"] = 250004 config["forced_eos_token_id"] = tokenizer.eos_token_id hf_wav2vec.config = SpeechEncoderDecoderConfig.from_dict(config) hf_wav2vec.save_pretrained(pytorch_dump_folder_path) feature_extractor.save_pretrained(pytorch_dump_folder_path) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model.") parser.add_argument("--checkpoint_path", default=None, type=str, help="Path to fairseq checkpoint") parser.add_argument("--dict_path", default=None, type=str, help="Path to dict of fine-tuned model") parser.add_argument("--config_yaml_path", default=None, type=str, help="Path to yaml file of fine-tuned model") parser.add_argument( "--encoder_config_path", default="facebook/wav2vec2-xls-r-1b", type=str, help="Path to hf encoder wav2vec2 checkpoint config", ) parser.add_argument( "--decoder_config_path", default="facebook/mbart-large-50-one-to-many-mmt", type=str, help="Path to hf decoder checkpoint config", ) parser.add_argument("--add_adapter", default=True, type=bool, help="whether to add model adapter layers") parser.add_argument("--adapter_stride", default=2, type=int, help="stride of adapter layers") parser.add_argument("--adapter_kernel_size", default=3, type=int, help="kernel size of adapter layers") parser.add_argument("--encoder_output_dim", default=1024, type=int, help="encoder output dim") parser.add_argument("--start_token_id", default=250004, type=int, help="`decoder_start_token_id` of model config") args = parser.parse_args() convert_wav2vec2_checkpoint( args.checkpoint_path, args.pytorch_dump_folder_path, args.dict_path, args.config_yaml_path, 
encoder_config_path=args.encoder_config_path, decoder_config_path=args.decoder_config_path, add_adapter=args.add_adapter, adapter_kernel_size=args.adapter_kernel_size, adapter_stride=args.adapter_stride, decoder_start_token_id=args.start_token_id, encoder_output_dim=args.encoder_output_dim, )
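After the script above has written its `--pytorch_dump_folder_path`, the artifacts load back as a regular `SpeechEncoderDecoderModel` with a Wav2Vec2 feature extractor and an MBart-50 tokenizer. A smoke-test sketch, with the dump path and the silent 16 kHz waveform as placeholders:

```python
import numpy as np
import torch

from transformers import MBart50Tokenizer, SpeechEncoderDecoderModel, Wav2Vec2FeatureExtractor

dump_folder = "./mbart_wav2vec2_converted"  # placeholder: the value passed as --pytorch_dump_folder_path
model = SpeechEncoderDecoderModel.from_pretrained(dump_folder)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(dump_folder)
tokenizer = MBart50Tokenizer.from_pretrained(dump_folder)

# One second of silence at 16 kHz stands in for real speech input.
dummy_audio = np.zeros(16_000, dtype=np.float32)
inputs = feature_extractor(dummy_audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(inputs.input_values, max_length=32)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```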
transformers/src/transformers/models/speech_encoder_decoder/convert_mbart_wav2vec2_seq2seq_original_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/speech_encoder_decoder/convert_mbart_wav2vec2_seq2seq_original_to_pytorch.py", "repo_id": "transformers", "token_count": 6582 }
543
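As a quick sanity check on the record above (illustrative only, not part of the original conversion script): the dump folder written by the script contains the encoder-decoder weights, the feature extractor and the mBART-50 tokenizer, so it can typically be reloaded with the standard `transformers` classes. The directory name and the dummy waveform below are hypothetical placeholders for `--pytorch_dump_folder_path` and real 16 kHz mono audio.

```python
import torch

from transformers import MBart50Tokenizer, SpeechEncoderDecoderModel, Wav2Vec2FeatureExtractor

dump_dir = "./wav2vec2-xls-r-mbart50-converted"  # hypothetical --pytorch_dump_folder_path

model = SpeechEncoderDecoderModel.from_pretrained(dump_dir)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(dump_dir)
tokenizer = MBart50Tokenizer.from_pretrained(dump_dir)

waveform = torch.randn(16000).numpy()  # one second of dummy mono audio at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs.input_values, max_length=50)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```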
# coding=utf-8 # Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Feature extractor class for SpeechT5.""" import warnings from typing import Any, Optional, Union import numpy as np from ...audio_utils import mel_filter_bank, optimal_fft_length, spectrogram, window_function from ...feature_extraction_sequence_utils import SequenceFeatureExtractor from ...feature_extraction_utils import BatchFeature from ...utils import PaddingStrategy, TensorType, logging logger = logging.get_logger(__name__) class SpeechT5FeatureExtractor(SequenceFeatureExtractor): r""" Constructs a SpeechT5 feature extractor. This class can pre-process a raw speech signal by (optionally) normalizing to zero-mean unit-variance, for use by the SpeechT5 speech encoder prenet. This class can also extract log-mel filter bank features from raw speech, for use by the SpeechT5 speech decoder prenet. This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: feature_size (`int`, *optional*, defaults to 1): The feature dimension of the extracted features. sampling_rate (`int`, *optional*, defaults to 16000): The sampling rate at which the audio files should be digitalized expressed in hertz (Hz). padding_value (`float`, *optional*, defaults to 0.0): The value that is used to fill the padding values. do_normalize (`bool`, *optional*, defaults to `False`): Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models. num_mel_bins (`int`, *optional*, defaults to 80): The number of mel-frequency bins in the extracted spectrogram features. hop_length (`int`, *optional*, defaults to 16): Number of ms between windows. Otherwise referred to as "shift" in many papers. win_length (`int`, *optional*, defaults to 64): Number of ms per window. win_function (`str`, *optional*, defaults to `"hann_window"`): Name for the window function used for windowing, must be accessible via `torch.{win_function}` frame_signal_scale (`float`, *optional*, defaults to 1.0): Constant multiplied in creating the frames before applying DFT. This argument is deprecated. fmin (`float`, *optional*, defaults to 80): Minimum mel frequency in Hz. fmax (`float`, *optional*, defaults to 7600): Maximum mel frequency in Hz. mel_floor (`float`, *optional*, defaults to 1e-10): Minimum value of mel frequency banks. reduction_factor (`int`, *optional*, defaults to 2): Spectrogram length reduction factor. This argument is deprecated. return_attention_mask (`bool`, *optional*, defaults to `True`): Whether or not [`~SpeechT5FeatureExtractor.__call__`] should return `attention_mask`. 
""" model_input_names = ["input_values", "attention_mask"] def __init__( self, feature_size: int = 1, sampling_rate: int = 16000, padding_value: float = 0.0, do_normalize: bool = False, num_mel_bins: int = 80, hop_length: int = 16, win_length: int = 64, win_function: str = "hann_window", frame_signal_scale: float = 1.0, fmin: float = 80, fmax: float = 7600, mel_floor: float = 1e-10, reduction_factor: int = 2, return_attention_mask: bool = True, **kwargs, ): super().__init__(feature_size=feature_size, sampling_rate=sampling_rate, padding_value=padding_value, **kwargs) self.do_normalize = do_normalize self.return_attention_mask = return_attention_mask self.num_mel_bins = num_mel_bins self.hop_length = hop_length self.win_length = win_length self.win_function = win_function self.frame_signal_scale = frame_signal_scale self.fmin = fmin self.fmax = fmax self.mel_floor = mel_floor self.reduction_factor = reduction_factor self.sample_size = win_length * sampling_rate // 1000 self.sample_stride = hop_length * sampling_rate // 1000 self.n_fft = optimal_fft_length(self.sample_size) self.n_freqs = (self.n_fft // 2) + 1 self.window = window_function(window_length=self.sample_size, name=self.win_function, periodic=True) self.mel_filters = mel_filter_bank( num_frequency_bins=self.n_freqs, num_mel_filters=self.num_mel_bins, min_frequency=self.fmin, max_frequency=self.fmax, sampling_rate=self.sampling_rate, norm="slaney", mel_scale="slaney", ) if frame_signal_scale != 1.0: warnings.warn( "The argument `frame_signal_scale` is deprecated and will be removed in version 4.30.0 of Transformers", FutureWarning, ) if reduction_factor != 2.0: warnings.warn( "The argument `reduction_factor` is deprecated and will be removed in version 4.30.0 of Transformers", FutureWarning, ) @staticmethod # Copied from transformers.models.wav2vec2.feature_extraction_wav2vec2.Wav2Vec2FeatureExtractor.zero_mean_unit_var_norm def zero_mean_unit_var_norm( input_values: list[np.ndarray], attention_mask: list[np.ndarray], padding_value: float = 0.0 ) -> list[np.ndarray]: """ Every array in the list is normalized to have zero mean and unit variance """ if attention_mask is not None: attention_mask = np.array(attention_mask, np.int32) normed_input_values = [] for vector, length in zip(input_values, attention_mask.sum(-1)): normed_slice = (vector - vector[:length].mean()) / np.sqrt(vector[:length].var() + 1e-7) if length < normed_slice.shape[0]: normed_slice[length:] = padding_value normed_input_values.append(normed_slice) else: normed_input_values = [(x - x.mean()) / np.sqrt(x.var() + 1e-7) for x in input_values] return normed_input_values def _extract_mel_features( self, one_waveform: np.ndarray, ) -> np.ndarray: """ Extracts log-mel filterbank features for one waveform array (unbatched). 
""" log_mel_spec = spectrogram( one_waveform, window=self.window, frame_length=self.sample_size, hop_length=self.sample_stride, fft_length=self.n_fft, mel_filters=self.mel_filters, mel_floor=self.mel_floor, log_mel="log10", ) return log_mel_spec.T def __call__( self, audio: Optional[Union[np.ndarray, list[float], list[np.ndarray], list[list[float]]]] = None, audio_target: Optional[Union[np.ndarray, list[float], list[np.ndarray], list[list[float]]]] = None, padding: Union[bool, str, PaddingStrategy] = False, max_length: Optional[int] = None, truncation: bool = False, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Optional[Union[str, TensorType]] = None, sampling_rate: Optional[int] = None, **kwargs, ) -> BatchFeature: """ Main method to featurize and prepare for the model one or several sequence(s). Pass in a value for `audio` to extract waveform features. Pass in a value for `audio_target` to extract log-mel spectrogram features. Args: audio (`np.ndarray`, `list[float]`, `list[np.ndarray]`, `list[list[float]]`, *optional*): The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. This outputs waveform features. Must be mono channel audio, not stereo, i.e. single float per timestep. audio_target (`np.ndarray`, `list[float]`, `list[np.ndarray]`, `list[list[float]]`, *optional*): The sequence or batch of sequences to be processed as targets. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. This outputs log-mel spectrogram features. padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). max_length (`int`, *optional*): Maximum length of the returned list and optionally padding length (see above). truncation (`bool`): Activates truncation to cut input sequences longer than *max_length* to *max_length*. pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. return_attention_mask (`bool`, *optional*): Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor's default. [What are attention masks?](../glossary#attention-mask) return_tensors (`str` or [`~utils.TensorType`], *optional*): If set, will return tensors instead of list of python integers. Acceptable values are: - `'tf'`: Return TensorFlow `tf.constant` objects. - `'pt'`: Return PyTorch `torch.Tensor` objects. - `'np'`: Return Numpy `np.ndarray` objects. sampling_rate (`int`, *optional*): The sampling rate at which the `audio` or `audio_target` input was sampled. 
It is strongly recommended to pass `sampling_rate` at the forward call to prevent silent errors. """ if audio is None and audio_target is None: raise ValueError("You must provide either `audio` or `audio_target` values.") if sampling_rate is not None: if sampling_rate != self.sampling_rate: raise ValueError( f"The model corresponding to this feature extractor: {self} was trained using a sampling rate of" f" {self.sampling_rate}. Please make sure that the provided audio input was sampled with" f" {self.sampling_rate} and not {sampling_rate}." ) else: logger.warning( f"It is strongly recommended to pass the `sampling_rate` argument to `{self.__class__.__name__}()`. " "Failing to do so can result in silent errors that might be hard to debug." ) if audio is not None: inputs = self._process_audio( audio, False, padding, max_length, truncation, pad_to_multiple_of, return_attention_mask, return_tensors, **kwargs, ) else: inputs = None if audio_target is not None: inputs_target = self._process_audio( audio_target, True, padding, max_length, truncation, pad_to_multiple_of, return_attention_mask, return_tensors, **kwargs, ) if inputs is None: return inputs_target else: inputs["labels"] = inputs_target["input_values"] decoder_attention_mask = inputs_target.get("attention_mask") if decoder_attention_mask is not None: inputs["decoder_attention_mask"] = decoder_attention_mask return inputs def _process_audio( self, speech: Union[np.ndarray, list[float], list[np.ndarray], list[list[float]]], is_target: bool = False, padding: Union[bool, str, PaddingStrategy] = False, max_length: Optional[int] = None, truncation: bool = False, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Optional[Union[str, TensorType]] = None, **kwargs, ) -> BatchFeature: is_batched_numpy = isinstance(speech, np.ndarray) and len(speech.shape) > 1 if is_batched_numpy and len(speech.shape) > 2: raise ValueError(f"Only mono-channel audio is supported for input to {self}") is_batched = is_batched_numpy or ( isinstance(speech, (list, tuple)) and (isinstance(speech[0], (np.ndarray, tuple, list))) ) if is_batched: speech = [np.asarray(speech, dtype=np.float32) for speech in speech] elif not is_batched and not isinstance(speech, np.ndarray): speech = np.asarray(speech, dtype=np.float32) elif isinstance(speech, np.ndarray) and speech.dtype is np.dtype(np.float64): speech = speech.astype(np.float32) # always return batch if not is_batched: speech = [speech] # needed to make pad() work on spectrogram inputs feature_size_hack = self.feature_size # convert into correct format for padding if is_target: features = [self._extract_mel_features(waveform) for waveform in speech] encoded_inputs = BatchFeature({"input_values": features}) self.feature_size = self.num_mel_bins else: encoded_inputs = BatchFeature({"input_values": speech}) padded_inputs = self.pad( encoded_inputs, padding=padding, max_length=max_length, truncation=truncation, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, **kwargs, ) self.feature_size = feature_size_hack # convert input values to correct format input_values = padded_inputs["input_values"] if not isinstance(input_values[0], np.ndarray): padded_inputs["input_values"] = [np.asarray(array, dtype=np.float32) for array in input_values] elif ( not isinstance(input_values, np.ndarray) and isinstance(input_values[0], np.ndarray) and input_values[0].dtype is np.dtype(np.float64) ): padded_inputs["input_values"] = [array.astype(np.float32) for 
array in input_values] elif isinstance(input_values, np.ndarray) and input_values.dtype is np.dtype(np.float64): padded_inputs["input_values"] = input_values.astype(np.float32) # convert attention_mask to correct format attention_mask = padded_inputs.get("attention_mask") if attention_mask is not None: padded_inputs["attention_mask"] = [np.asarray(array, dtype=np.int32) for array in attention_mask] # zero-mean and unit-variance normalization if not is_target and self.do_normalize: attention_mask = ( attention_mask if self._get_padding_strategies(padding, max_length=max_length) is not PaddingStrategy.DO_NOT_PAD else None ) padded_inputs["input_values"] = self.zero_mean_unit_var_norm( padded_inputs["input_values"], attention_mask=attention_mask, padding_value=self.padding_value ) if return_tensors is not None: padded_inputs = padded_inputs.convert_to_tensors(return_tensors) return padded_inputs def to_dict(self) -> dict[str, Any]: output = super().to_dict() # Don't serialize these as they are derived from the other properties. names = ["window", "mel_filters", "sample_size", "sample_stride", "n_fft", "n_freqs"] for name in names: if name in output: del output[name] return output __all__ = ["SpeechT5FeatureExtractor"]
transformers/src/transformers/models/speecht5/feature_extraction_speecht5.py/0
{ "file_path": "transformers/src/transformers/models/speecht5/feature_extraction_speecht5.py", "repo_id": "transformers", "token_count": 7626 }
544
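A short, hedged usage sketch for the feature extractor above: passing `audio` produces waveform features for the speech encoder prenet, while passing `audio_target` produces log-mel spectrogram features for the speech decoder prenet. The sine wave below is a stand-in for real 16 kHz speech.

```python
import numpy as np

from transformers import SpeechT5FeatureExtractor

feature_extractor = SpeechT5FeatureExtractor()  # defaults: 16 kHz sampling rate, 80 mel bins

speech = np.sin(np.linspace(0, 20, 16000)).astype(np.float32)  # 1 s of dummy mono audio

# waveform features (input to the speech encoder prenet)
inputs = feature_extractor(audio=speech, sampling_rate=16000, return_tensors="pt")
print(inputs.input_values.shape)  # (1, 16000)

# log-mel spectrogram features (targets for the speech decoder prenet)
targets = feature_extractor(audio_target=speech, sampling_rate=16000, return_tensors="pt")
print(targets.input_values.shape)  # (1, num_frames, 80)
```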
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch SuperPoint model.""" from dataclasses import dataclass from typing import Optional, Union import torch from torch import nn from transformers import PreTrainedModel from transformers.modeling_outputs import ( BaseModelOutputWithNoAttention, ) from transformers.models.superpoint.configuration_superpoint import SuperPointConfig from ...utils import ( ModelOutput, auto_docstring, logging, ) logger = logging.get_logger(__name__) def remove_keypoints_from_borders( keypoints: torch.Tensor, scores: torch.Tensor, border: int, height: int, width: int ) -> tuple[torch.Tensor, torch.Tensor]: """Removes keypoints (and their associated scores) that are too close to the border""" mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (height - border)) mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < (width - border)) mask = mask_h & mask_w return keypoints[mask], scores[mask] def top_k_keypoints(keypoints: torch.Tensor, scores: torch.Tensor, k: int) -> tuple[torch.Tensor, torch.Tensor]: """Keeps the k keypoints with highest score""" if k >= len(keypoints): return keypoints, scores scores, indices = torch.topk(scores, k, dim=0) return keypoints[indices], scores def simple_nms(scores: torch.Tensor, nms_radius: int) -> torch.Tensor: """Applies non-maximum suppression on scores""" if nms_radius < 0: raise ValueError("Expected positive values for nms_radius") def max_pool(x): return nn.functional.max_pool2d(x, kernel_size=nms_radius * 2 + 1, stride=1, padding=nms_radius) zeros = torch.zeros_like(scores) max_mask = scores == max_pool(scores) for _ in range(2): supp_mask = max_pool(max_mask.float()) > 0 supp_scores = torch.where(supp_mask, zeros, scores) new_max_mask = supp_scores == max_pool(supp_scores) max_mask = max_mask | (new_max_mask & (~supp_mask)) return torch.where(max_mask, scores, zeros) @dataclass @auto_docstring( custom_intro=""" Base class for outputs of image point description models. Due to the nature of keypoint detection, the number of keypoints is not fixed and can vary from image to image, which makes batching non-trivial. In the batch of images, the maximum number of keypoints is set as the dimension of the keypoints, scores and descriptors tensors. The mask tensor is used to indicate which values in the keypoints, scores and descriptors tensors are keypoint information and which are padding. """ ) class SuperPointKeypointDescriptionOutput(ModelOutput): r""" loss (`torch.FloatTensor` of shape `(1,)`, *optional*): Loss computed during training. keypoints (`torch.FloatTensor` of shape `(batch_size, num_keypoints, 2)`): Relative (x, y) coordinates of predicted keypoints in a given image. scores (`torch.FloatTensor` of shape `(batch_size, num_keypoints)`): Scores of predicted keypoints. descriptors (`torch.FloatTensor` of shape `(batch_size, num_keypoints, descriptor_size)`): Descriptors of predicted keypoints. 
mask (`torch.BoolTensor` of shape `(batch_size, num_keypoints)`): Mask indicating which values in keypoints, scores and descriptors are keypoint information. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states (also called feature maps) of the model at the output of each stage. """ loss: Optional[torch.FloatTensor] = None keypoints: Optional[torch.IntTensor] = None scores: Optional[torch.FloatTensor] = None descriptors: Optional[torch.FloatTensor] = None mask: Optional[torch.BoolTensor] = None hidden_states: Optional[tuple[torch.FloatTensor]] = None class SuperPointConvBlock(nn.Module): def __init__( self, config: SuperPointConfig, in_channels: int, out_channels: int, add_pooling: bool = False ) -> None: super().__init__() self.conv_a = nn.Conv2d( in_channels, out_channels, kernel_size=3, stride=1, padding=1, ) self.conv_b = nn.Conv2d( out_channels, out_channels, kernel_size=3, stride=1, padding=1, ) self.relu = nn.ReLU(inplace=True) self.pool = nn.MaxPool2d(kernel_size=2, stride=2) if add_pooling else None def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states = self.relu(self.conv_a(hidden_states)) hidden_states = self.relu(self.conv_b(hidden_states)) if self.pool is not None: hidden_states = self.pool(hidden_states) return hidden_states class SuperPointEncoder(nn.Module): """ SuperPoint encoder module. It is made of 4 convolutional layers with ReLU activation and max pooling, reducing the dimensionality of the image. """ def __init__(self, config: SuperPointConfig) -> None: super().__init__() # SuperPoint uses 1 channel images self.input_dim = 1 conv_blocks = [] conv_blocks.append( SuperPointConvBlock(config, self.input_dim, config.encoder_hidden_sizes[0], add_pooling=True) ) for i in range(1, len(config.encoder_hidden_sizes) - 1): conv_blocks.append( SuperPointConvBlock( config, config.encoder_hidden_sizes[i - 1], config.encoder_hidden_sizes[i], add_pooling=True ) ) conv_blocks.append( SuperPointConvBlock( config, config.encoder_hidden_sizes[-2], config.encoder_hidden_sizes[-1], add_pooling=False ) ) self.conv_blocks = nn.ModuleList(conv_blocks) def forward( self, input, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, ) -> Union[tuple, BaseModelOutputWithNoAttention]: all_hidden_states = () if output_hidden_states else None for conv_block in self.conv_blocks: input = conv_block(input) if output_hidden_states: all_hidden_states = all_hidden_states + (input,) output = input if not return_dict: return tuple(v for v in [output, all_hidden_states] if v is not None) return BaseModelOutputWithNoAttention( last_hidden_state=output, hidden_states=all_hidden_states, ) class SuperPointInterestPointDecoder(nn.Module): """ The SuperPointInterestPointDecoder uses the output of the SuperPointEncoder to compute the keypoint with scores. The scores are first computed by a convolutional layer, then a softmax is applied to get a probability distribution over the 65 possible keypoint classes. The keypoints are then extracted from the scores by thresholding and non-maximum suppression. Post-processing is then applied to remove keypoints too close to the image borders as well as to keep only the k keypoints with highest score. 
""" def __init__(self, config: SuperPointConfig) -> None: super().__init__() self.keypoint_threshold = config.keypoint_threshold self.max_keypoints = config.max_keypoints self.nms_radius = config.nms_radius self.border_removal_distance = config.border_removal_distance self.relu = nn.ReLU(inplace=True) self.pool = nn.MaxPool2d(kernel_size=2, stride=2) self.conv_score_a = nn.Conv2d( config.encoder_hidden_sizes[-1], config.decoder_hidden_size, kernel_size=3, stride=1, padding=1, ) self.conv_score_b = nn.Conv2d( config.decoder_hidden_size, config.keypoint_decoder_dim, kernel_size=1, stride=1, padding=0 ) def forward(self, encoded: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: scores = self._get_pixel_scores(encoded) keypoints, scores = self._extract_keypoints(scores) return keypoints, scores def _get_pixel_scores(self, encoded: torch.Tensor) -> torch.Tensor: """Based on the encoder output, compute the scores for each pixel of the image""" scores = self.relu(self.conv_score_a(encoded)) scores = self.conv_score_b(scores) scores = nn.functional.softmax(scores, 1)[:, :-1] batch_size, _, height, width = scores.shape scores = scores.permute(0, 2, 3, 1).reshape(batch_size, height, width, 8, 8) scores = scores.permute(0, 1, 3, 2, 4).reshape(batch_size, height * 8, width * 8) scores = simple_nms(scores, self.nms_radius) return scores def _extract_keypoints(self, scores: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: """ Based on their scores, extract the pixels that represent the keypoints that will be used for descriptors computation. The keypoints are in the form of relative (x, y) coordinates. """ _, height, width = scores.shape # Threshold keypoints by score value keypoints = torch.nonzero(scores[0] > self.keypoint_threshold) scores = scores[0][tuple(keypoints.t())] # Discard keypoints near the image borders keypoints, scores = remove_keypoints_from_borders( keypoints, scores, self.border_removal_distance, height * 8, width * 8 ) # Keep the k keypoints with highest score if self.max_keypoints >= 0: keypoints, scores = top_k_keypoints(keypoints, scores, self.max_keypoints) # Convert (y, x) to (x, y) keypoints = torch.flip(keypoints, [1]).to(scores.dtype) return keypoints, scores class SuperPointDescriptorDecoder(nn.Module): """ The SuperPointDescriptorDecoder uses the outputs of both the SuperPointEncoder and the SuperPointInterestPointDecoder to compute the descriptors at the keypoints locations. The descriptors are first computed by a convolutional layer, then normalized to have a norm of 1. The descriptors are then interpolated at the keypoints locations. 
""" def __init__(self, config: SuperPointConfig) -> None: super().__init__() self.relu = nn.ReLU(inplace=True) self.pool = nn.MaxPool2d(kernel_size=2, stride=2) self.conv_descriptor_a = nn.Conv2d( config.encoder_hidden_sizes[-1], config.decoder_hidden_size, kernel_size=3, stride=1, padding=1, ) self.conv_descriptor_b = nn.Conv2d( config.decoder_hidden_size, config.descriptor_decoder_dim, kernel_size=1, stride=1, padding=0, ) def forward(self, encoded: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor: """Based on the encoder output and the keypoints, compute the descriptors for each keypoint""" descriptors = self.conv_descriptor_b(self.relu(self.conv_descriptor_a(encoded))) descriptors = nn.functional.normalize(descriptors, p=2, dim=1) descriptors = self._sample_descriptors(keypoints[None], descriptors[0][None], 8)[0] # [descriptor_dim, num_keypoints] -> [num_keypoints, descriptor_dim] descriptors = torch.transpose(descriptors, 0, 1) return descriptors @staticmethod def _sample_descriptors(keypoints, descriptors, scale: int = 8) -> torch.Tensor: """Interpolate descriptors at keypoint locations""" batch_size, num_channels, height, width = descriptors.shape keypoints = keypoints - scale / 2 + 0.5 divisor = torch.tensor([[(width * scale - scale / 2 - 0.5), (height * scale - scale / 2 - 0.5)]]) divisor = divisor.to(keypoints) keypoints /= divisor keypoints = keypoints * 2 - 1 # normalize to (-1, 1) kwargs = {"align_corners": True} # [batch_size, num_channels, num_keypoints, 2] -> [batch_size, num_channels, num_keypoints, 2] keypoints = keypoints.view(batch_size, 1, -1, 2) descriptors = nn.functional.grid_sample(descriptors, keypoints, mode="bilinear", **kwargs) # [batch_size, descriptor_decoder_dim, num_channels, num_keypoints] -> [batch_size, descriptor_decoder_dim, num_keypoints] descriptors = descriptors.reshape(batch_size, num_channels, -1) descriptors = nn.functional.normalize(descriptors, p=2, dim=1) return descriptors @auto_docstring class SuperPointPreTrainedModel(PreTrainedModel): config: SuperPointConfig base_model_prefix = "superpoint" main_input_name = "pixel_values" supports_gradient_checkpointing = False def _init_weights(self, module: Union[nn.Linear, nn.Conv2d, nn.LayerNorm]) -> None: """Initialize the weights""" if isinstance(module, (nn.Linear, nn.Conv2d)): # Slightly different from the TF version which uses truncated_normal for initialization # cf https://github.com/pytorch/pytorch/pull/5617 module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) def extract_one_channel_pixel_values(self, pixel_values: torch.FloatTensor) -> torch.FloatTensor: """ Assuming pixel_values has shape (batch_size, 3, height, width), and that all channels values are the same, extract the first channel value to get a tensor of shape (batch_size, 1, height, width) for SuperPoint. This is a workaround for the issue discussed in : https://github.com/huggingface/transformers/pull/25786#issuecomment-1730176446 Args: pixel_values: torch.FloatTensor of shape (batch_size, 3, height, width) Returns: pixel_values: torch.FloatTensor of shape (batch_size, 1, height, width) """ return pixel_values[:, 0, :, :][:, None, :, :] @auto_docstring( custom_intro=""" SuperPoint model outputting keypoints and descriptors. """ ) class SuperPointForKeypointDetection(SuperPointPreTrainedModel): """ SuperPoint model. 
It consists of a SuperPointEncoder, a SuperPointInterestPointDecoder and a SuperPointDescriptorDecoder. SuperPoint was proposed in `SuperPoint: Self-Supervised Interest Point Detection and Description <https://huggingface.co/papers/1712.07629>`__ by Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. It is a fully convolutional neural network that extracts keypoints and descriptors from an image. It is trained in a self-supervised manner, using a combination of a photometric loss and a loss based on the homographic adaptation of keypoints. It is made of a convolutional encoder and two decoders: one for keypoints and one for descriptors. """ def __init__(self, config: SuperPointConfig) -> None: super().__init__(config) self.config = config self.encoder = SuperPointEncoder(config) self.keypoint_decoder = SuperPointInterestPointDecoder(config) self.descriptor_decoder = SuperPointDescriptorDecoder(config) self.post_init() @auto_docstring def forward( self, pixel_values: torch.FloatTensor, labels: Optional[torch.LongTensor] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[tuple, SuperPointKeypointDescriptionOutput]: r""" Examples: ```python >>> from transformers import AutoImageProcessor, SuperPointForKeypointDetection >>> import torch >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint") >>> model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint") >>> inputs = processor(image, return_tensors="pt") >>> outputs = model(**inputs) ```""" loss = None if labels is not None: raise ValueError("SuperPoint does not support training for now.") output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict pixel_values = self.extract_one_channel_pixel_values(pixel_values) batch_size, _, height, width = pixel_values.shape encoder_outputs = self.encoder( pixel_values, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = encoder_outputs[0] list_keypoints_scores = [ self.keypoint_decoder(last_hidden_state[None, ...]) for last_hidden_state in last_hidden_state ] list_keypoints = [keypoints_scores[0] for keypoints_scores in list_keypoints_scores] list_scores = [keypoints_scores[1] for keypoints_scores in list_keypoints_scores] list_descriptors = [ self.descriptor_decoder(last_hidden_state[None, ...], keypoints[None, ...]) for last_hidden_state, keypoints in zip(last_hidden_state, list_keypoints) ] maximum_num_keypoints = max(keypoints.shape[0] for keypoints in list_keypoints) keypoints = torch.zeros((batch_size, maximum_num_keypoints, 2), device=pixel_values.device) scores = torch.zeros((batch_size, maximum_num_keypoints), device=pixel_values.device) descriptors = torch.zeros( (batch_size, maximum_num_keypoints, self.config.descriptor_decoder_dim), device=pixel_values.device, ) mask = torch.zeros((batch_size, maximum_num_keypoints), device=pixel_values.device, dtype=torch.int) for i, (_keypoints, _scores, _descriptors) in enumerate(zip(list_keypoints, list_scores, list_descriptors)): keypoints[i, : _keypoints.shape[0]] = _keypoints scores[i, : _scores.shape[0]] = _scores descriptors[i, : _descriptors.shape[0]] = _descriptors mask[i, : 
_scores.shape[0]] = 1 # Convert to relative coordinates keypoints = keypoints / torch.tensor([width, height], device=keypoints.device) hidden_states = encoder_outputs[1] if output_hidden_states else None if not return_dict: return tuple(v for v in [loss, keypoints, scores, descriptors, mask, hidden_states] if v is not None) return SuperPointKeypointDescriptionOutput( loss=loss, keypoints=keypoints, scores=scores, descriptors=descriptors, mask=mask, hidden_states=hidden_states, ) __all__ = ["SuperPointForKeypointDetection", "SuperPointPreTrainedModel"]
transformers/src/transformers/models/superpoint/modeling_superpoint.py/0
{ "file_path": "transformers/src/transformers/models/superpoint/modeling_superpoint.py", "repo_id": "transformers", "token_count": 7984 }
545
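The forward docstring in the record above already shows basic inference; the sketch below is an assumed usage pattern (not from the original file) that additionally uses the `mask` output to drop the padding introduced when keypoints are batched to the maximum count.

```python
import requests
import torch
from PIL import Image

from transformers import AutoImageProcessor, SuperPointForKeypointDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("magic-leap-community/superpoint")
model = SuperPointForKeypointDetection.from_pretrained("magic-leap-community/superpoint")

with torch.no_grad():
    outputs = model(**processor(image, return_tensors="pt"))

valid = outputs.mask[0].bool()           # 1 for real keypoints, 0 for padding
keypoints = outputs.keypoints[0][valid]  # relative (x, y) coordinates
scores = outputs.scores[0][valid]
descriptors = outputs.descriptors[0][valid]
print(keypoints.shape, scores.shape, descriptors.shape)
```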
# coding=utf-8 # Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Fast Image processor class for Swin2SR.""" from typing import Optional, Union from ...image_processing_utils import BatchFeature, ChannelDimension, get_image_size from ...image_processing_utils_fast import ( BaseImageProcessorFast, DefaultFastImageProcessorKwargs, group_images_by_shape, reorder_images, ) from ...image_utils import ImageInput from ...processing_utils import Unpack from ...utils import ( TensorType, auto_docstring, is_torch_available, is_torchvision_available, is_torchvision_v2_available, ) if is_torch_available(): import torch if is_torchvision_available(): if is_torchvision_v2_available(): from torchvision.transforms.v2 import functional as F else: from torchvision.transforms import functional as F class Swin2SRFastImageProcessorKwargs(DefaultFastImageProcessorKwargs): """ do_pad (`bool`, *optional*, defaults to `True`): Whether to pad the image to make the height and width divisible by `window_size`. pad_size (`int`, *optional*, defaults to `8`): The size of the sliding window for the local attention. """ do_pad: Optional[bool] pad_size: Optional[int] @auto_docstring class Swin2SRImageProcessorFast(BaseImageProcessorFast): do_rescale = True rescale_factor = 1 / 255 do_pad = True pad_size = 8 valid_kwargs = Swin2SRFastImageProcessorKwargs def __init__(self, **kwargs: Unpack[Swin2SRFastImageProcessorKwargs]): super().__init__(**kwargs) def preprocess(self, images: ImageInput, **kwargs: Unpack[Swin2SRFastImageProcessorKwargs]) -> BatchFeature: return super().preprocess(images, **kwargs) def pad(self, images: "torch.Tensor", size: int) -> "torch.Tensor": """ Pad an image to make the height and width divisible by `size`. Args: images (`torch.Tensor`): Images to pad. size (`int`): The size to make the height and width divisible by. Returns: `torch.Tensor`: The padded images. 
""" height, width = get_image_size(images, ChannelDimension.FIRST) pad_height = (height // size + 1) * size - height pad_width = (width // size + 1) * size - width return F.pad( images, (0, 0, pad_width, pad_height), padding_mode="symmetric", ) def _preprocess( self, images: list["torch.Tensor"], do_rescale: bool, rescale_factor: float, do_pad: bool, pad_size: int, disable_grouping: Optional[bool], return_tensors: Optional[Union[str, TensorType]], **kwargs, ) -> BatchFeature: grouped_images, grouped_images_index = group_images_by_shape(images, disable_grouping=disable_grouping) processed_image_grouped = {} for shape, stacked_images in grouped_images.items(): if do_rescale: stacked_images = self.rescale(stacked_images, scale=rescale_factor) if do_pad: stacked_images = self.pad(stacked_images, size=pad_size) processed_image_grouped[shape] = stacked_images processed_images = reorder_images(processed_image_grouped, grouped_images_index) processed_images = torch.stack(processed_images, dim=0) if return_tensors else processed_images return BatchFeature(data={"pixel_values": processed_images}, tensor_type=return_tensors) __all__ = ["Swin2SRImageProcessorFast"]
transformers/src/transformers/models/swin2sr/image_processing_swin2sr_fast.py/0
{ "file_path": "transformers/src/transformers/models/swin2sr/image_processing_swin2sr_fast.py", "repo_id": "transformers", "token_count": 1613 }
546
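An illustrative check of the pad-to-multiple logic in the record above, assuming torch and torchvision are installed: with the default `pad_size=8`, both sides are padded up to the next multiple of 8, so a 125x193 image comes out as 128x200. The random tensor stands in for a real RGB image.

```python
import torch

from transformers import Swin2SRImageProcessorFast

processor = Swin2SRImageProcessorFast()  # defaults: do_rescale=True, do_pad=True, pad_size=8

image = torch.randint(0, 256, (3, 125, 193), dtype=torch.uint8)  # dummy RGB image (C, H, W)
out = processor(images=image, return_tensors="pt")

# pad_height = (125 // 8 + 1) * 8 - 125 = 3, pad_width = (193 // 8 + 1) * 8 - 193 = 7
print(out.pixel_values.shape)  # torch.Size([1, 3, 128, 200])
```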
#!/usr/bin/env bash
# Use this script as follows ./download_from_gcp.sh /path/to/folder/to/store/downloads

folder_to_store_downloads=${1}

# Replace by gcp_path to T5 cloud bucket folder here
# To download the official `t5-small` model of https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints:
gcp_path="gs://t5-data/pretrained_models/small"

# Number of files the checkpoint is split into
num_of_checks=16

# Create dir if not exist
mkdir -p ${folder_to_store_downloads}

# Copy all meta information files
gsutil cp "${gcp_path}/operative_config.gin" ${folder_to_store_downloads}
gsutil cp "${gcp_path}/checkpoint" ${folder_to_store_downloads}
gsutil cp "${gcp_path}/model.ckpt-1000000.index" ${folder_to_store_downloads}
gsutil cp "${gcp_path}/model.ckpt-1000000.meta" ${folder_to_store_downloads}

# Copy all model weights
# single digit num checkpoints
for ((i = 0 ; i < 10 ; i++)); do
  gsutil cp "${gcp_path}/model.ckpt-1000000.data-0000${i}-of-000${num_of_checks}" ${folder_to_store_downloads}
done

# double digit num checkpoints
for ((i = 10 ; i < ${num_of_checks} ; i++)); do
  gsutil cp "${gcp_path}/model.ckpt-1000000.data-000${i}-of-000${num_of_checks}" ${folder_to_store_downloads}
done

# Having run this script, you should create a suitable config.json, *e.g.* by
# looking at `https://huggingface.co/t5-small`.
# Then you can run `python convert_t5_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path "${folder_to_store_downloads}" --config_file "config.json" --pytorch_dump_path "/path/to/store/pytorch/weights"`
transformers/src/transformers/models/t5/download_from_gcp.sh/0
{ "file_path": "transformers/src/transformers/models/t5/download_from_gcp.sh", "repo_id": "transformers", "token_count": 579 }
547
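A hedged follow-up to the closing comment of the script above: once the TF checkpoint has been converted and a suitable `config.json` sits next to the weights, the dump folder loads like any local T5 checkpoint. The path below is the placeholder used in the script, and the tokenizer is fetched separately because the conversion does not produce one.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("/path/to/store/pytorch/weights")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```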
# coding=utf-8 # Copyright 2020 Google Research and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ TAPAS configuration. Based on the BERT configuration with added parameters. Hyperparameters are taken from run_task_main.py and hparam_utils.py of the original implementation. URLS: - https://github.com/google-research/tapas/blob/master/tapas/run_task_main.py - https://github.com/google-research/tapas/blob/master/tapas/utils/hparam_utils.py """ from ...configuration_utils import PretrainedConfig class TapasConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`TapasModel`]. It is used to instantiate a TAPAS model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the TAPAS [google/tapas-base-finetuned-sqa](https://huggingface.co/google/tapas-base-finetuned-sqa) architecture. Configuration objects inherit from [`PreTrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Hyperparameters additional to BERT are taken from run_task_main.py and hparam_utils.py of the original implementation. Original implementation available at https://github.com/google-research/tapas/tree/master. Args: vocab_size (`int`, *optional*, defaults to 30522): Vocabulary size of the TAPAS model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`TapasModel`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"swish"` and `"gelu_new"` are supported. hidden_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (`int`, *optional*, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_sizes (`list[int]`, *optional*, defaults to `[3, 256, 256, 2, 256, 256, 10]`): The vocabulary sizes of the `token_type_ids` passed when calling [`TapasModel`]. 
initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. positive_label_weight (`float`, *optional*, defaults to 10.0): Weight for positive labels. num_aggregation_labels (`int`, *optional*, defaults to 0): The number of aggregation operators to predict. aggregation_loss_weight (`float`, *optional*, defaults to 1.0): Importance weight for the aggregation loss. use_answer_as_supervision (`bool`, *optional*): Whether to use the answer as the only supervision for aggregation examples. answer_loss_importance (`float`, *optional*, defaults to 1.0): Importance weight for the regression loss. use_normalized_answer_loss (`bool`, *optional*, defaults to `False`): Whether to normalize the answer loss by the maximum of the predicted and expected value. huber_loss_delta (`float`, *optional*): Delta parameter used to calculate the regression loss. temperature (`float`, *optional*, defaults to 1.0): Value used to control (OR change) the skewness of cell logits probabilities. aggregation_temperature (`float`, *optional*, defaults to 1.0): Scales aggregation logits to control the skewness of probabilities. use_gumbel_for_cells (`bool`, *optional*, defaults to `False`): Whether to apply Gumbel-Softmax to cell selection. use_gumbel_for_aggregation (`bool`, *optional*, defaults to `False`): Whether to apply Gumbel-Softmax to aggregation selection. average_approximation_function (`string`, *optional*, defaults to `"ratio"`): Method to calculate the expected average of cells in the weak supervision case. One of `"ratio"`, `"first_order"` or `"second_order"`. cell_selection_preference (`float`, *optional*): Preference for cell selection in ambiguous cases. Only applicable in case of weak supervision for aggregation (WTQ, WikiSQL). If the total mass of the aggregation probabilities (excluding the "NONE" operator) is higher than this hyperparameter, then aggregation is predicted for an example. answer_loss_cutoff (`float`, *optional*): Ignore examples with answer loss larger than cutoff. max_num_rows (`int`, *optional*, defaults to 64): Maximum number of rows. max_num_columns (`int`, *optional*, defaults to 32): Maximum number of columns. average_logits_per_cell (`bool`, *optional*, defaults to `False`): Whether to average logits per cell. select_one_column (`bool`, *optional*, defaults to `True`): Whether to constrain the model to only select cells from a single column. allow_empty_column_selection (`bool`, *optional*, defaults to `False`): Whether to allow not to select any column. init_cell_selection_weights_to_zero (`bool`, *optional*, defaults to `False`): Whether to initialize cell selection weights to 0 so that the initial probabilities are 50%. reset_position_index_per_cell (`bool`, *optional*, defaults to `True`): Whether to restart position indexes at every cell (i.e. use relative position embeddings). disable_per_token_loss (`bool`, *optional*, defaults to `False`): Whether to disable any (strong or weak) supervision on cells. aggregation_labels (`dict[int, label]`, *optional*): The aggregation labels used to aggregate the results. 
For example, the WTQ models have the following aggregation labels: `{0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"}` no_aggregation_label_index (`int`, *optional*): If the aggregation labels are defined and one of these labels represents "No aggregation", this should be set to its index. For example, the WTQ models have the "NONE" aggregation label at index 0, so that value should be set to 0 for these models. Example: ```python >>> from transformers import TapasModel, TapasConfig >>> # Initializing a default (SQA) Tapas configuration >>> configuration = TapasConfig() >>> # Initializing a model from the configuration >>> model = TapasModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ```""" model_type = "tapas" def __init__( self, vocab_size=30522, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=1024, type_vocab_sizes=[3, 256, 256, 2, 256, 256, 10], initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=0, positive_label_weight=10.0, num_aggregation_labels=0, aggregation_loss_weight=1.0, use_answer_as_supervision=None, answer_loss_importance=1.0, use_normalized_answer_loss=False, huber_loss_delta=None, temperature=1.0, aggregation_temperature=1.0, use_gumbel_for_cells=False, use_gumbel_for_aggregation=False, average_approximation_function="ratio", cell_selection_preference=None, answer_loss_cutoff=None, max_num_rows=64, max_num_columns=32, average_logits_per_cell=False, select_one_column=True, allow_empty_column_selection=False, init_cell_selection_weights_to_zero=False, reset_position_index_per_cell=True, disable_per_token_loss=False, aggregation_labels=None, no_aggregation_label_index=None, **kwargs, ): super().__init__(pad_token_id=pad_token_id, **kwargs) # BERT hyperparameters (with updated max_position_embeddings and type_vocab_sizes) self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.hidden_act = hidden_act self.intermediate_size = intermediate_size self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.type_vocab_sizes = type_vocab_sizes self.initializer_range = initializer_range self.layer_norm_eps = layer_norm_eps # Fine-tuning task hyperparameters self.positive_label_weight = positive_label_weight self.num_aggregation_labels = num_aggregation_labels self.aggregation_loss_weight = aggregation_loss_weight self.use_answer_as_supervision = use_answer_as_supervision self.answer_loss_importance = answer_loss_importance self.use_normalized_answer_loss = use_normalized_answer_loss self.huber_loss_delta = huber_loss_delta self.temperature = temperature self.aggregation_temperature = aggregation_temperature self.use_gumbel_for_cells = use_gumbel_for_cells self.use_gumbel_for_aggregation = use_gumbel_for_aggregation self.average_approximation_function = average_approximation_function self.cell_selection_preference = cell_selection_preference self.answer_loss_cutoff = answer_loss_cutoff self.max_num_rows = max_num_rows self.max_num_columns = max_num_columns self.average_logits_per_cell = average_logits_per_cell self.select_one_column = select_one_column self.allow_empty_column_selection = allow_empty_column_selection self.init_cell_selection_weights_to_zero = 
init_cell_selection_weights_to_zero self.reset_position_index_per_cell = reset_position_index_per_cell self.disable_per_token_loss = disable_per_token_loss # Aggregation hyperparameters self.aggregation_labels = aggregation_labels self.no_aggregation_label_index = no_aggregation_label_index if isinstance(self.aggregation_labels, dict): self.aggregation_labels = {int(k): v for k, v in aggregation_labels.items()} __all__ = ["TapasConfig"]
transformers/src/transformers/models/tapas/configuration_tapas.py/0
{ "file_path": "transformers/src/transformers/models/tapas/configuration_tapas.py", "repo_id": "transformers", "token_count": 4588 }
548
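Building on the docstring in the record above, here is a hedged sketch of a weak-supervision-for-aggregation setup in the style of WTQ. The aggregation labels and the "NONE" index come from the docstring itself; the remaining values are illustrative choices, not the published hyperparameters.

```python
from transformers import TapasConfig, TapasForQuestionAnswering

config = TapasConfig(
    num_aggregation_labels=4,
    aggregation_labels={0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"},
    no_aggregation_label_index=0,
    use_answer_as_supervision=True,
    cell_selection_preference=0.5,  # illustrative value only
)
model = TapasForQuestionAnswering(config)  # randomly initialized, for demonstration
print(model.config.num_aggregation_labels)
```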
import argparse import os import re import shutil import numpy as np import timesfm import torch from transformers import TimesFmConfig, TimesFmModelForPrediction """ Sample usage: ``` python src/transformers/models/timesfm/convert_timesfm_orignal_to_hf.py \ --output_dir /output/path ``` """ def get_nested_attr(obj, key): """Recursively retrieves an attribute from an object, handling list/tuple indexing if present.""" parts = key.split(".") for part in parts: match = re.match(r"(.*)\[(\d+)\]", part) # Handle list indexing like `layers[0]` if match: attr_name, index = match.groups() obj = getattr(obj, attr_name)[int(index)] # Access list/tuple element else: obj = getattr(obj, part) # Regular attribute access return obj def write_model(model_path, safe_serialization=True, huggingface_repo_id="google/timesfm-2.0-500m-pytorch"): os.makedirs(model_path, exist_ok=True) tmp_model_path = os.path.join(model_path, "tmp") os.makedirs(tmp_model_path, exist_ok=True) tfm = timesfm.TimesFm( hparams=timesfm.TimesFmHparams( backend="cuda" if torch.cuda.is_available() else "cpu", per_core_batch_size=32, horizon_len=128, input_patch_len=32, output_patch_len=128, num_layers=50, model_dims=1280, use_positional_embedding=False, context_len=2048, ), checkpoint=timesfm.TimesFmCheckpoint(huggingface_repo_id=huggingface_repo_id), ) timesfm_config = TimesFmConfig( patch_length=tfm.hparams.input_patch_len, context_length=tfm.hparams.context_len, horizon_length=tfm.hparams.horizon_len, num_hidden_layers=tfm.hparams.num_layers, hidden_size=tfm.hparams.model_dims, intermediate_size=tfm.hparams.model_dims, head_dim=tfm.hparams.model_dims // tfm.hparams.num_heads, num_attention_heads=tfm.hparams.num_heads, use_positional_embedding=tfm.hparams.use_positional_embedding, ) timesfm_config.save_pretrained(tmp_model_path) timesfm_model = TimesFmModelForPrediction(timesfm_config) # copy the weights from the original model to the new model making original_model = tfm._model # mapping of the layers from the original model to the transformer model MODEL_LAYER_MAPPING = { "input_ff_layer.hidden_layer[0].weight": "decoder.input_ff_layer.input_layer.weight", "input_ff_layer.hidden_layer[0].bias": "decoder.input_ff_layer.input_layer.bias", "input_ff_layer.output_layer.weight": "decoder.input_ff_layer.output_layer.weight", "input_ff_layer.output_layer.bias": "decoder.input_ff_layer.output_layer.bias", "input_ff_layer.residual_layer.weight": "decoder.input_ff_layer.residual_layer.weight", "input_ff_layer.residual_layer.bias": "decoder.input_ff_layer.residual_layer.bias", "freq_emb.weight": "decoder.freq_emb.weight", "horizon_ff_layer.hidden_layer[0].weight": "horizon_ff_layer.input_layer.weight", "horizon_ff_layer.hidden_layer[0].bias": "horizon_ff_layer.input_layer.bias", "horizon_ff_layer.output_layer.weight": "horizon_ff_layer.output_layer.weight", "horizon_ff_layer.output_layer.bias": "horizon_ff_layer.output_layer.bias", "horizon_ff_layer.residual_layer.weight": "horizon_ff_layer.residual_layer.weight", "horizon_ff_layer.residual_layer.bias": "horizon_ff_layer.residual_layer.bias", } TRANSFORMER_LAYER_MAPPING = { "stacked_transformer.layers[{i}].self_attn.qkv_proj.weight": "decoder.layers[{i}].self_attn.qkv_proj.weight", "stacked_transformer.layers[{i}].self_attn.qkv_proj.bias": "decoder.layers[{i}].self_attn.qkv_proj.bias", "stacked_transformer.layers[{i}].self_attn.o_proj.weight": "decoder.layers[{i}].self_attn.o_proj.weight", "stacked_transformer.layers[{i}].self_attn.o_proj.bias": "decoder.layers[{i}].self_attn.o_proj.bias", 
"stacked_transformer.layers[{i}].self_attn.scaling": "decoder.layers[{i}].self_attn.scaling", "stacked_transformer.layers[{i}].mlp.gate_proj.weight": "decoder.layers[{i}].mlp.gate_proj.weight", "stacked_transformer.layers[{i}].mlp.gate_proj.bias": "decoder.layers[{i}].mlp.gate_proj.bias", "stacked_transformer.layers[{i}].mlp.down_proj.weight": "decoder.layers[{i}].mlp.down_proj.weight", "stacked_transformer.layers[{i}].mlp.down_proj.bias": "decoder.layers[{i}].mlp.down_proj.bias", "stacked_transformer.layers[{i}].mlp.layer_norm.weight": "decoder.layers[{i}].mlp.layer_norm.weight", "stacked_transformer.layers[{i}].mlp.layer_norm.bias": "decoder.layers[{i}].mlp.layer_norm.bias", "stacked_transformer.layers[{i}].input_layernorm.weight": "decoder.layers[{i}].input_layernorm.weight", } for old_key, new_key in MODEL_LAYER_MAPPING.items(): try: old_attr = get_nested_attr(original_model, old_key) # Get tensor from original model new_attr = get_nested_attr(timesfm_model, new_key) # Get corresponding attribute in new model new_attr.data.copy_(old_attr.data) # Copy data except AttributeError: print(f"Skipping {old_key} (not found in original model).") num_layers = len(timesfm_model.decoder.layers) for i in range(num_layers): for old_template, new_template in TRANSFORMER_LAYER_MAPPING.items(): old_key = old_template.format(i=i) new_key = new_template.format(i=i) try: # Get tensor from original model old_attr = get_nested_attr(original_model, old_key) if "qkv_proj" in old_key: # Split the tensor into q, k, v projections q_proj, k_proj, v_proj = ( old_attr[: tfm.hparams.model_dims, ...], old_attr[tfm.hparams.model_dims : tfm.hparams.model_dims * 2, ...], old_attr[tfm.hparams.model_dims * 2 :, ...], ) # Get corresponding attribute in new model q_key = new_key.replace("qkv_proj", "q_proj") q_attr = get_nested_attr(timesfm_model, q_key) q_attr.data.copy_(q_proj.data) # Copy data k_key = new_key.replace("qkv_proj", "k_proj") k_attr = get_nested_attr(timesfm_model, k_key) k_attr.data.copy_(k_proj.data) # Copy data v_key = new_key.replace("qkv_proj", "v_proj") v_attr = get_nested_attr(timesfm_model, v_key) v_attr.data.copy_(v_proj.data) # Copy data else: # Get corresponding attribute in new model new_attr = get_nested_attr(timesfm_model, new_key) new_attr.data.copy_(old_attr.data) # Copy data except AttributeError: print(f"Skipping {old_key} (not found in original model).") timesfm_model.save_pretrained(model_path, safe_serialization=safe_serialization) shutil.rmtree(tmp_model_path) def check_outputs(model_path, huggingface_repo_id): """Compares outputs between original and converted models.""" print("\nChecking model outputs...") # Load original model tfm = timesfm.TimesFm( hparams=timesfm.TimesFmHparams( backend="cuda" if torch.cuda.is_available() else "cpu", per_core_batch_size=32, horizon_len=128, input_patch_len=32, output_patch_len=128, num_layers=50, context_len=2048, model_dims=1280, use_positional_embedding=False, point_forecast_mode="mean", ), checkpoint=timesfm.TimesFmCheckpoint(huggingface_repo_id=huggingface_repo_id), ) # Load converted model converted_model = TimesFmModelForPrediction.from_pretrained( model_path, dtype=torch.bfloat16, attn_implementation="sdpa", ).to("cuda" if torch.cuda.is_available() else "cpu") converted_model.eval() # Set to evaluation mode # Create test inputs forecast_input = [ np.sin(np.linspace(0, 20, 100)), np.sin(np.linspace(0, 20, 200)), np.sin(np.linspace(0, 20, 400)), ] frequency_input = [0, 1, 2] # Get predictions from original model point_forecast_orig, 
quantile_forecast_orig = tfm.forecast( forecast_input, freq=frequency_input, ) # Convert inputs to sequence of tensors forecast_input_tensor = [ torch.tensor(ts, dtype=torch.bfloat16).to("cuda" if torch.cuda.is_available() else "cpu") for ts in forecast_input ] frequency_input_tensor = torch.tensor(frequency_input, dtype=torch.long).to( "cuda" if torch.cuda.is_available() else "cpu" ) # Get predictions from converted model with torch.no_grad(): outputs = converted_model(past_values=forecast_input_tensor, freq=frequency_input_tensor, return_dict=True) point_forecast_conv = outputs.mean_predictions.float().cpu().numpy() quantile_forecast_conv = outputs.full_predictions.float().cpu().numpy() # Compare outputs point_forecast_diff = np.abs(point_forecast_orig - point_forecast_conv) quantile_forecast_diff = np.abs(quantile_forecast_orig - quantile_forecast_conv) max_point_diff = point_forecast_diff.max() mean_point_diff = point_forecast_diff.mean() max_quantile_diff = quantile_forecast_diff.max() mean_quantile_diff = quantile_forecast_diff.mean() print("\nOutput comparison:") print(f"Point forecast - Max difference: {max_point_diff:.6f}") print(f"Point forecast - Mean difference: {mean_point_diff:.6f}") print(f"Quantile forecast - Max difference: {max_quantile_diff:.6f}") print(f"Quantile forecast - Mean difference: {mean_quantile_diff:.6f}") # Define acceptable thresholds POINT_THRESHOLD = 1e-5 QUANTILE_THRESHOLD = 1e-5 if max_point_diff > POINT_THRESHOLD or max_quantile_diff > QUANTILE_THRESHOLD: raise ValueError( f"Output mismatch detected!\n" f"Point forecast max diff: {max_point_diff} (threshold: {POINT_THRESHOLD})\n" f"Quantile forecast max diff: {max_quantile_diff} (threshold: {QUANTILE_THRESHOLD})" ) print("\n✓ All outputs match within acceptable tolerance!") # Optional: Print shapes for verification print("\nOutput shapes:") print(f"Original point forecast: {point_forecast_orig.shape}") print(f"Converted point forecast: {point_forecast_conv.shape}") print(f"Original quantile forecast: {quantile_forecast_orig.shape}") print(f"Converted quantile forecast: {quantile_forecast_conv.shape}") def main(): parser = argparse.ArgumentParser() parser.add_argument( "--output_dir", required=True, help="Location to write HF model and tokenizer", ) parser.add_argument( "--safe_serialization", type=bool, default=True, help="Whether or not to save using `safetensors`." ) parser.add_argument( "--huggingface_repo_id", type=str, default="google/timesfm-2.0-500m-pytorch", help="The Hugging Face repository ID to use for the model.", ) args = parser.parse_args() # if the saved model file exists, skip the conversion if os.path.exists( os.path.join(args.output_dir, "model.safetensors" if args.safe_serialization else "pytorch_model.bin") ): print(f"Model already exists in {args.output_dir}, skipping conversion.") else: write_model( model_path=args.output_dir, safe_serialization=args.safe_serialization, huggingface_repo_id=args.huggingface_repo_id, ) check_outputs(args.output_dir, args.huggingface_repo_id) if __name__ == "__main__": main()
transformers/src/transformers/models/timesfm/convert_timesfm_orignal_to_hf.py/0
{ "file_path": "transformers/src/transformers/models/timesfm/convert_timesfm_orignal_to_hf.py", "repo_id": "transformers", "token_count": 5512 }
549
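For reference, the layer loop in the conversion script above slices each fused `qkv_proj` tensor into equal thirds along the output dimension before copying into separate q/k/v projections. A minimal standalone sketch of that slicing, assuming a fused weight of shape (3 * model_dims, model_dims) with random, purely illustrative values:

import torch

model_dims = 1280  # matches the hparams used in check_outputs above
qkv_weight = torch.randn(3 * model_dims, model_dims)

# Same split as in the conversion: first third -> q, middle third -> k, last third -> v
q_proj = qkv_weight[:model_dims, ...]
k_proj = qkv_weight[model_dims : model_dims * 2, ...]
v_proj = qkv_weight[model_dims * 2 :, ...]

assert q_proj.shape == k_proj.shape == v_proj.shape == (model_dims, model_dims)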
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert TrOCR checkpoints from the unilm repository.""" import argparse from pathlib import Path import requests import torch from PIL import Image from transformers import ( RobertaTokenizer, TrOCRConfig, TrOCRForCausalLM, TrOCRProcessor, VisionEncoderDecoderModel, ViTConfig, ViTImageProcessor, ViTModel, ) from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(__name__) # here we list all keys to be renamed (original name on the left, our name on the right) def create_rename_keys(encoder_config, decoder_config): rename_keys = [] for i in range(encoder_config.num_hidden_layers): # encoder layers: output projection, 2 feedforward neural networks and 2 layernorms rename_keys.append( (f"encoder.deit.blocks.{i}.norm1.weight", f"encoder.encoder.layer.{i}.layernorm_before.weight") ) rename_keys.append((f"encoder.deit.blocks.{i}.norm1.bias", f"encoder.encoder.layer.{i}.layernorm_before.bias")) rename_keys.append( (f"encoder.deit.blocks.{i}.attn.proj.weight", f"encoder.encoder.layer.{i}.attention.output.dense.weight") ) rename_keys.append( (f"encoder.deit.blocks.{i}.attn.proj.bias", f"encoder.encoder.layer.{i}.attention.output.dense.bias") ) rename_keys.append( (f"encoder.deit.blocks.{i}.norm2.weight", f"encoder.encoder.layer.{i}.layernorm_after.weight") ) rename_keys.append((f"encoder.deit.blocks.{i}.norm2.bias", f"encoder.encoder.layer.{i}.layernorm_after.bias")) rename_keys.append( (f"encoder.deit.blocks.{i}.mlp.fc1.weight", f"encoder.encoder.layer.{i}.intermediate.dense.weight") ) rename_keys.append( (f"encoder.deit.blocks.{i}.mlp.fc1.bias", f"encoder.encoder.layer.{i}.intermediate.dense.bias") ) rename_keys.append( (f"encoder.deit.blocks.{i}.mlp.fc2.weight", f"encoder.encoder.layer.{i}.output.dense.weight") ) rename_keys.append((f"encoder.deit.blocks.{i}.mlp.fc2.bias", f"encoder.encoder.layer.{i}.output.dense.bias")) # cls token, position embeddings and patch embeddings of encoder rename_keys.extend( [ ("encoder.deit.cls_token", "encoder.embeddings.cls_token"), ("encoder.deit.pos_embed", "encoder.embeddings.position_embeddings"), ("encoder.deit.patch_embed.proj.weight", "encoder.embeddings.patch_embeddings.projection.weight"), ("encoder.deit.patch_embed.proj.bias", "encoder.embeddings.patch_embeddings.projection.bias"), ("encoder.deit.norm.weight", "encoder.layernorm.weight"), ("encoder.deit.norm.bias", "encoder.layernorm.bias"), ] ) return rename_keys # we split up the matrix of each encoder layer into queries, keys and values def read_in_q_k_v(state_dict, encoder_config): for i in range(encoder_config.num_hidden_layers): # queries, keys and values (only weights, no biases) in_proj_weight = state_dict.pop(f"encoder.deit.blocks.{i}.attn.qkv.weight") state_dict[f"encoder.encoder.layer.{i}.attention.attention.query.weight"] = in_proj_weight[ : encoder_config.hidden_size, : ] state_dict[f"encoder.encoder.layer.{i}.attention.attention.key.weight"] = 
in_proj_weight[ encoder_config.hidden_size : encoder_config.hidden_size * 2, : ] state_dict[f"encoder.encoder.layer.{i}.attention.attention.value.weight"] = in_proj_weight[ -encoder_config.hidden_size :, : ] def rename_key(dct, old, new): val = dct.pop(old) dct[new] = val # We will verify our results on an image of the IAM Handwriting Database def prepare_img(checkpoint_url): if "handwritten" in checkpoint_url: url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg" # industry # url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02-12.jpg" # have # url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02-10.jpg" # let # url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" # # url = "https://fki.tic.heia-fr.ch/static/img/a01-122.jpg" elif "printed" in checkpoint_url or "stage1" in checkpoint_url: url = "https://www.researchgate.net/profile/Dinh-Sang/publication/338099565/figure/fig8/AS:840413229350922@1577381536857/An-receipt-example-in-the-SROIE-2019-dataset_Q640.jpg" im = Image.open(requests.get(url, stream=True).raw).convert("RGB") return im @torch.no_grad() def convert_tr_ocr_checkpoint(checkpoint_url, pytorch_dump_folder_path): """ Copy/paste/tweak model's weights to our VisionEncoderDecoderModel structure. """ # define encoder and decoder configs based on checkpoint_url encoder_config = ViTConfig(image_size=384, qkv_bias=False) decoder_config = TrOCRConfig() # size of the architecture if "base" in checkpoint_url: decoder_config.encoder_hidden_size = 768 elif "large" in checkpoint_url: # use ViT-large encoder encoder_config.hidden_size = 1024 encoder_config.intermediate_size = 4096 encoder_config.num_hidden_layers = 24 encoder_config.num_attention_heads = 16 decoder_config.encoder_hidden_size = 1024 else: raise ValueError("Should either find 'base' or 'large' in checkpoint URL") # the large-printed + stage1 checkpoints uses sinusoidal position embeddings, no layernorm afterwards if "large-printed" in checkpoint_url or "stage1" in checkpoint_url: decoder_config.tie_word_embeddings = False decoder_config.activation_function = "relu" decoder_config.max_position_embeddings = 1024 decoder_config.scale_embedding = True decoder_config.use_learned_position_embeddings = False decoder_config.layernorm_embedding = False # load HuggingFace model encoder = ViTModel(encoder_config, add_pooling_layer=False) decoder = TrOCRForCausalLM(decoder_config) model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder) model.eval() # load state_dict of original model, rename some keys state_dict = torch.hub.load_state_dict_from_url(checkpoint_url, map_location="cpu", check_hash=True)["model"] rename_keys = create_rename_keys(encoder_config, decoder_config) for src, dest in rename_keys: rename_key(state_dict, src, dest) read_in_q_k_v(state_dict, encoder_config) # remove parameters we don't need del state_dict["encoder.deit.head.weight"] del state_dict["encoder.deit.head.bias"] del state_dict["decoder.version"] # add prefix to decoder keys for key, val in state_dict.copy().items(): val = state_dict.pop(key) if key.startswith("decoder") and "output_projection" not in key: state_dict["decoder.model." 
+ key] = val else: state_dict[key] = val # load state dict model.load_state_dict(state_dict) # Check outputs on an image image_processor = ViTImageProcessor(size=encoder_config.image_size) tokenizer = RobertaTokenizer.from_pretrained("FacebookAI/roberta-large") processor = TrOCRProcessor(image_processor, tokenizer) pixel_values = processor(images=prepare_img(checkpoint_url), return_tensors="pt").pixel_values # verify logits decoder_input_ids = torch.tensor([[model.config.decoder.decoder_start_token_id]]) outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids) logits = outputs.logits expected_shape = torch.Size([1, 1, 50265]) if "trocr-base-handwritten" in checkpoint_url: expected_slice = torch.tensor( [-1.4502, -4.6683, -0.5347, -2.9291, 9.1435, -3.0571, 8.9764, 1.7560, 8.7358, -1.5311] ) elif "trocr-large-handwritten" in checkpoint_url: expected_slice = torch.tensor( [-2.6437, -1.3129, -2.2596, -5.3455, 6.3539, 1.7604, 5.4991, 1.4702, 5.6113, 2.0170] ) elif "trocr-base-printed" in checkpoint_url: expected_slice = torch.tensor( [-5.6816, -5.8388, 1.1398, -6.9034, 6.8505, -2.4393, 1.2284, -1.0232, -1.9661, -3.9210] ) elif "trocr-large-printed" in checkpoint_url: expected_slice = torch.tensor( [-6.0162, -7.0959, 4.4155, -5.1063, 7.0468, -3.1631, 2.6466, -0.3081, -0.8106, -1.7535] ) if "stage1" not in checkpoint_url: assert logits.shape == expected_shape, "Shape of logits not as expected" assert torch.allclose(logits[0, 0, :10], expected_slice, atol=1e-3), "First elements of logits not as expected" Path(pytorch_dump_folder_path).mkdir(exist_ok=True) print(f"Saving model to {pytorch_dump_folder_path}") model.save_pretrained(pytorch_dump_folder_path) print(f"Saving processor to {pytorch_dump_folder_path}") processor.save_pretrained(pytorch_dump_folder_path) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--checkpoint_url", default="https://layoutlm.blob.core.windows.net/trocr/model_zoo/fairseq/trocr-base-handwritten.pt", type=str, help="URL to the original PyTorch checkpoint (.pth file).", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, type=str, help="Path to the folder to output PyTorch model." ) args = parser.parse_args() convert_tr_ocr_checkpoint(args.checkpoint_url, args.pytorch_dump_folder_path)
transformers/src/transformers/models/trocr/convert_trocr_unilm_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/trocr/convert_trocr_unilm_to_pytorch.py", "repo_id": "transformers", "token_count": 4298 }
550
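The decoder re-keying near the end of the TrOCR script above prefixes every non-`output_projection` decoder key with `decoder.model.` so it lands inside the wrapped TrOCR language model. A small self-contained sketch of that step on a toy state dict (the keys here are illustrative, not the real checkpoint layout):

import torch

state_dict = {
    "decoder.layers.0.fc1.weight": torch.randn(2, 2),
    "decoder.output_projection.weight": torch.randn(2, 2),
}

for key, val in state_dict.copy().items():
    val = state_dict.pop(key)
    if key.startswith("decoder") and "output_projection" not in key:
        state_dict["decoder.model." + key] = val  # nest under the language model
    else:
        state_dict[key] = val  # output projection keeps its original name

assert "decoder.model.decoder.layers.0.fc1.weight" in state_dict
assert "decoder.output_projection.weight" in state_dict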
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """UnivNetModel model configuration""" from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) class UnivNetConfig(PretrainedConfig): r""" This is the configuration class to store the configuration of a [`UnivNetModel`]. It is used to instantiate a UnivNet vocoder model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the UnivNet [dg845/univnet-dev](https://huggingface.co/dg845/univnet-dev) architecture, which corresponds to the 'c32' architecture in [maum-ai/univnet](https://github.com/maum-ai/univnet/blob/master/config/default_c32.yaml). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: model_in_channels (`int`, *optional*, defaults to 64): The number of input channels for the UnivNet residual network. This should correspond to `noise_sequence.shape[1]` and the value used in the [`UnivNetFeatureExtractor`] class. model_hidden_channels (`int`, *optional*, defaults to 32): The number of hidden channels of each residual block in the UnivNet residual network. num_mel_bins (`int`, *optional*, defaults to 100): The number of frequency bins in the conditioning log-mel spectrogram. This should correspond to the value used in the [`UnivNetFeatureExtractor`] class. resblock_kernel_sizes (`tuple[int]` or `list[int]`, *optional*, defaults to `[3, 3, 3]`): A tuple of integers defining the kernel sizes of the 1D convolutional layers in the UnivNet residual network. The length of `resblock_kernel_sizes` defines the number of resnet blocks and should match that of `resblock_stride_sizes` and `resblock_dilation_sizes`. resblock_stride_sizes (`tuple[int]` or `list[int]`, *optional*, defaults to `[8, 8, 4]`): A tuple of integers defining the stride sizes of the 1D convolutional layers in the UnivNet residual network. The length of `resblock_stride_sizes` should match that of `resblock_kernel_sizes` and `resblock_dilation_sizes`. resblock_dilation_sizes (`tuple[tuple[int]]` or `list[list[int]]`, *optional*, defaults to `[[1, 3, 9, 27], [1, 3, 9, 27], [1, 3, 9, 27]]`): A nested tuple of integers defining the dilation rates of the dilated 1D convolutional layers in the UnivNet residual network. The length of `resblock_dilation_sizes` should match that of `resblock_kernel_sizes` and `resblock_stride_sizes`. The length of each nested list in `resblock_dilation_sizes` defines the number of convolutional layers per resnet block. kernel_predictor_num_blocks (`int`, *optional*, defaults to 3): The number of residual blocks in the kernel predictor network, which calculates the kernel and bias for each location variable convolution layer in the UnivNet residual network. 
kernel_predictor_hidden_channels (`int`, *optional*, defaults to 64): The number of hidden channels for each residual block in the kernel predictor network. kernel_predictor_conv_size (`int`, *optional*, defaults to 3): The kernel size of each 1D convolutional layer in the kernel predictor network. kernel_predictor_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for each residual block in the kernel predictor network. initializer_range (`float`, *optional*, defaults to 0.01): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. leaky_relu_slope (`float`, *optional*, defaults to 0.2): The angle of the negative slope used by the leaky ReLU activation. Example: ```python >>> from transformers import UnivNetModel, UnivNetConfig >>> # Initializing a Tortoise TTS style configuration >>> configuration = UnivNetConfig() >>> # Initializing a model (with random weights) from the Tortoise TTS style configuration >>> model = UnivNetModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config ``` """ model_type = "univnet" def __init__( self, model_in_channels=64, model_hidden_channels=32, num_mel_bins=100, resblock_kernel_sizes=[3, 3, 3], resblock_stride_sizes=[8, 8, 4], resblock_dilation_sizes=[[1, 3, 9, 27], [1, 3, 9, 27], [1, 3, 9, 27]], kernel_predictor_num_blocks=3, kernel_predictor_hidden_channels=64, kernel_predictor_conv_size=3, kernel_predictor_dropout=0.0, initializer_range=0.01, leaky_relu_slope=0.2, **kwargs, ): if not (len(resblock_kernel_sizes) == len(resblock_stride_sizes) == len(resblock_dilation_sizes)): raise ValueError( "`resblock_kernel_sizes`, `resblock_stride_sizes`, and `resblock_dilation_sizes` must all have the" " same length (which will be the number of resnet blocks in the model)." ) self.model_in_channels = model_in_channels self.model_hidden_channels = model_hidden_channels self.num_mel_bins = num_mel_bins self.resblock_kernel_sizes = resblock_kernel_sizes self.resblock_stride_sizes = resblock_stride_sizes self.resblock_dilation_sizes = resblock_dilation_sizes self.kernel_predictor_num_blocks = kernel_predictor_num_blocks self.kernel_predictor_hidden_channels = kernel_predictor_hidden_channels self.kernel_predictor_conv_size = kernel_predictor_conv_size self.kernel_predictor_dropout = kernel_predictor_dropout self.initializer_range = initializer_range self.leaky_relu_slope = leaky_relu_slope super().__init__(**kwargs) __all__ = ["UnivNetConfig"]
transformers/src/transformers/models/univnet/configuration_univnet.py/0
{ "file_path": "transformers/src/transformers/models/univnet/configuration_univnet.py", "repo_id": "transformers", "token_count": 2469 }
551
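Because `resblock_kernel_sizes`, `resblock_stride_sizes` and `resblock_dilation_sizes` jointly define the number of resnet blocks, their lengths must agree, and mismatches fail at construction time. A short usage sketch of that check (assumes a transformers install that ships UnivNet):

from transformers import UnivNetConfig

config = UnivNetConfig()  # default 'c32'-style setup with three resnet blocks
assert len(config.resblock_kernel_sizes) == len(config.resblock_stride_sizes) == 3

try:
    UnivNetConfig(resblock_kernel_sizes=[3, 3], resblock_stride_sizes=[8, 8, 4])
except ValueError as err:
    print(err)  # lengths 2 vs 3 are rejected up front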
# Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import torch from huggingface_hub import hf_hub_download from transformers import ( AddedToken, AutoConfig, AutoTokenizer, CLIPImageProcessor, LlavaProcessor, VipLlavaConfig, VipLlavaForConditionalGeneration, ) KEYS_TO_MODIFY_MAPPING = { "model.vision_tower.": "", "model.mm_projector": "multi_modal_projector", "model": "model.model", "vision_model.model": "vision_model", "lm_head": "language_model.lm_head", "model.model": "language_model.model", "multi_modal_projector.0": "multi_modal_projector.linear_1", "multi_modal_projector.2": "multi_modal_projector.linear_2", "final_linear.0": "linear_1", "final_linear.2": "linear_2", "multi_modal_projector.clip_layernorm": "multi_modal_projector.projector_layernorm", } # Copied from transformers.models.llava.convert_llava_weights_to_hf.convert_state_dict_to_hf def convert_state_dict_to_hf(state_dict): new_state_dict = {} for key, value in state_dict.items(): if key.endswith(".inv_freq"): continue for key_to_modify, new_key in KEYS_TO_MODIFY_MAPPING.items(): if key_to_modify in key: key = key.replace(key_to_modify, new_key) new_state_dict[key] = value return new_state_dict def convert_vipllava_llama_to_hf(text_model_id, vision_model_id, output_hub_path, old_state_dict_id): torch.set_default_dtype(torch.float16) text_config = AutoConfig.from_pretrained(text_model_id) tokenizer = AutoTokenizer.from_pretrained(text_model_id) tokenizer.add_tokens(AddedToken("<image>", special=True, normalized=False), special_tokens=True) tokenizer.add_special_tokens({"pad_token": "<pad>"}) image_processor = CLIPImageProcessor.from_pretrained(vision_model_id) processor = LlavaProcessor(tokenizer=tokenizer, image_processor=image_processor) config = VipLlavaConfig(text_config=text_config) config.pad_token_id = 32001 with torch.device("meta"): model = VipLlavaForConditionalGeneration(config) # Pad to 64 for performance reasons pad_shape = 64 state_dict_path = hf_hub_download(old_state_dict_id, "model_state_dict_7b.bin") state_dict = torch.load(state_dict_path, map_location="cpu", weights_only=True) state_dict = convert_state_dict_to_hf(state_dict) model.load_state_dict(state_dict, strict=True, assign=True) pre_expansion_embeddings = model.language_model.model.embed_tokens.weight.data mu = torch.mean(pre_expansion_embeddings, dim=0).float() n = pre_expansion_embeddings.size()[0] sigma = ((pre_expansion_embeddings - mu).T @ (pre_expansion_embeddings - mu)) / n dist = torch.distributions.multivariate_normal.MultivariateNormal(mu, covariance_matrix=1e-5 * sigma) # We add an image token so we resize the model model.resize_token_embeddings(config.text_config.vocab_size + 2, pad_shape) model.language_model.model.embed_tokens.weight.data[32000:] = torch.stack( tuple(dist.sample() for _ in range(model.language_model.model.embed_tokens.weight.data[32000:].shape[0])), dim=0, ) model.language_model.lm_head.weight.data[32000:] = torch.stack( tuple(dist.sample() for _ in 
range(model.language_model.lm_head.weight.data[32000:].shape[0])), dim=0, ) model.push_to_hub(output_hub_path) processor.push_to_hub(output_hub_path) def main(): parser = argparse.ArgumentParser() parser.add_argument( "--text_model_id", help="Hub location of the text model", ) parser.add_argument( "--vision_model_id", help="Hub location of the vision model", ) parser.add_argument( "--output_hub_path", help="Location on the hub of the converted model", ) parser.add_argument( "--old_state_dict_id", help="Location on the hub of the raw state dict of the original model. The filename needs to be `model_state_dict.bin`", ) args = parser.parse_args() convert_vipllava_llama_to_hf( args.text_model_id, args.vision_model_id, args.output_hub_path, args.old_state_dict_id ) if __name__ == "__main__": main()
transformers/src/transformers/models/vipllava/convert_vipllava_weights_to_hf.py/0
{ "file_path": "transformers/src/transformers/models/vipllava/convert_vipllava_weights_to_hf.py", "repo_id": "transformers", "token_count": 1893 }
552
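The embedding expansion in the VipLlava conversion above fills the newly added token rows by sampling from a multivariate normal fitted to the existing embedding matrix (its mean plus a down-scaled covariance) instead of leaving them at the default initialization. A standalone sketch of that recipe on a toy embedding matrix (sizes and values are illustrative):

import torch

old_embeddings = torch.randn(100, 16)  # toy pre-expansion embedding matrix
num_new_tokens = 2

mu = old_embeddings.mean(dim=0)
n = old_embeddings.size(0)
sigma = ((old_embeddings - mu).T @ (old_embeddings - mu)) / n
dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=1e-5 * sigma)

# Draw one sample per new token and append them to the matrix
new_rows = torch.stack(tuple(dist.sample() for _ in range(num_new_tokens)), dim=0)
expanded = torch.cat([old_embeddings, new_rows], dim=0)
assert expanded.shape == (100 + num_new_tokens, 16)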
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert VisualBert checkpoint.""" import argparse from collections import OrderedDict from pathlib import Path import torch from transformers import ( VisualBertConfig, VisualBertForMultipleChoice, VisualBertForPreTraining, VisualBertForQuestionAnswering, VisualBertForVisualReasoning, ) from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger(__name__) rename_keys_prefix = [ ("bert.bert", "visual_bert"), ("bert.cls", "cls"), ("bert.classifier", "cls"), ("token_type_embeddings_visual", "visual_token_type_embeddings"), ("position_embeddings_visual", "visual_position_embeddings"), ("projection", "visual_projection"), ] ACCEPTABLE_CHECKPOINTS = [ "nlvr2_coco_pre_trained.th", "nlvr2_fine_tuned.th", "nlvr2_pre_trained.th", "vcr_coco_pre_train.th", "vcr_fine_tune.th", "vcr_pre_train.th", "vqa_coco_pre_trained.th", "vqa_fine_tuned.th", "vqa_pre_trained.th", ] def load_state_dict(checkpoint_path): sd = torch.load(checkpoint_path, map_location="cpu", weights_only=True) return sd def get_new_dict(d, config, rename_keys_prefix=rename_keys_prefix): new_d = OrderedDict() new_d["visual_bert.embeddings.position_ids"] = torch.arange(config.max_position_embeddings).expand((1, -1)) # detector_d = OrderedDict() for key in d: if "detector" in key: # detector_d[key.replace('detector.','')] = d[key] continue new_key = key for name_pair in rename_keys_prefix: new_key = new_key.replace(name_pair[0], name_pair[1]) new_d[new_key] = d[key] if key == "bert.cls.predictions.decoder.weight": # Old bert code didn't have `decoder.bias`, but was added separately new_d["cls.predictions.decoder.bias"] = new_d["cls.predictions.bias"] return new_d @torch.no_grad() def convert_visual_bert_checkpoint(checkpoint_path, pytorch_dump_folder_path): """ Copy/paste/tweak model's weights to our VisualBERT structure. """ assert checkpoint_path.split("/")[-1] in ACCEPTABLE_CHECKPOINTS, ( f"The checkpoint provided must be in {ACCEPTABLE_CHECKPOINTS}." 
) # Get Config if "pre" in checkpoint_path: model_type = "pretraining" if "vcr" in checkpoint_path: config_params = {"visual_embedding_dim": 512} elif "vqa_advanced" in checkpoint_path: config_params = {"visual_embedding_dim": 2048} elif "vqa" in checkpoint_path: config_params = {"visual_embedding_dim": 2048} elif "nlvr" in checkpoint_path: config_params = {"visual_embedding_dim": 1024} else: raise NotImplementedError(f"No implementation found for `{checkpoint_path}`.") else: if "vcr" in checkpoint_path: config_params = {"visual_embedding_dim": 512} model_type = "multichoice" elif "vqa_advanced" in checkpoint_path: config_params = {"visual_embedding_dim": 2048} model_type = "vqa_advanced" elif "vqa" in checkpoint_path: config_params = {"visual_embedding_dim": 2048, "num_labels": 3129} model_type = "vqa" elif "nlvr" in checkpoint_path: config_params = { "visual_embedding_dim": 1024, "num_labels": 2, } model_type = "nlvr" config = VisualBertConfig(**config_params) # Load State Dict state_dict = load_state_dict(checkpoint_path) new_state_dict = get_new_dict(state_dict, config) if model_type == "pretraining": model = VisualBertForPreTraining(config) elif model_type == "vqa": model = VisualBertForQuestionAnswering(config) elif model_type == "nlvr": model = VisualBertForVisualReasoning(config) elif model_type == "multichoice": model = VisualBertForMultipleChoice(config) model.load_state_dict(new_state_dict) # Save Checkpoints Path(pytorch_dump_folder_path).mkdir(exist_ok=True) model.save_pretrained(pytorch_dump_folder_path) if __name__ == "__main__": parser = argparse.ArgumentParser() # Required parameters parser.add_argument("orig_checkpoint_path", type=str, help="A path to .th on local filesystem.") parser.add_argument("pytorch_dump_folder_path", type=str, help="Path to the output PyTorch model.") args = parser.parse_args() convert_visual_bert_checkpoint(args.orig_checkpoint_path, args.pytorch_dump_folder_path)
transformers/src/transformers/models/visual_bert/convert_visual_bert_original_pytorch_checkpoint_to_pytorch.py/0
{ "file_path": "transformers/src/transformers/models/visual_bert/convert_visual_bert_original_pytorch_checkpoint_to_pytorch.py", "repo_id": "transformers", "token_count": 2122 }
553
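get_new_dict above simply walks the original checkpoint, drops any detector weights, and rewrites the remaining keys with the `rename_keys_prefix` substitution pairs. A toy sketch of that renaming pass (the checkpoint keys here are illustrative only):

from collections import OrderedDict

import torch

rename_pairs = [("bert.bert", "visual_bert"), ("bert.cls", "cls")]
d = OrderedDict(
    {
        "bert.bert.embeddings.word_embeddings.weight": torch.randn(4, 4),
        "detector.backbone.weight": torch.randn(4, 4),
    }
)

new_d = OrderedDict()
for key in d:
    if "detector" in key:
        continue  # detector weights are dropped entirely
    new_key = key
    for old, new in rename_pairs:
        new_key = new_key.replace(old, new)
    new_d[new_key] = d[key]

assert "visual_bert.embeddings.word_embeddings.weight" in new_d
assert all("detector" not in k for k in new_d)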
# coding=utf-8 # Copyright 2022 Facebook AI and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch ViT MAE (masked autoencoder) model.""" import collections.abc from copy import deepcopy from dataclasses import dataclass from typing import Callable, Optional, Union import numpy as np import torch import torch.utils.checkpoint from torch import nn from ...activations import ACT2FN from ...modeling_layers import GradientCheckpointingLayer from ...modeling_outputs import BaseModelOutput from ...modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel from ...pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer from ...utils import ModelOutput, auto_docstring, logging, torch_int from .configuration_vit_mae import ViTMAEConfig logger = logging.get_logger(__name__) @dataclass @auto_docstring( custom_intro=""" Class for ViTMAEModel's outputs, with potential hidden states and attentions. """ ) class ViTMAEModelOutput(ModelOutput): r""" mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Tensor indicating which patches are masked (1) and which are not (0). ids_restore (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Tensor containing the original index of the (shuffled) masked patches. """ last_hidden_state: Optional[torch.FloatTensor] = None mask: Optional[torch.LongTensor] = None ids_restore: Optional[torch.LongTensor] = None hidden_states: Optional[tuple[torch.FloatTensor]] = None attentions: Optional[tuple[torch.FloatTensor]] = None @dataclass @auto_docstring( custom_intro=""" Class for ViTMAEDecoder's outputs, with potential hidden states and attentions. """ ) class ViTMAEDecoderOutput(ModelOutput): r""" logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, patch_size ** 2 * num_channels)`): Pixel reconstruction logits. """ logits: Optional[torch.FloatTensor] = None hidden_states: Optional[tuple[torch.FloatTensor]] = None attentions: Optional[tuple[torch.FloatTensor]] = None @dataclass @auto_docstring( custom_intro=""" Class for ViTMAEForPreTraining's outputs, with potential hidden states and attentions. """ ) class ViTMAEForPreTrainingOutput(ModelOutput): r""" loss (`torch.FloatTensor` of shape `(1,)`): Pixel reconstruction loss. logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, patch_size ** 2 * num_channels)`): Pixel reconstruction logits. mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Tensor indicating which patches are masked (1) and which are not (0). ids_restore (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Tensor containing the original index of the (shuffled) masked patches. 
""" loss: Optional[torch.FloatTensor] = None logits: Optional[torch.FloatTensor] = None mask: Optional[torch.LongTensor] = None ids_restore: Optional[torch.LongTensor] = None hidden_states: Optional[tuple[torch.FloatTensor]] = None attentions: Optional[tuple[torch.FloatTensor]] = None def get_2d_sincos_pos_embed(embed_dim, grid_size, add_cls_token=False): """ Create 2D sin/cos positional embeddings. Args: embed_dim (`int`): Embedding dimension. grid_size (`int`): The grid height and width. add_cls_token (`bool`, *optional*, defaults to `False`): Whether or not to add a classification (CLS) token. Returns: (`torch.FloatTensor` of shape (grid_size*grid_size, embed_dim) or (1+grid_size*grid_size, embed_dim): the position embeddings (with or without classification token) """ grid_h = np.arange(grid_size, dtype=np.float32) grid_w = np.arange(grid_size, dtype=np.float32) grid = np.meshgrid(grid_w, grid_h) # here w goes first grid = np.stack(grid, axis=0) grid = grid.reshape([2, 1, grid_size, grid_size]) pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) if add_cls_token: pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0) return pos_embed def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): if embed_dim % 2 != 0: raise ValueError("embed_dim must be even") # use half of dimensions to encode grid_h emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) return emb def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): """ embed_dim: output dimension for each position pos: a list of positions to be encoded: size (M,) out: (M, D) """ if embed_dim % 2 != 0: raise ValueError("embed_dim must be even") omega = np.arange(embed_dim // 2, dtype=float) omega /= embed_dim / 2.0 omega = 1.0 / 10000**omega # (D/2,) pos = pos.reshape(-1) # (M,) out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product emb_sin = np.sin(out) # (M, D/2) emb_cos = np.cos(out) # (M, D/2) emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) return emb class ViTMAEEmbeddings(nn.Module): """ Construct the CLS token, position and patch embeddings. """ def __init__(self, config): super().__init__() self.cls_token = nn.Parameter(torch.zeros(1, 1, config.hidden_size)) self.patch_embeddings = ViTMAEPatchEmbeddings(config) self.num_patches = self.patch_embeddings.num_patches # fixed sin-cos embedding self.position_embeddings = nn.Parameter( torch.zeros(1, self.num_patches + 1, config.hidden_size), requires_grad=False ) self.patch_size = config.patch_size self.config = config def initialize_weights(self): # initialize (and freeze) position embeddings by sin-cos embedding pos_embed = get_2d_sincos_pos_embed( self.position_embeddings.shape[-1], int(self.patch_embeddings.num_patches**0.5), add_cls_token=True ) self.position_embeddings.data.copy_(torch.from_numpy(pos_embed).float().unsqueeze(0)) # initialize patch_embeddings like nn.Linear (instead of nn.Conv2d) w = self.patch_embeddings.projection.weight.data torch.nn.init.xavier_uniform_(w.view([w.shape[0], -1])) # timm's trunc_normal_(std=.02) is effectively normal_(std=0.02) as cutoff is too big (2.) 
torch.nn.init.normal_(self.cls_token, std=self.config.initializer_range) # Copied from transformers.models.vit.modeling_vit.ViTEmbeddings.interpolate_pos_encoding def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor: """ This method allows to interpolate the pre-trained position encodings, to be able to use the model on higher resolution images. This method is also adapted to support torch.jit tracing. Adapted from: - https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174-L194, and - https://github.com/facebookresearch/dinov2/blob/e1277af2ba9496fbadf7aec6eba56e8d882d1e35/dinov2/models/vision_transformer.py#L179-L211 """ num_patches = embeddings.shape[1] - 1 num_positions = self.position_embeddings.shape[1] - 1 # always interpolate when tracing to ensure the exported model works for dynamic input shapes if not torch.jit.is_tracing() and num_patches == num_positions and height == width: return self.position_embeddings class_pos_embed = self.position_embeddings[:, :1] patch_pos_embed = self.position_embeddings[:, 1:] dim = embeddings.shape[-1] new_height = height // self.patch_size new_width = width // self.patch_size sqrt_num_positions = torch_int(num_positions**0.5) patch_pos_embed = patch_pos_embed.reshape(1, sqrt_num_positions, sqrt_num_positions, dim) patch_pos_embed = patch_pos_embed.permute(0, 3, 1, 2) patch_pos_embed = nn.functional.interpolate( patch_pos_embed, size=(new_height, new_width), mode="bicubic", align_corners=False, ) patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) return torch.cat((class_pos_embed, patch_pos_embed), dim=1) def random_masking(self, sequence, noise=None): """ Perform per-sample random masking by per-sample shuffling. Per-sample shuffling is done by argsort random noise. 
Args: sequence (`torch.LongTensor` of shape `(batch_size, sequence_length, dim)`) noise (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) which is mainly used for testing purposes to control randomness and maintain the reproducibility """ batch_size, seq_length, dim = sequence.shape len_keep = int(seq_length * (1 - self.config.mask_ratio)) if noise is None: noise = torch.rand(batch_size, seq_length, device=sequence.device) # noise in [0, 1] # sort noise for each sample ids_shuffle = torch.argsort(noise, dim=1).to(sequence.device) # ascend: small is keep, large is remove ids_restore = torch.argsort(ids_shuffle, dim=1).to(sequence.device) # keep the first subset ids_keep = ids_shuffle[:, :len_keep] sequence_unmasked = torch.gather(sequence, dim=1, index=ids_keep.unsqueeze(-1).repeat(1, 1, dim)) # generate the binary mask: 0 is keep, 1 is remove mask = torch.ones([batch_size, seq_length], device=sequence.device) mask[:, :len_keep] = 0 # unshuffle to get the binary mask mask = torch.gather(mask, dim=1, index=ids_restore) return sequence_unmasked, mask, ids_restore def forward(self, pixel_values, noise=None, interpolate_pos_encoding: bool = False): batch_size, num_channels, height, width = pixel_values.shape embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding) if interpolate_pos_encoding: position_embeddings = self.interpolate_pos_encoding(embeddings, height, width) else: position_embeddings = self.position_embeddings # add position embeddings w/o cls token embeddings = embeddings + position_embeddings[:, 1:, :] # masking: length -> length * config.mask_ratio embeddings, mask, ids_restore = self.random_masking(embeddings, noise) # append cls token cls_token = self.cls_token + position_embeddings[:, :1, :] cls_tokens = cls_token.expand(embeddings.shape[0], -1, -1) embeddings = torch.cat((cls_tokens, embeddings), dim=1) return embeddings, mask, ids_restore class ViTMAEPatchEmbeddings(nn.Module): """ This class turns `pixel_values` of shape `(batch_size, num_channels, height, width)` into the initial `hidden_states` (patch embeddings) of shape `(batch_size, seq_length, hidden_size)` to be consumed by a Transformer. """ def __init__(self, config): super().__init__() image_size, patch_size = config.image_size, config.patch_size num_channels, hidden_size = config.num_channels, config.hidden_size image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size) patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size) num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.num_patches = num_patches self.projection = nn.Conv2d(num_channels, hidden_size, kernel_size=patch_size, stride=patch_size) def forward(self, pixel_values, interpolate_pos_encoding: bool = False): batch_size, num_channels, height, width = pixel_values.shape if num_channels != self.num_channels: raise ValueError( "Make sure that the channel dimension of the pixel values match with the one set in the configuration." ) if not interpolate_pos_encoding and (height != self.image_size[0] or width != self.image_size[1]): raise ValueError( f"Input image size ({height}*{width}) doesn't match model ({self.image_size[0]}*{self.image_size[1]})." 
) x = self.projection(pixel_values).flatten(2).transpose(1, 2) return x # Copied from transformers.models.vit.modeling_vit.eager_attention_forward def eager_attention_forward( module: nn.Module, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, attention_mask: Optional[torch.Tensor], scaling: float, dropout: float = 0.0, **kwargs, ): # Take the dot product between "query" and "key" to get the raw attention scores. attn_weights = torch.matmul(query, key.transpose(-1, -2)) * scaling # Normalize the attention scores to probabilities. attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype) # This is actually dropping out entire tokens to attend to, which might # seem a bit unusual, but is taken from the original Transformer paper. attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training) # Mask heads if we want to if attention_mask is not None: attn_weights = attn_weights * attention_mask attn_output = torch.matmul(attn_weights, value) attn_output = attn_output.transpose(1, 2).contiguous() return attn_output, attn_weights # Copied from transformers.models.vit.modeling_vit.ViTSelfAttention ViT->ViTMAE class ViTMAESelfAttention(nn.Module): def __init__(self, config: ViTMAEConfig) -> None: super().__init__() if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): raise ValueError( f"The hidden size {config.hidden_size} is not a multiple of the number of attention " f"heads {config.num_attention_heads}." ) self.config = config self.num_attention_heads = config.num_attention_heads self.attention_head_size = int(config.hidden_size / config.num_attention_heads) self.all_head_size = self.num_attention_heads * self.attention_head_size self.dropout_prob = config.attention_probs_dropout_prob self.scaling = self.attention_head_size**-0.5 self.is_causal = False self.query = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias) def forward( self, hidden_states, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: batch_size, seq_length, _ = hidden_states.shape key_layer = ( self.key(hidden_states) .view(batch_size, -1, self.num_attention_heads, self.attention_head_size) .transpose(1, 2) ) value_layer = ( self.value(hidden_states) .view(batch_size, -1, self.num_attention_heads, self.attention_head_size) .transpose(1, 2) ) query_layer = ( self.query(hidden_states) .view(batch_size, -1, self.num_attention_heads, self.attention_head_size) .transpose(1, 2) ) attention_interface: Callable = eager_attention_forward if self.config._attn_implementation != "eager": if self.config._attn_implementation == "sdpa" and output_attentions: logger.warning_once( "`torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to " 'eager attention. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.' 
) else: attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation] context_layer, attention_probs = attention_interface( self, query_layer, key_layer, value_layer, head_mask, is_causal=self.is_causal, scaling=self.scaling, dropout=0.0 if not self.training else self.dropout_prob, ) new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) context_layer = context_layer.reshape(new_context_layer_shape) outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) return outputs # Copied from transformers.models.vit.modeling_vit.ViTSelfOutput with ViT->ViTMAE class ViTMAESelfOutput(nn.Module): """ The residual connection is defined in ViTMAELayer instead of here (as is the case with other models), due to the layernorm applied before each block. """ def __init__(self, config: ViTMAEConfig) -> None: super().__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) self.dropout = nn.Dropout(config.hidden_dropout_prob) def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states # Copied from transformers.models.vit.modeling_vit.ViTAttention with ViT->ViTMAE class ViTMAEAttention(nn.Module): def __init__(self, config: ViTMAEConfig) -> None: super().__init__() self.attention = ViTMAESelfAttention(config) self.output = ViTMAESelfOutput(config) self.pruned_heads = set() def prune_heads(self, heads: set[int]) -> None: if len(heads) == 0: return heads, index = find_pruneable_heads_and_indices( heads, self.attention.num_attention_heads, self.attention.attention_head_size, self.pruned_heads ) # Prune linear layers self.attention.query = prune_linear_layer(self.attention.query, index) self.attention.key = prune_linear_layer(self.attention.key, index) self.attention.value = prune_linear_layer(self.attention.value, index) self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) # Update hyper params and store pruned heads self.attention.num_attention_heads = self.attention.num_attention_heads - len(heads) self.attention.all_head_size = self.attention.attention_head_size * self.attention.num_attention_heads self.pruned_heads = self.pruned_heads.union(heads) def forward( self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: self_outputs = self.attention(hidden_states, head_mask, output_attentions) attention_output = self.output(self_outputs[0], hidden_states) outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them return outputs # Copied from transformers.models.vit.modeling_vit.ViTIntermediate ViT->ViTMAE class ViTMAEIntermediate(nn.Module): def __init__(self, config: ViTMAEConfig) -> None: super().__init__() self.dense = nn.Linear(config.hidden_size, config.intermediate_size) if isinstance(config.hidden_act, str): self.intermediate_act_fn = ACT2FN[config.hidden_act] else: self.intermediate_act_fn = config.hidden_act def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: hidden_states = self.dense(hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) return hidden_states # Copied from transformers.models.vit.modeling_vit.ViTOutput ViT->ViTMAE class ViTMAEOutput(nn.Module): def __init__(self, config: ViTMAEConfig) -> None: super().__init__() self.dense = nn.Linear(config.intermediate_size, 
config.hidden_size) self.dropout = nn.Dropout(config.hidden_dropout_prob) def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor: hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = hidden_states + input_tensor return hidden_states # Copied from transformers.models.vit.modeling_vit.ViTLayer with ViT->ViTMAE,VIT->VITMAE class ViTMAELayer(GradientCheckpointingLayer): """This corresponds to the Block class in the timm implementation.""" def __init__(self, config: ViTMAEConfig) -> None: super().__init__() self.chunk_size_feed_forward = config.chunk_size_feed_forward self.seq_len_dim = 1 self.attention = ViTMAEAttention(config) self.intermediate = ViTMAEIntermediate(config) self.output = ViTMAEOutput(config) self.layernorm_before = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.layernorm_after = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) def forward( self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> Union[tuple[torch.Tensor, torch.Tensor], tuple[torch.Tensor]]: self_attention_outputs = self.attention( self.layernorm_before(hidden_states), # in ViTMAE, layernorm is applied before self-attention head_mask, output_attentions=output_attentions, ) attention_output = self_attention_outputs[0] outputs = self_attention_outputs[1:] # add self attentions if we output attention weights # first residual connection hidden_states = attention_output + hidden_states # in ViTMAE, layernorm is also applied after self-attention layer_output = self.layernorm_after(hidden_states) layer_output = self.intermediate(layer_output) # second residual connection is done here layer_output = self.output(layer_output, hidden_states) outputs = (layer_output,) + outputs return outputs # Copied from transformers.models.vit.modeling_vit.ViTEncoder with ViT->ViTMAE class ViTMAEEncoder(nn.Module): def __init__(self, config: ViTMAEConfig) -> None: super().__init__() self.config = config self.layer = nn.ModuleList([ViTMAELayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False def forward( self, hidden_states: torch.Tensor, head_mask: Optional[torch.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ) -> Union[tuple, BaseModelOutput]: all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_head_mask = head_mask[i] if head_mask is not None else None layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions) hidden_states = layer_outputs[0] if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) return BaseModelOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_self_attentions, ) @auto_docstring class ViTMAEPreTrainedModel(PreTrainedModel): config: ViTMAEConfig base_model_prefix = "vit" main_input_name = "pixel_values" supports_gradient_checkpointing = True _supports_sdpa = True _supports_flash_attn = True _supports_flex_attn = True _supports_attention_backend = True def 
_init_weights(self, module): """Initialize the weights""" if isinstance(module, (nn.Linear, nn.Conv2d)): # Slightly different from the TF version which uses truncated_normal for initialization # cf https://github.com/pytorch/pytorch/pull/5617 module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) elif isinstance(module, ViTMAEEmbeddings): module.initialize_weights() elif isinstance(module, ViTMAEDecoder): module.mask_token.data.zero_() module.decoder_pos_embed.data.zero_() @auto_docstring class ViTMAEModel(ViTMAEPreTrainedModel): def __init__(self, config): super().__init__(config) self.config = config self.embeddings = ViTMAEEmbeddings(config) self.encoder = ViTMAEEncoder(config) self.layernorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.embeddings.patch_embeddings def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) @auto_docstring def forward( self, pixel_values: Optional[torch.FloatTensor] = None, noise: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, interpolate_pos_encoding: bool = False, ) -> Union[tuple, ViTMAEModelOutput]: r""" noise (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mainly used for testing purposes to control randomness and maintain the reproducibility interpolate_pos_encoding (`bool`, *optional*, default `False`): Whether to interpolate the pre-trained position encodings. This is mainly used to use the model on higher resolution images. 
Examples: ```python >>> from transformers import AutoImageProcessor, ViTMAEModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base") >>> model = ViTMAEModel.from_pretrained("facebook/vit-mae-base") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ```""" output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if pixel_values is None: raise ValueError("You have to specify pixel_values") # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) embedding_output, mask, ids_restore = self.embeddings( pixel_values, noise=noise, interpolate_pos_encoding=interpolate_pos_encoding ) encoder_outputs = self.encoder( embedding_output, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = encoder_outputs[0] sequence_output = self.layernorm(sequence_output) if not return_dict: return (sequence_output, mask, ids_restore) + encoder_outputs[1:] return ViTMAEModelOutput( last_hidden_state=sequence_output, mask=mask, ids_restore=ids_restore, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class ViTMAEDecoder(nn.Module): def __init__(self, config, num_patches): super().__init__() self.decoder_embed = nn.Linear(config.hidden_size, config.decoder_hidden_size, bias=True) self.mask_token = nn.Parameter(torch.zeros(1, 1, config.decoder_hidden_size)) self.decoder_pos_embed = nn.Parameter( torch.zeros(1, num_patches + 1, config.decoder_hidden_size), requires_grad=False ) # fixed sin-cos embedding decoder_config = deepcopy(config) decoder_config.hidden_size = config.decoder_hidden_size decoder_config.num_hidden_layers = config.decoder_num_hidden_layers decoder_config.num_attention_heads = config.decoder_num_attention_heads decoder_config.intermediate_size = config.decoder_intermediate_size self.decoder_layers = nn.ModuleList( [ViTMAELayer(decoder_config) for _ in range(config.decoder_num_hidden_layers)] ) self.decoder_norm = nn.LayerNorm(config.decoder_hidden_size, eps=config.layer_norm_eps) self.decoder_pred = nn.Linear( config.decoder_hidden_size, config.patch_size**2 * config.num_channels, bias=True ) # encoder to decoder self.gradient_checkpointing = False self.config = config self.initialize_weights(num_patches) def interpolate_pos_encoding(self, embeddings: torch.Tensor) -> torch.Tensor: """ This method is a modified version of the interpolation function for ViT-mae model at the decoder, that allows to interpolate the pre-trained decoder position encodings, to be able to use the model on higher resolution images. 
Adapted from: https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174 """ # -1 removes the class dimension since we later append it without interpolation embeddings_positions = embeddings.shape[1] - 1 # Separation of class token and patch tokens class_pos_embed = self.decoder_pos_embed[:, :1] patch_pos_embed = self.decoder_pos_embed[:, 1:] # To retain the final 3d tensor with the required dimensions dim = self.decoder_pos_embed.shape[-1] # Increasing a dimension to enable bicubic interpolation patch_pos_embed = patch_pos_embed.reshape(1, 1, -1, dim) # permute to bring the dimension to be interpolated, to the last patch_pos_embed = patch_pos_embed.permute(0, 3, 1, 2) # Interpolating the decoder position embeddings shape wrt embeddings shape i.e (x). # we keep the second last dimension constant patch_pos_embed = nn.functional.interpolate( patch_pos_embed, size=(patch_pos_embed.shape[-2], embeddings_positions), mode="bicubic", align_corners=False, ) # Converting back to the original shape patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) # Adding the class token back return torch.cat((class_pos_embed, patch_pos_embed), dim=1) def initialize_weights(self, num_patches): # initialize (and freeze) position embeddings by sin-cos embedding decoder_pos_embed = get_2d_sincos_pos_embed( self.decoder_pos_embed.shape[-1], int(num_patches**0.5), add_cls_token=True ) self.decoder_pos_embed.data.copy_(torch.from_numpy(decoder_pos_embed).float().unsqueeze(0)) # timm's trunc_normal_(std=.02) is effectively normal_(std=0.02) as cutoff is too big (2.) torch.nn.init.normal_(self.mask_token, std=self.config.initializer_range) def forward( self, hidden_states, ids_restore, output_attentions=False, output_hidden_states=False, return_dict=True, interpolate_pos_encoding: bool = False, ): # embed tokens x = self.decoder_embed(hidden_states) # append mask tokens to sequence mask_tokens = self.mask_token.repeat(x.shape[0], ids_restore.shape[1] + 1 - x.shape[1], 1) x_ = torch.cat([x[:, 1:, :], mask_tokens], dim=1) # no cls token # unshuffle x_ = torch.gather(x_, dim=1, index=ids_restore.unsqueeze(-1).repeat(1, 1, x.shape[2]).to(x_.device)) x = torch.cat([x[:, :1, :], x_], dim=1) # append cls token # add pos embed if interpolate_pos_encoding: decoder_pos_embed = self.interpolate_pos_encoding(x) else: decoder_pos_embed = self.decoder_pos_embed hidden_states = x + decoder_pos_embed # apply Transformer layers (blocks) all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None for i, layer_module in enumerate(self.decoder_layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_outputs = layer_module(hidden_states, head_mask=None, output_attentions=output_attentions) hidden_states = layer_outputs[0] if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) hidden_states = self.decoder_norm(hidden_states) # predictor projection logits = self.decoder_pred(hidden_states) # remove cls token logits = logits[:, 1:, :] if not return_dict: return tuple(v for v in [logits, all_hidden_states, all_self_attentions] if v is not None) return ViTMAEDecoderOutput( logits=logits, hidden_states=all_hidden_states, attentions=all_self_attentions, ) @auto_docstring( custom_intro=""" The ViTMAE Model transformer with the decoder on top for self-supervised pre-training. 
<Tip> Note that we provide a script to pre-train this model on custom data in our [examples directory](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining). </Tip> """ ) class ViTMAEForPreTraining(ViTMAEPreTrainedModel): def __init__(self, config): super().__init__(config) self.config = config self.vit = ViTMAEModel(config) self.decoder = ViTMAEDecoder(config, num_patches=self.vit.embeddings.num_patches) # Initialize weights and apply final processing self.post_init() def get_input_embeddings(self): return self.vit.embeddings.patch_embeddings def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) def patchify(self, pixel_values, interpolate_pos_encoding: bool = False): """ Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. interpolate_pos_encoding (`bool`, *optional*, default `False`): interpolation flag passed during the forward pass. Returns: `torch.FloatTensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`: Patchified pixel values. """ patch_size, num_channels = self.config.patch_size, self.config.num_channels # sanity checks if not interpolate_pos_encoding and ( pixel_values.shape[2] != pixel_values.shape[3] or pixel_values.shape[2] % patch_size != 0 ): raise ValueError("Make sure the pixel values have a squared size that is divisible by the patch size") if pixel_values.shape[1] != num_channels: raise ValueError( "Make sure the number of channels of the pixel values is equal to the one set in the configuration" ) # patchify batch_size = pixel_values.shape[0] num_patches_h = pixel_values.shape[2] // patch_size num_patches_w = pixel_values.shape[3] // patch_size patchified_pixel_values = pixel_values.reshape( batch_size, num_channels, num_patches_h, patch_size, num_patches_w, patch_size ) patchified_pixel_values = torch.einsum("nchpwq->nhwpqc", patchified_pixel_values) patchified_pixel_values = patchified_pixel_values.reshape( batch_size, num_patches_h * num_patches_w, patch_size**2 * num_channels ) return patchified_pixel_values def unpatchify(self, patchified_pixel_values, original_image_size: Optional[tuple[int, int]] = None): """ Args: patchified_pixel_values (`torch.FloatTensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`: Patchified pixel values. original_image_size (`tuple[int, int]`, *optional*): Original image size. Returns: `torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`: Pixel values. 
""" patch_size, num_channels = self.config.patch_size, self.config.num_channels original_image_size = ( original_image_size if original_image_size is not None else (self.config.image_size, self.config.image_size) ) original_height, original_width = original_image_size num_patches_h = original_height // patch_size num_patches_w = original_width // patch_size # sanity check if num_patches_h * num_patches_w != patchified_pixel_values.shape[1]: raise ValueError( f"The number of patches in the patchified pixel values {patchified_pixel_values.shape[1]}, does not match the number of patches on original image {num_patches_h}*{num_patches_w}" ) # unpatchify batch_size = patchified_pixel_values.shape[0] patchified_pixel_values = patchified_pixel_values.reshape( batch_size, num_patches_h, num_patches_w, patch_size, patch_size, num_channels, ) patchified_pixel_values = torch.einsum("nhwpqc->nchpwq", patchified_pixel_values) pixel_values = patchified_pixel_values.reshape( batch_size, num_channels, num_patches_h * patch_size, num_patches_w * patch_size, ) return pixel_values def forward_loss(self, pixel_values, pred, mask, interpolate_pos_encoding: bool = False): """ Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Pixel values. pred (`torch.FloatTensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`: Predicted pixel values. mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Tensor indicating which patches are masked (1) and which are not (0). interpolate_pos_encoding (`bool`, *optional*, default `False`): interpolation flag passed during the forward pass. Returns: `torch.FloatTensor`: Pixel reconstruction loss. """ target = self.patchify(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding) if self.config.norm_pix_loss: mean = target.mean(dim=-1, keepdim=True) var = target.var(dim=-1, keepdim=True) target = (target - mean) / (var + 1.0e-6) ** 0.5 loss = (pred - target) ** 2 loss = loss.mean(dim=-1) # [N, L], mean loss per patch loss = (loss * mask).sum() / mask.sum() # mean loss on removed patches return loss @auto_docstring def forward( self, pixel_values: Optional[torch.FloatTensor] = None, noise: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, interpolate_pos_encoding: bool = False, ) -> Union[tuple, ViTMAEForPreTrainingOutput]: r""" noise (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): Mainly used for testing purposes to control randomness and maintain the reproducibility interpolate_pos_encoding (`bool`, *optional*, default `False`): Whether to interpolate the pre-trained position encodings. This is mainly used to use the model on higher resolution images. 
Examples: ```python >>> from transformers import AutoImageProcessor, ViTMAEForPreTraining >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base") >>> model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> loss = outputs.loss >>> mask = outputs.mask >>> ids_restore = outputs.ids_restore ```""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.vit( pixel_values, noise=noise, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, interpolate_pos_encoding=interpolate_pos_encoding, ) latent = outputs.last_hidden_state ids_restore = outputs.ids_restore mask = outputs.mask decoder_outputs = self.decoder(latent, ids_restore, interpolate_pos_encoding=interpolate_pos_encoding) logits = decoder_outputs.logits # shape (batch_size, num_patches, patch_size*patch_size*num_channels) loss = self.forward_loss(pixel_values, logits, mask, interpolate_pos_encoding=interpolate_pos_encoding) if not return_dict: output = (logits, mask, ids_restore) + outputs[2:] return ((loss,) + output) if loss is not None else output return ViTMAEForPreTrainingOutput( loss=loss, logits=logits, mask=mask, ids_restore=ids_restore, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) __all__ = ["ViTMAEForPreTraining", "ViTMAELayer", "ViTMAEModel", "ViTMAEPreTrainedModel"]
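Taken together with the `patchify`/`unpatchify` helpers above, the pre-training head can be exercised end to end. Below is a minimal, hedged usage sketch (not part of the original modeling file); it assumes the public `facebook/vit-mae-base` checkpoint already referenced in the example docstring and network access to the sample COCO image.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch_size, num_patches, patch_size**2 * num_channels);
# unpatchify folds them back into an image tensor of shape
# (batch_size, num_channels, height, width)
reconstruction = model.unpatchify(outputs.logits)

# outputs.mask is (batch_size, num_patches) with 1 for masked patches, so only
# those positions of `reconstruction` are patches the model actually had to predict
print(outputs.loss.item(), reconstruction.shape, int(outputs.mask.sum()))
```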
transformers/src/transformers/models/vit_mae/modeling_vit_mae.py/0
{ "file_path": "transformers/src/transformers/models/vit_mae/modeling_vit_mae.py", "repo_id": "transformers", "token_count": 19185 }
554
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert VitPose checkpoints from the original repository. URL: https://github.com/vitae-transformer/vitpose Notebook to get the original logits: https://colab.research.google.com/drive/1QDX_2POTpl6JaZAV2WIFjuiqDsDwiqMZ?usp=sharing. """ import argparse import os import re from typing import Optional import requests import torch from huggingface_hub import hf_hub_download from PIL import Image from transformers import VitPoseBackboneConfig, VitPoseConfig, VitPoseForPoseEstimation, VitPoseImageProcessor ORIGINAL_TO_CONVERTED_KEY_MAPPING = { r"patch_embed.proj": "embeddings.patch_embeddings.projection", r"pos_embed": "embeddings.position_embeddings", r"blocks": "encoder.layer", r"attn.proj": "attention.output.dense", r"attn": "attention.self", r"norm1": "layernorm_before", r"norm2": "layernorm_after", r"last_norm": "layernorm", r"keypoint_head": "head", r"final_layer": "conv", } MODEL_TO_FILE_NAME_MAPPING = { # VitPose models, simple decoder "vitpose-base-simple": "vitpose-b-simple.pth", # VitPose models, classic decoder "vitpose-base": "vitpose-b.pth", # VitPose models, COCO-AIC-MPII "vitpose-base-coco-aic-mpii": "vitpose_base_coco_aic_mpii.pth", # VitPose+ models "vitpose-plus-small": "vitpose+_small.pth", "vitpose-plus-base": "vitpose+_base.pth", "vitpose-plus-large": "vitpose+_large.pth", "vitpose-plus-huge": "vitpose+_huge.pth", } def get_config(model_name): if "plus" in model_name: num_experts = 6 if "small" in model_name: part_features = 96 out_indices = [12] elif "base" in model_name: part_features = 192 out_indices = [12] elif "large" in model_name: part_features = 256 out_indices = [24] elif "huge" in model_name: part_features = 320 out_indices = [32] else: raise ValueError(f"Model {model_name} not supported") else: num_experts = 1 part_features = 0 # size of the architecture if "small" in model_name: hidden_size = 384 num_hidden_layers = 12 num_attention_heads = 12 elif "large" in model_name: hidden_size = 1024 num_hidden_layers = 24 num_attention_heads = 16 elif "huge" in model_name: hidden_size = 1280 num_hidden_layers = 32 num_attention_heads = 16 backbone_config = VitPoseBackboneConfig( out_indices=out_indices, hidden_size=hidden_size, num_hidden_layers=num_hidden_layers, num_attention_heads=num_attention_heads, num_experts=num_experts, part_features=part_features, ) use_simple_decoder = "simple" in model_name edges = [ [15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11], [6, 12], [5, 6], [5, 7], [6, 8], [7, 9], [8, 10], [1, 2], [0, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6], ] id2label = { 0: "Nose", 1: "L_Eye", 2: "R_Eye", 3: "L_Ear", 4: "R_Ear", 5: "L_Shoulder", 6: "R_Shoulder", 7: "L_Elbow", 8: "R_Elbow", 9: "L_Wrist", 10: "R_Wrist", 11: "L_Hip", 12: "R_Hip", 13: "L_Knee", 14: "R_Knee", 15: "L_Ankle", 16: "R_Ankle", } label2id = {v: k for k, v in id2label.items()} config = VitPoseConfig( backbone_config=backbone_config, num_labels=17, use_simple_decoder=use_simple_decoder, 
edges=edges, id2label=id2label, label2id=label2id, ) return config def convert_old_keys_to_new_keys(state_dict_keys: Optional[dict] = None): """ This function should be applied only once, on the concatenated keys to efficiently rename using the key mappings. """ output_dict = {} if state_dict_keys is not None: old_text = "\n".join(state_dict_keys) new_text = old_text for pattern, replacement in ORIGINAL_TO_CONVERTED_KEY_MAPPING.items(): if replacement is None: new_text = re.sub(pattern, "", new_text) # an empty line continue new_text = re.sub(pattern, replacement, new_text) output_dict = dict(zip(old_text.split("\n"), new_text.split("\n"))) return output_dict # We will verify our results on a COCO image def prepare_img(): url = "http://images.cocodataset.org/val2017/000000000139.jpg" image = Image.open(requests.get(url, stream=True).raw) return image @torch.no_grad() def write_model(model_name, model_path, push_to_hub, check_logits=True): # ------------------------------------------------------------ # Vision model params and config # ------------------------------------------------------------ # params from config config = get_config(model_name) # ------------------------------------------------------------ # Convert weights # ------------------------------------------------------------ # load original state_dict filename = MODEL_TO_FILE_NAME_MAPPING[model_name] print(f"Fetching all parameters from the checkpoint at {filename}...") checkpoint_path = hf_hub_download( repo_id="nielsr/vitpose-original-checkpoints", filename=filename, repo_type="model" ) print("Converting model...") original_state_dict = torch.load(checkpoint_path, map_location="cpu", weights_only=True)["state_dict"] all_keys = list(original_state_dict.keys()) new_keys = convert_old_keys_to_new_keys(all_keys) dim = config.backbone_config.hidden_size state_dict = {} for key in all_keys: new_key = new_keys[key] value = original_state_dict[key] if re.search("associate_heads", new_key) or re.search("backbone.cls_token", new_key): # This associated_heads is concept of auxiliary head so does not require in inference stage. 
# backbone.cls_token is optional forward function for dynamically change of size, see detail in https://github.com/ViTAE-Transformer/ViTPose/issues/34 pass elif re.search("qkv", new_key): state_dict[new_key.replace("self.qkv", "attention.query")] = value[:dim] state_dict[new_key.replace("self.qkv", "attention.key")] = value[dim : dim * 2] state_dict[new_key.replace("self.qkv", "attention.value")] = value[-dim:] elif re.search("head", new_key) and not config.use_simple_decoder: # Pattern for deconvolution layers deconv_pattern = r"deconv_layers\.(0|3)\.weight" new_key = re.sub(deconv_pattern, lambda m: f"deconv{int(m.group(1)) // 3 + 1}.weight", new_key) # Pattern for batch normalization layers bn_patterns = [ (r"deconv_layers\.(\d+)\.weight", r"batchnorm\1.weight"), (r"deconv_layers\.(\d+)\.bias", r"batchnorm\1.bias"), (r"deconv_layers\.(\d+)\.running_mean", r"batchnorm\1.running_mean"), (r"deconv_layers\.(\d+)\.running_var", r"batchnorm\1.running_var"), (r"deconv_layers\.(\d+)\.num_batches_tracked", r"batchnorm\1.num_batches_tracked"), ] for pattern, replacement in bn_patterns: if re.search(pattern, new_key): # Convert the layer number to the correct batch norm index layer_num = int(re.search(pattern, key).group(1)) bn_num = layer_num // 3 + 1 new_key = re.sub(pattern, replacement.replace(r"\1", str(bn_num)), new_key) state_dict[new_key] = value else: state_dict[new_key] = value print("Loading the checkpoint in a Vitpose model.") model = VitPoseForPoseEstimation(config) model.eval() model.load_state_dict(state_dict) print("Checkpoint loaded successfully.") # create image processor image_processor = VitPoseImageProcessor() # verify image processor image = prepare_img() boxes = [[[412.8, 157.61, 53.05, 138.01], [384.43, 172.21, 15.12, 35.74]]] pixel_values = image_processor(images=image, boxes=boxes, return_tensors="pt").pixel_values filepath = hf_hub_download(repo_id="nielsr/test-image", filename="vitpose_batch_data.pt", repo_type="dataset") original_pixel_values = torch.load(filepath, map_location="cpu", weights_only=True)["img"] # we allow for a small difference in the pixel values due to the original repository using cv2 assert torch.allclose(pixel_values, original_pixel_values, atol=1e-1) dataset_index = torch.tensor([0]) with torch.no_grad(): print("Shape of original_pixel_values: ", original_pixel_values.shape) print("First values of original_pixel_values: ", original_pixel_values[0, 0, :3, :3]) # first forward pass outputs = model(original_pixel_values, dataset_index=dataset_index) output_heatmap = outputs.heatmaps print("Shape of output_heatmap: ", output_heatmap.shape) print("First values: ", output_heatmap[0, 0, :3, :3]) # second forward pass (flipped) # this is done since the model uses `flip_test=True` in its test config original_pixel_values_flipped = torch.flip(original_pixel_values, [3]) outputs_flipped = model( original_pixel_values_flipped, dataset_index=dataset_index, flip_pairs=torch.tensor([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]]), ) output_flipped_heatmap = outputs_flipped.heatmaps outputs.heatmaps = (output_heatmap + output_flipped_heatmap) * 0.5 # Verify pose_results pose_results = image_processor.post_process_pose_estimation(outputs, boxes=boxes)[0] if check_logits: # Simple decoder checkpoints if model_name == "vitpose-base-simple": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([3.98180511e02, 1.81808380e02]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor([8.66642594e-01]), 
atol=5e-2, ) # Classic decoder checkpoints elif model_name == "vitpose-base": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([3.9807913e02, 1.8182812e02]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor([8.8235235e-01]), atol=5e-2, ) # COCO-AIC-MPII checkpoints elif model_name == "vitpose-base-coco-aic-mpii": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([3.98305542e02, 1.81741592e02]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor([8.69966745e-01]), atol=5e-2, ) # VitPose+ models elif model_name == "vitpose-plus-small": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([398.1597, 181.6902]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor(0.9051), atol=5e-2, ) elif model_name == "vitpose-plus-base": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([3.98201294e02, 1.81728302e02]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor([8.75046968e-01]), atol=5e-2, ) elif model_name == "vitpose-plus-large": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([398.1409, 181.7412]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor(0.8746), atol=5e-2, ) elif model_name == "vitpose-plus-huge": assert torch.allclose( pose_results[1]["keypoints"][0], torch.tensor([398.2079, 181.8026]), atol=5e-2, ) assert torch.allclose( pose_results[1]["scores"][0], torch.tensor(0.8693), atol=5e-2, ) else: raise ValueError("Model not supported") print("Conversion successfully done.") if model_path is not None: os.makedirs(model_path, exist_ok=True) model.save_pretrained(model_path) image_processor.save_pretrained(model_path) if push_to_hub: print(f"Pushing model and image processor for {model_name} to hub") # we created a community organization on the hub for this model # maintained by the Transformers team model.push_to_hub(f"usyd-community/{model_name}") image_processor.push_to_hub(f"usyd-community/{model_name}") def main(): parser = argparse.ArgumentParser() # Required parameters parser.add_argument( "--model_name", default="vitpose-base-simple", choices=MODEL_TO_FILE_NAME_MAPPING.keys(), type=str, help="Name of the VitPose model you'd like to convert.", ) parser.add_argument( "--pytorch_dump_folder_path", default=None, type=str, help="Path to store the converted model." ) parser.add_argument( "--push_to_hub", action="store_true", help="Whether or not to push the converted model to the 🤗 hub." ) parser.add_argument( "--check_logits", action="store_false", help="Whether or not to verify the logits of the converted model." ) args = parser.parse_args() write_model( model_path=args.pytorch_dump_folder_path, model_name=args.model_name, push_to_hub=args.push_to_hub, check_logits=args.check_logits, ) if __name__ == "__main__": main()
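For context, the script above is typically driven from the command line, e.g. `python convert_vitpose_to_hf.py --model_name vitpose-base-simple --push_to_hub`, and the converted checkpoints are what end users load afterwards. The following is a hedged sketch of that downstream usage, mirroring the boxes, `dataset_index`, and post-processing already used in `write_model`; the `usyd-community/vitpose-base-simple` repo id follows the `push_to_hub` target above and should be treated as an assumption if that checkpoint has not been pushed.

```python
import requests
import torch
from PIL import Image
from transformers import VitPoseForPoseEstimation, VitPoseImageProcessor

repo_id = "usyd-community/vitpose-base-simple"
model = VitPoseForPoseEstimation.from_pretrained(repo_id)
image_processor = VitPoseImageProcessor.from_pretrained(repo_id)

url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# person boxes in (x, y, w, h) format, one list of boxes per image
boxes = [[[412.8, 157.61, 53.05, 138.01], [384.43, 172.21, 15.12, 35.74]]]

inputs = image_processor(images=image, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    # dataset_index selects the expert for the ViTPose+ (MoE) variants; the
    # conversion script above passes 0 (COCO) for every checkpoint, so the same is done here
    outputs = model(pixel_values=inputs.pixel_values, dataset_index=torch.tensor([0]))

# one entry per detected person, each with per-keypoint coordinates and scores
pose_results = image_processor.post_process_pose_estimation(outputs, boxes=boxes)[0]
for person in pose_results:
    print(person["keypoints"].shape, person["scores"].shape)
```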
transformers/src/transformers/models/vitpose/convert_vitpose_to_hf.py/0
{ "file_path": "transformers/src/transformers/models/vitpose/convert_vitpose_to_hf.py", "repo_id": "transformers", "token_count": 7160 }
555
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Feature extractor class for Wav2Vec2 """ from typing import Optional, Union import numpy as np from ...feature_extraction_sequence_utils import SequenceFeatureExtractor from ...feature_extraction_utils import BatchFeature from ...utils import PaddingStrategy, TensorType, logging logger = logging.get_logger(__name__) class Wav2Vec2FeatureExtractor(SequenceFeatureExtractor): r""" Constructs a Wav2Vec2 feature extractor. This feature extractor inherits from [`~feature_extraction_sequence_utils.SequenceFeatureExtractor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: feature_size (`int`, *optional*, defaults to 1): The feature dimension of the extracted features. sampling_rate (`int`, *optional*, defaults to 16000): The sampling rate at which the audio files should be digitalized expressed in hertz (Hz). padding_value (`float`, *optional*, defaults to 0.0): The value that is used to fill the padding values. do_normalize (`bool`, *optional*, defaults to `True`): Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, *e.g.*, [wav2vec2-lv60](https://huggingface.co/models?search=lv60). return_attention_mask (`bool`, *optional*, defaults to `False`): Whether or not [`~Wav2Vec2FeatureExtractor.__call__`] should return `attention_mask`. <Tip> Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), have **not** been trained using `attention_mask`. For such models, `input_values` should simply be padded with 0 and no `attention_mask` should be passed. For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as [wav2vec2-lv60](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be passed for batched inference. 
</Tip>""" model_input_names = ["input_values", "attention_mask"] def __init__( self, feature_size=1, sampling_rate=16000, padding_value=0.0, return_attention_mask=False, do_normalize=True, **kwargs, ): super().__init__(feature_size=feature_size, sampling_rate=sampling_rate, padding_value=padding_value, **kwargs) self.return_attention_mask = return_attention_mask self.do_normalize = do_normalize @staticmethod def zero_mean_unit_var_norm( input_values: list[np.ndarray], attention_mask: list[np.ndarray], padding_value: float = 0.0 ) -> list[np.ndarray]: """ Every array in the list is normalized to have zero mean and unit variance """ if attention_mask is not None: attention_mask = np.array(attention_mask, np.int32) normed_input_values = [] for vector, length in zip(input_values, attention_mask.sum(-1)): normed_slice = (vector - vector[:length].mean()) / np.sqrt(vector[:length].var() + 1e-7) if length < normed_slice.shape[0]: normed_slice[length:] = padding_value normed_input_values.append(normed_slice) else: normed_input_values = [(x - x.mean()) / np.sqrt(x.var() + 1e-7) for x in input_values] return normed_input_values def __call__( self, raw_speech: Union[np.ndarray, list[float], list[np.ndarray], list[list[float]]], padding: Union[bool, str, PaddingStrategy] = False, max_length: Optional[int] = None, truncation: bool = False, pad_to_multiple_of: Optional[int] = None, return_attention_mask: Optional[bool] = None, return_tensors: Optional[Union[str, TensorType]] = None, sampling_rate: Optional[int] = None, **kwargs, ) -> BatchFeature: """ Main method to featurize and prepare for the model one or several sequence(s). Args: raw_speech (`np.ndarray`, `list[float]`, `list[np.ndarray]`, `list[list[float]]`): The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep. padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum acceptable input length for the model if that argument is not provided. - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). max_length (`int`, *optional*): Maximum length of the returned list and optionally padding length (see above). truncation (`bool`): Activates truncation to cut input sequences longer than *max_length* to *max_length*. pad_to_multiple_of (`int`, *optional*): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. return_attention_mask (`bool`, *optional*): Whether to return the attention mask. If left to the default, will return the attention mask according to the specific feature_extractor's default. [What are attention masks?](../glossary#attention-mask) <Tip> Wav2Vec2 models that have set `config.feat_extract_norm == "group"`, such as [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base-960h), have **not** been trained using `attention_mask`. 
For such models, `input_values` should simply be padded with 0 and no `attention_mask` should be passed. For Wav2Vec2 models that have set `config.feat_extract_norm == "layer"`, such as [wav2vec2-lv60](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), `attention_mask` should be passed for batched inference. </Tip> return_tensors (`str` or [`~utils.TensorType`], *optional*): If set, will return tensors instead of list of python integers. Acceptable values are: - `'tf'`: Return TensorFlow `tf.constant` objects. - `'pt'`: Return PyTorch `torch.Tensor` objects. - `'np'`: Return Numpy `np.ndarray` objects. sampling_rate (`int`, *optional*): The sampling rate at which the `raw_speech` input was sampled. It is strongly recommended to pass `sampling_rate` at the forward call to prevent silent errors. padding_value (`float`, *optional*, defaults to 0.0): """ if sampling_rate is not None: if sampling_rate != self.sampling_rate: raise ValueError( f"The model corresponding to this feature extractor: {self} was trained using a sampling rate of" f" {self.sampling_rate}. Please make sure that the provided `raw_speech` input was sampled with" f" {self.sampling_rate} and not {sampling_rate}." ) else: logger.warning( f"It is strongly recommended to pass the `sampling_rate` argument to `{self.__class__.__name__}()`. " "Failing to do so can result in silent errors that might be hard to debug." ) is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1 if is_batched_numpy and len(raw_speech.shape) > 2: raise ValueError(f"Only mono-channel audio is supported for input to {self}") is_batched = is_batched_numpy or ( isinstance(raw_speech, (list, tuple)) and (isinstance(raw_speech[0], (np.ndarray, tuple, list))) ) # always return batch if not is_batched: raw_speech = [raw_speech] # convert into correct format for padding encoded_inputs = BatchFeature({"input_values": raw_speech}) padded_inputs = self.pad( encoded_inputs, padding=padding, max_length=max_length, truncation=truncation, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, ) # convert input values to correct format input_values = padded_inputs["input_values"] if not isinstance(input_values[0], np.ndarray): padded_inputs["input_values"] = [np.asarray(array, dtype=np.float32) for array in input_values] elif ( not isinstance(input_values, np.ndarray) and isinstance(input_values[0], np.ndarray) and input_values[0].dtype is np.dtype(np.float64) ): padded_inputs["input_values"] = [array.astype(np.float32) for array in input_values] elif isinstance(input_values, np.ndarray) and input_values.dtype is np.dtype(np.float64): padded_inputs["input_values"] = input_values.astype(np.float32) # convert attention_mask to correct format attention_mask = padded_inputs.get("attention_mask") if attention_mask is not None: padded_inputs["attention_mask"] = [np.asarray(array, dtype=np.int32) for array in attention_mask] # zero-mean and unit-variance normalization if self.do_normalize: attention_mask = ( attention_mask if self._get_padding_strategies(padding, max_length=max_length) is not PaddingStrategy.DO_NOT_PAD else None ) padded_inputs["input_values"] = self.zero_mean_unit_var_norm( padded_inputs["input_values"], attention_mask=attention_mask, padding_value=self.padding_value ) if return_tensors is not None: padded_inputs = padded_inputs.convert_to_tensors(return_tensors) return padded_inputs __all__ = ["Wav2Vec2FeatureExtractor"]
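A short, hedged sketch of how this feature extractor is typically called on raw mono waveforms follows; the random arrays merely stand in for real 16 kHz audio, and `return_attention_mask=True` is only appropriate for checkpoints with `config.feat_extract_norm == "layer"`, as the docstring above notes.

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16000,
    padding_value=0.0,
    do_normalize=True,
    return_attention_mask=True,
)

# two mono waveforms of different lengths, already sampled at 16 kHz
raw_speech = [
    np.random.randn(16000).astype(np.float32),
    np.random.randn(8000).astype(np.float32),
]

inputs = feature_extractor(
    raw_speech,
    sampling_rate=16000,   # passed explicitly to avoid the silent-error warning above
    padding=True,          # pad to the longest sequence in the batch
    return_tensors="pt",
)
print(inputs["input_values"].shape, inputs["attention_mask"].shape)
```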
transformers/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py/0
{ "file_path": "transformers/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py", "repo_id": "transformers", "token_count": 4834 }
556
import math from dataclasses import dataclass from typing import Optional, Union import torch from torch import nn from ...activations import ACT2FN from ...integrations.deepspeed import is_deepspeed_zero3_enabled from ...integrations.fsdp import is_fsdp_managed_module from ...modeling_layers import GradientCheckpointingLayer from ...modeling_outputs import BaseModelOutput, Wav2Vec2BaseModelOutput from ...modeling_utils import PreTrainedModel from ...utils import ModelOutput, auto_docstring, logging from ..wav2vec2.modeling_wav2vec2 import ( Wav2Vec2Adapter, Wav2Vec2AdapterLayer, Wav2Vec2FeatureEncoder, Wav2Vec2FeatureProjection, Wav2Vec2FeedForward, Wav2Vec2ForAudioFrameClassification, Wav2Vec2ForCTC, Wav2Vec2ForPreTraining, Wav2Vec2ForSequenceClassification, Wav2Vec2ForXVector, Wav2Vec2GumbelVectorQuantizer, Wav2Vec2Model, Wav2Vec2PositionalConvEmbedding, ) from .configuration_wav2vec2_conformer import Wav2Vec2ConformerConfig logger = logging.get_logger(__name__) _HIDDEN_STATES_START_POSITION = 2 @dataclass @auto_docstring( custom_intro=""" Output type of [`Wav2Vec2ConformerForPreTraining`], with potential hidden states and attentions. """ ) class Wav2Vec2ConformerForPreTrainingOutput(ModelOutput): r""" loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`): Total loss as the sum of the contrastive loss (L_m) and the diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf) . (classification) loss. projected_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`): Hidden-states of the model projected to *config.proj_codevector_dim* that can be used to predict the masked projected quantized states. projected_quantized_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.proj_codevector_dim)`): Quantized extracted feature vectors projected to *config.proj_codevector_dim* representing the positive target vectors for contrastive loss. codevector_perplexity (`torch.FloatTensor` of shape `(1,)`): The perplexity of the codevector distribution, used to measure the diversity of the codebook. contrastive_loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`): The contrastive loss (L_m) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf) . diversity_loss (*optional*, returned when `sample_negative_indices` are passed, `torch.FloatTensor` of shape `(1,)`): The diversity loss (L_d) as stated in the [official paper](https://arxiv.org/pdf/2006.11477.pdf) . 
""" loss: Optional[torch.FloatTensor] = None projected_states: Optional[torch.FloatTensor] = None projected_quantized_states: Optional[torch.FloatTensor] = None codevector_perplexity: Optional[torch.FloatTensor] = None hidden_states: Optional[tuple[torch.FloatTensor]] = None attentions: Optional[tuple[torch.FloatTensor]] = None contrastive_loss: Optional[torch.FloatTensor] = None diversity_loss: Optional[torch.FloatTensor] = None class Wav2Vec2ConformerPositionalConvEmbedding(Wav2Vec2PositionalConvEmbedding): pass class Wav2Vec2ConformerRotaryPositionalEmbedding(nn.Module): """Rotary positional embedding Reference : https://blog.eleuther.ai/rotary-embeddings/ Paper: https://huggingface.co/papers/2104.09864 """ def __init__(self, config): super().__init__() dim = config.hidden_size // config.num_attention_heads base = config.rotary_embedding_base inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.int64).float() / dim)) self.register_buffer("inv_freq", inv_freq) self.cached_sequence_length = None self.cached_rotary_positional_embedding = None def forward(self, hidden_states): sequence_length = hidden_states.shape[1] if sequence_length == self.cached_sequence_length and self.cached_rotary_positional_embedding is not None: return self.cached_rotary_positional_embedding self.cached_sequence_length = sequence_length # Embeddings are computed in the dtype of the inv_freq constant time_stamps = torch.arange(sequence_length).type_as(self.inv_freq) freqs = torch.einsum("i,j->ij", time_stamps, self.inv_freq) embeddings = torch.cat((freqs, freqs), dim=-1) cos_embeddings = embeddings.cos()[:, None, None, :] sin_embeddings = embeddings.sin()[:, None, None, :] # Computed embeddings are cast to the dtype of the hidden state inputs self.cached_rotary_positional_embedding = torch.stack([cos_embeddings, sin_embeddings]).type_as(hidden_states) return self.cached_rotary_positional_embedding class Wav2Vec2ConformerRelPositionalEmbedding(nn.Module): """Relative positional encoding module.""" def __init__(self, config): super().__init__() self.max_len = config.max_source_positions self.d_model = config.hidden_size self.pe = None self.extend_pe(torch.tensor(0.0).expand(1, self.max_len)) def extend_pe(self, x): # Reset the positional encodings if self.pe is not None: # self.pe contains both positive and negative parts # the length of self.pe is 2 * input_len - 1 if self.pe.size(1) >= x.size(1) * 2 - 1: if self.pe.dtype != x.dtype or self.pe.device != x.device: self.pe = self.pe.to(dtype=x.dtype, device=x.device) return # Suppose `i` is the position of query vector and `j` is the # position of key vector. We use positive relative positions when keys # are to the left (i>j) and negative relative positions otherwise (i<j). pe_positive = torch.zeros(x.size(1), self.d_model) pe_negative = torch.zeros(x.size(1), self.d_model) position = torch.arange(0, x.size(1), dtype=torch.int64).float().unsqueeze(1) div_term = torch.exp( torch.arange(0, self.d_model, 2, dtype=torch.int64).float() * -(math.log(10000.0) / self.d_model) ) pe_positive[:, 0::2] = torch.sin(position * div_term) pe_positive[:, 1::2] = torch.cos(position * div_term) pe_negative[:, 0::2] = torch.sin(-1 * position * div_term) pe_negative[:, 1::2] = torch.cos(-1 * position * div_term) # Reverse the order of positive indices and concat both positive and # negative indices. 
This is used to support the shifting trick # as in https://huggingface.co/papers/1901.02860 pe_positive = torch.flip(pe_positive, [0]).unsqueeze(0) pe_negative = pe_negative[1:].unsqueeze(0) pe = torch.cat([pe_positive, pe_negative], dim=1) self.pe = pe.to(device=x.device, dtype=x.dtype) def forward(self, hidden_states: torch.Tensor): self.extend_pe(hidden_states) start_idx = self.pe.size(1) // 2 - hidden_states.size(1) + 1 end_idx = self.pe.size(1) // 2 + hidden_states.size(1) relative_position_embeddings = self.pe[:, start_idx:end_idx] return relative_position_embeddings class Wav2Vec2ConformerFeatureEncoder(Wav2Vec2FeatureEncoder): pass class Wav2Vec2ConformerFeatureProjection(Wav2Vec2FeatureProjection): pass class Wav2Vec2ConformerFeedForward(Wav2Vec2FeedForward): pass class Wav2Vec2ConformerConvolutionModule(nn.Module): """Convolution block used in the conformer block""" def __init__(self, config): super().__init__() if (config.conv_depthwise_kernel_size - 1) % 2 == 1: raise ValueError("`config.conv_depthwise_kernel_size` should be a odd number for 'SAME' padding") self.layer_norm = nn.LayerNorm(config.hidden_size) self.pointwise_conv1 = nn.Conv1d( config.hidden_size, 2 * config.hidden_size, kernel_size=1, stride=1, padding=0, bias=False, ) self.glu = nn.GLU(dim=1) self.depthwise_conv = nn.Conv1d( config.hidden_size, config.hidden_size, config.conv_depthwise_kernel_size, stride=1, padding=(config.conv_depthwise_kernel_size - 1) // 2, groups=config.hidden_size, bias=False, ) self.batch_norm = nn.BatchNorm1d(config.hidden_size) self.activation = ACT2FN[config.hidden_act] self.pointwise_conv2 = nn.Conv1d( config.hidden_size, config.hidden_size, kernel_size=1, stride=1, padding=0, bias=False, ) self.dropout = nn.Dropout(config.conformer_conv_dropout) def forward(self, hidden_states): hidden_states = self.layer_norm(hidden_states) # exchange the temporal dimension and the feature dimension hidden_states = hidden_states.transpose(1, 2) # GLU mechanism # => (batch, 2*channel, dim) hidden_states = self.pointwise_conv1(hidden_states) # => (batch, channel, dim) hidden_states = self.glu(hidden_states) # 1D Depthwise Conv hidden_states = self.depthwise_conv(hidden_states) hidden_states = self.batch_norm(hidden_states) hidden_states = self.activation(hidden_states) hidden_states = self.pointwise_conv2(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = hidden_states.transpose(1, 2) return hidden_states class Wav2Vec2ConformerSelfAttention(nn.Module): """Construct an Wav2Vec2ConformerSelfAttention object. Can be enhanced with rotary or relative position embeddings. 
""" def __init__(self, config): super().__init__() self.head_size = config.hidden_size // config.num_attention_heads self.num_heads = config.num_attention_heads self.position_embeddings_type = config.position_embeddings_type self.linear_q = nn.Linear(config.hidden_size, config.hidden_size) self.linear_k = nn.Linear(config.hidden_size, config.hidden_size) self.linear_v = nn.Linear(config.hidden_size, config.hidden_size) self.linear_out = nn.Linear(config.hidden_size, config.hidden_size) self.dropout = nn.Dropout(p=config.attention_dropout) if self.position_embeddings_type == "relative": # linear transformation for positional encoding self.linear_pos = nn.Linear(config.hidden_size, config.hidden_size, bias=False) # these two learnable bias are used in matrix c and matrix d # as described in https://huggingface.co/papers/1901.02860 Section 3.3 self.pos_bias_u = nn.Parameter(torch.zeros(self.num_heads, self.head_size)) self.pos_bias_v = nn.Parameter(torch.zeros(self.num_heads, self.head_size)) def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None, relative_position_embeddings: Optional[torch.Tensor] = None, output_attentions: bool = False, ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]: # self-attention mechanism batch_size, sequence_length, hidden_size = hidden_states.size() # make sure query/key states can be != value states query_key_states = hidden_states value_states = hidden_states if self.position_embeddings_type == "rotary": if relative_position_embeddings is None: raise ValueError( "`relative_position_embeddings` has to be defined when `self.position_embeddings_type == 'rotary'" ) query_key_states = self._apply_rotary_embedding(query_key_states, relative_position_embeddings) # project query_key_states and value_states query = self.linear_q(query_key_states).view(batch_size, -1, self.num_heads, self.head_size) key = self.linear_k(query_key_states).view(batch_size, -1, self.num_heads, self.head_size) value = self.linear_v(value_states).view(batch_size, -1, self.num_heads, self.head_size) # => (batch, head, time1, d_k) query = query.transpose(1, 2) key = key.transpose(1, 2) value = value.transpose(1, 2) if self.position_embeddings_type == "relative": if relative_position_embeddings is None: raise ValueError( "`relative_position_embeddings` has to be defined when `self.position_embeddings_type ==" " 'relative'" ) # apply relative_position_embeddings to qk scores # as proposed in Transformer_XL: https://huggingface.co/papers/1901.02860 scores = self._apply_relative_embeddings( query=query, key=key, relative_position_embeddings=relative_position_embeddings ) else: scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.head_size) # apply attention_mask if necessary if attention_mask is not None: scores = scores + attention_mask # => (batch, head, time1, time2) probs = torch.softmax(scores, dim=-1) probs = self.dropout(probs) # => (batch, head, time1, d_k) hidden_states = torch.matmul(probs, value) # => (batch, time1, hidden_size) hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, self.num_heads * self.head_size) hidden_states = self.linear_out(hidden_states) return hidden_states, probs def _apply_rotary_embedding(self, hidden_states, relative_position_embeddings): batch_size, sequence_length, hidden_size = hidden_states.size() hidden_states = hidden_states.view(batch_size, sequence_length, self.num_heads, self.head_size) cos = relative_position_embeddings[0, :sequence_length, ...] 
sin = relative_position_embeddings[1, :sequence_length, ...] # rotate hidden_states with rotary embeddings hidden_states = hidden_states.transpose(0, 1) rotated_states_begin = hidden_states[..., : self.head_size // 2] rotated_states_end = hidden_states[..., self.head_size // 2 :] rotated_states = torch.cat((-rotated_states_end, rotated_states_begin), dim=rotated_states_begin.ndim - 1) hidden_states = (hidden_states * cos) + (rotated_states * sin) hidden_states = hidden_states.transpose(0, 1) hidden_states = hidden_states.view(batch_size, sequence_length, self.num_heads * self.head_size) return hidden_states def _apply_relative_embeddings(self, query, key, relative_position_embeddings): # 1. project positional embeddings # => (batch, head, 2*time1-1, d_k) proj_relative_position_embeddings = self.linear_pos(relative_position_embeddings) proj_relative_position_embeddings = proj_relative_position_embeddings.view( relative_position_embeddings.size(0), -1, self.num_heads, self.head_size ) proj_relative_position_embeddings = proj_relative_position_embeddings.transpose(1, 2) proj_relative_position_embeddings = proj_relative_position_embeddings.transpose(2, 3) # 2. Add bias to query # => (batch, head, time1, d_k) query = query.transpose(1, 2) q_with_bias_u = (query + self.pos_bias_u).transpose(1, 2) q_with_bias_v = (query + self.pos_bias_v).transpose(1, 2) # 3. attention score: first compute matrix a and matrix c # as described in https://huggingface.co/papers/1901.02860 Section 3.3 # => (batch, head, time1, time2) scores_ac = torch.matmul(q_with_bias_u, key.transpose(-2, -1)) # 4. then compute matrix b and matrix d # => (batch, head, time1, 2*time1-1) scores_bd = torch.matmul(q_with_bias_v, proj_relative_position_embeddings) # 5. shift matrix b and matrix d zero_pad = torch.zeros((*scores_bd.size()[:3], 1), device=scores_bd.device, dtype=scores_bd.dtype) scores_bd_padded = torch.cat([zero_pad, scores_bd], dim=-1) scores_bd_padded_shape = scores_bd.size()[:2] + (scores_bd.shape[3] + 1, scores_bd.shape[2]) scores_bd_padded = scores_bd_padded.view(*scores_bd_padded_shape) scores_bd = scores_bd_padded[:, :, 1:].view_as(scores_bd) scores_bd = scores_bd[:, :, :, : scores_bd.size(-1) // 2 + 1] # 6. sum matrices # => (batch, head, time1, time2) scores = (scores_ac + scores_bd) / math.sqrt(self.head_size) return scores class Wav2Vec2ConformerEncoderLayer(GradientCheckpointingLayer): """Conformer block based on https://huggingface.co/papers/2005.08100.""" def __init__(self, config): super().__init__() embed_dim = config.hidden_size dropout = config.attention_dropout # Feed-forward 1 self.ffn1_layer_norm = nn.LayerNorm(embed_dim) self.ffn1 = Wav2Vec2ConformerFeedForward(config) # Self-Attention self.self_attn_layer_norm = nn.LayerNorm(embed_dim) self.self_attn_dropout = nn.Dropout(dropout) self.self_attn = Wav2Vec2ConformerSelfAttention(config) # Conformer Convolution self.conv_module = Wav2Vec2ConformerConvolutionModule(config) # Feed-forward 2 self.ffn2_layer_norm = nn.LayerNorm(embed_dim) self.ffn2 = Wav2Vec2ConformerFeedForward(config) self.final_layer_norm = nn.LayerNorm(embed_dim) def forward( self, hidden_states, attention_mask: Optional[torch.Tensor] = None, relative_position_embeddings: Optional[torch.Tensor] = None, output_attentions: bool = False, ): hidden_states = hidden_states # 1. Feed-Forward 1 layer residual = hidden_states hidden_states = self.ffn1_layer_norm(hidden_states) hidden_states = self.ffn1(hidden_states) hidden_states = hidden_states * 0.5 + residual residual = hidden_states # 2. 
Self-Attention layer hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states, attn_weigts = self.self_attn( hidden_states=hidden_states, attention_mask=attention_mask, relative_position_embeddings=relative_position_embeddings, output_attentions=output_attentions, ) hidden_states = self.self_attn_dropout(hidden_states) hidden_states = hidden_states + residual # 3. Convolutional Layer residual = hidden_states hidden_states = self.conv_module(hidden_states) hidden_states = residual + hidden_states # 4. Feed-Forward 2 Layer residual = hidden_states hidden_states = self.ffn2_layer_norm(hidden_states) hidden_states = self.ffn2(hidden_states) hidden_states = hidden_states * 0.5 + residual hidden_states = self.final_layer_norm(hidden_states) return hidden_states, attn_weigts class Wav2Vec2ConformerEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config if config.position_embeddings_type == "relative": self.embed_positions = Wav2Vec2ConformerRelPositionalEmbedding(config) elif config.position_embeddings_type == "rotary": self.embed_positions = Wav2Vec2ConformerRotaryPositionalEmbedding(config) else: self.embed_positions = None self.pos_conv_embed = Wav2Vec2ConformerPositionalConvEmbedding(config) self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout) self.layers = nn.ModuleList([Wav2Vec2ConformerEncoderLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False def forward( self, hidden_states, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None if attention_mask is not None: # make sure padded tokens output 0 expand_attention_mask = attention_mask.unsqueeze(-1).repeat(1, 1, hidden_states.shape[2]) hidden_states[~expand_attention_mask] = 0.0 # extend attention_mask attention_mask = 1.0 - attention_mask[:, None, None, :].to(dtype=hidden_states.dtype) attention_mask = attention_mask * torch.finfo(hidden_states.dtype).min attention_mask = attention_mask.expand( attention_mask.shape[0], 1, attention_mask.shape[-1], attention_mask.shape[-1] ) hidden_states = self.dropout(hidden_states) if self.embed_positions is not None: relative_position_embeddings = self.embed_positions(hidden_states) else: relative_position_embeddings = None synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self) for i, layer in enumerate(self.layers): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) # add LayerDrop (see https://huggingface.co/papers/1909.11556 for description) dropout_probability = torch.rand([]) skip_the_layer = self.training and dropout_probability < self.config.layerdrop if not skip_the_layer or synced_gpus: # under fsdp or deepspeed zero3 all gpus must run in sync layer_outputs = layer( hidden_states, attention_mask=attention_mask, relative_position_embeddings=relative_position_embeddings, output_attentions=output_attentions, ) hidden_states = layer_outputs[0] if skip_the_layer: layer_outputs = (None, None) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) hidden_states = self.layer_norm(hidden_states) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) return BaseModelOutput( 
last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_self_attentions, ) class Wav2Vec2ConformerGumbelVectorQuantizer(Wav2Vec2GumbelVectorQuantizer): pass class Wav2Vec2ConformerAdapter(Wav2Vec2Adapter): pass class Wav2Vec2ConformerAdapterLayer(Wav2Vec2AdapterLayer): pass @auto_docstring class Wav2Vec2ConformerPreTrainedModel(PreTrainedModel): config: Wav2Vec2ConformerConfig base_model_prefix = "wav2vec2_conformer" main_input_name = "input_values" supports_gradient_checkpointing = True def _init_weights(self, module): """Initialize the weights""" # Wav2Vec2ForPreTraining last 2 linear layers need standard Linear init. if isinstance(module, Wav2Vec2ConformerForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True # gumbel softmax requires special init elif isinstance(module, Wav2Vec2ConformerGumbelVectorQuantizer): module.weight_proj.weight.data.normal_(mean=0.0, std=1) module.weight_proj.bias.data.zero_() nn.init.uniform_(module.codevectors) elif isinstance(module, Wav2Vec2ConformerSelfAttention): if hasattr(module, "pos_bias_u"): nn.init.xavier_uniform_(module.pos_bias_u) if hasattr(module, "pos_bias_v"): nn.init.xavier_uniform_(module.pos_bias_v) elif isinstance(module, Wav2Vec2ConformerPositionalConvEmbedding): nn.init.normal_( module.conv.weight, mean=0, std=2 * math.sqrt(1 / (module.conv.kernel_size[0] * module.conv.in_channels)), ) nn.init.constant_(module.conv.bias, 0) elif isinstance(module, Wav2Vec2ConformerFeatureProjection): k = math.sqrt(1 / module.projection.in_features) nn.init.uniform_(module.projection.weight, a=-k, b=k) nn.init.uniform_(module.projection.bias, a=-k, b=k) elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, (nn.LayerNorm, nn.GroupNorm)): module.bias.data.zero_() module.weight.data.fill_(1.0) elif isinstance(module, nn.Conv1d): nn.init.kaiming_normal_(module.weight) if module.bias is not None: k = math.sqrt(module.groups / (module.in_channels * module.kernel_size[0])) nn.init.uniform_(module.bias, a=-k, b=k) def _get_feat_extract_output_lengths( self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None ): """ Computes the output length of the convolutional layers """ add_adapter = self.config.add_adapter if add_adapter is None else add_adapter def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken # from https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html return torch.div(input_length - kernel_size, stride, rounding_mode="floor") + 1 for kernel_size, stride in zip(self.config.conv_kernel, self.config.conv_stride): input_lengths = _conv_out_length(input_lengths, kernel_size, stride) if add_adapter: for _ in range(self.config.num_adapter_layers): input_lengths = _conv_out_length(input_lengths, 1, self.config.adapter_stride) return input_lengths def _get_feature_vector_attention_mask( self, feature_vector_length: int, attention_mask: torch.LongTensor, add_adapter=None ): # Effectively attention_mask.sum(-1), but not inplace to be able to run # on inference mode. 
non_padded_lengths = attention_mask.cumsum(dim=-1)[:, -1] output_lengths = self._get_feat_extract_output_lengths(non_padded_lengths, add_adapter=add_adapter) output_lengths = output_lengths.to(torch.long) batch_size = attention_mask.shape[0] attention_mask = torch.zeros( (batch_size, feature_vector_length), dtype=attention_mask.dtype, device=attention_mask.device ) # these two operations makes sure that all values before the output lengths idxs are attended to attention_mask[(torch.arange(attention_mask.shape[0], device=attention_mask.device), output_lengths - 1)] = 1 attention_mask = attention_mask.flip([-1]).cumsum(-1).flip([-1]).bool() return attention_mask WAV2VEC2_CONFORMER_START_DOCSTRING = None # will be automatically redefined Wav2Vec2ConformerBaseModelOutput = Wav2Vec2BaseModelOutput class Wav2Vec2ConformerModel(Wav2Vec2ConformerPreTrainedModel, Wav2Vec2Model): def __init__(self, config: Wav2Vec2ConformerConfig): Wav2Vec2ConformerPreTrainedModel.__init__(config) self.config = config self.feature_extractor = Wav2Vec2ConformerFeatureEncoder(config) self.feature_projection = Wav2Vec2ConformerFeatureProjection(config) # model only needs masking vector if mask prob is > 0.0 if config.mask_time_prob > 0.0 or config.mask_feature_prob > 0.0: self.masked_spec_embed = nn.Parameter(torch.Tensor(config.hidden_size).uniform_()) self.encoder = Wav2Vec2ConformerEncoder(config) self.adapter = Wav2Vec2ConformerAdapter(config) if config.add_adapter else None # Initialize weights and apply final processing self.post_init() def freeze_feature_extractor(self): raise AttributeError("Not needed for Wav2Vec2Conformer") class Wav2Vec2ConformerForPreTraining(Wav2Vec2ForPreTraining): def __init__(self, config: Wav2Vec2ConformerConfig): super().__init__(config) def freeze_feature_extractor(self): raise AttributeError("Not needed for Wav2Vec2Conformer") class Wav2Vec2ConformerForCTC(Wav2Vec2ForCTC): def __init__(self, config, target_lang: Optional[str] = None): r""" target_lang (`str`, *optional*): Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechSatForCTC`] with adapters. Uses 'eng' by default. """ super().__init__(config) def tie_weights(self): raise AttributeError("Not needed for Wav2Vec2Conformer") def freeze_feature_extractor(self): raise AttributeError("Not needed for Wav2Vec2Conformer") def freeze_base_model(self): raise AttributeError("Not needed for Wav2Vec2Conformer") class Wav2Vec2ConformerForSequenceClassification(Wav2Vec2ForSequenceClassification): def __init__(self, config): super().__init__(config) def freeze_feature_extractor(self): raise AttributeError("Not needed for Wav2Vec2Conformer") class Wav2Vec2ConformerForAudioFrameClassification(Wav2Vec2ForAudioFrameClassification): def __init__(self, config): super().__init__(config) def freeze_feature_extractor(self): raise AttributeError("Not needed for Wav2Vec2Conformer") class Wav2Vec2ConformerForXVector(Wav2Vec2ForXVector): def __init__(self, config): super().__init__(config) def freeze_feature_extractor(self): raise AttributeError("Not needed for Wav2Vec2Conformer") __all__ = [ "Wav2Vec2ConformerForAudioFrameClassification", "Wav2Vec2ConformerForCTC", "Wav2Vec2ConformerForPreTraining", "Wav2Vec2ConformerForSequenceClassification", "Wav2Vec2ConformerForXVector", "Wav2Vec2ConformerModel", "Wav2Vec2ConformerPreTrainedModel", ]
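Since this modular file is ultimately consumed through the public `transformers` API, a small, hedged smoke test can make the `position_embeddings_type` switch above concrete. The tiny config values below are arbitrary illustrations rather than recommended hyper-parameters, and swapping `"rotary"` for `"relative"` exercises `Wav2Vec2ConformerRelPositionalEmbedding` instead of the rotary module.

```python
import torch
from transformers import Wav2Vec2ConformerConfig, Wav2Vec2ConformerModel

config = Wav2Vec2ConformerConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=128,
    position_embeddings_type="rotary",  # or "relative"
)
model = Wav2Vec2ConformerModel(config)
model.eval()

# one second of fake 16 kHz mono audio
input_values = torch.randn(1, 16000)
with torch.no_grad():
    outputs = model(input_values)

# the convolutional feature encoder downsamples the waveform (roughly 20 ms per frame
# with the default conv strides), so the sequence length is much shorter than 16000
print(outputs.last_hidden_state.shape)  # (1, num_frames, hidden_size)
```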
transformers/src/transformers/models/wav2vec2_conformer/modular_wav2vec2_conformer.py/0
{ "file_path": "transformers/src/transformers/models/wav2vec2_conformer/modular_wav2vec2_conformer.py", "repo_id": "transformers", "token_count": 13223 }
557
# coding=utf-8 # Copyright 2022 Facebook AI Research and the HuggingFace Inc. team. # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Flax XLM-RoBERTa model.""" from typing import Callable, Optional import flax.linen as nn import jax import jax.numpy as jnp import numpy as np from flax.core.frozen_dict import FrozenDict, freeze, unfreeze from flax.linen import combine_masks, make_causal_mask from flax.linen import partitioning as nn_partitioning from flax.linen.attention import dot_product_attention_weights from flax.traverse_util import flatten_dict, unflatten_dict from jax import lax from ...modeling_flax_outputs import ( FlaxBaseModelOutputWithPastAndCrossAttentions, FlaxBaseModelOutputWithPooling, FlaxBaseModelOutputWithPoolingAndCrossAttentions, FlaxCausalLMOutputWithCrossAttentions, FlaxMaskedLMOutput, FlaxMultipleChoiceModelOutput, FlaxQuestionAnsweringModelOutput, FlaxSequenceClassifierOutput, FlaxTokenClassifierOutput, ) from ...modeling_flax_utils import ACT2FN, FlaxPreTrainedModel, append_call_sample_docstring, overwrite_call_docstring from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging from .configuration_xlm_roberta import XLMRobertaConfig logger = logging.get_logger(__name__) _CHECKPOINT_FOR_DOC = "FacebookAI/xlm-roberta-base" _CONFIG_FOR_DOC = "XLMRobertaConfig" remat = nn_partitioning.remat # Copied from transformers.models.roberta.modeling_flax_roberta.create_position_ids_from_input_ids def create_position_ids_from_input_ids(input_ids, padding_idx): """ Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. This is modified from fairseq's `utils.make_positions`. Args: input_ids: jnp.ndarray padding_idx: int Returns: jnp.ndarray """ # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA. mask = (input_ids != padding_idx).astype("i4") if mask.ndim > 2: mask = mask.reshape((-1, mask.shape[-1])) incremental_indices = jnp.cumsum(mask, axis=1).astype("i4") * mask incremental_indices = incremental_indices.reshape(input_ids.shape) else: incremental_indices = jnp.cumsum(mask, axis=1).astype("i4") * mask return incremental_indices.astype("i4") + padding_idx XLM_ROBERTA_START_DOCSTRING = r""" This model inherits from [`FlaxPreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a [flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html) subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) Parameters: config ([`XLMRobertaConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~FlaxPreTrainedModel.from_pretrained`] method to load the model weights. """ XLM_ROBERTA_INPUTS_DOCSTRING = r""" Args: input_ids (`numpy.ndarray` of shape `({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. [What are input IDs?](../glossary#input-ids) attention_mask (`numpy.ndarray` of shape `({0})`, *optional*): Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. [What are attention masks?](../glossary#attention-mask) token_type_ids (`numpy.ndarray` of shape `({0})`, *optional*): Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: - 0 corresponds to a *sentence A* token, - 1 corresponds to a *sentence B* token. [What are token type IDs?](../glossary#token-type-ids) position_ids (`numpy.ndarray` of shape `({0})`, *optional*): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.max_position_embeddings - 1]`. head_mask (`numpy.ndarray` of shape `({0})`, `optional): Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
""" # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertEmbeddings with Bert->XLMRoberta class FlaxXLMRobertaEmbeddings(nn.Module): """Construct the embeddings from word, position and token_type embeddings.""" config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.word_embeddings = nn.Embed( self.config.vocab_size, self.config.hidden_size, embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), dtype=self.dtype, ) self.position_embeddings = nn.Embed( self.config.max_position_embeddings, self.config.hidden_size, embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), dtype=self.dtype, ) self.token_type_embeddings = nn.Embed( self.config.type_vocab_size, self.config.hidden_size, embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), dtype=self.dtype, ) self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) def __call__(self, input_ids, token_type_ids, position_ids, attention_mask, deterministic: bool = True): # Embed inputs_embeds = self.word_embeddings(input_ids.astype("i4")) position_embeds = self.position_embeddings(position_ids.astype("i4")) token_type_embeddings = self.token_type_embeddings(token_type_ids.astype("i4")) # Sum all embeddings hidden_states = inputs_embeds + token_type_embeddings + position_embeds # Layer Norm hidden_states = self.LayerNorm(hidden_states) hidden_states = self.dropout(hidden_states, deterministic=deterministic) return hidden_states # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertSelfAttention with Bert->XLMRoberta class FlaxXLMRobertaSelfAttention(nn.Module): config: XLMRobertaConfig causal: bool = False dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.head_dim = self.config.hidden_size // self.config.num_attention_heads if self.config.hidden_size % self.config.num_attention_heads != 0: raise ValueError( "`config.hidden_size`: {self.config.hidden_size} has to be a multiple of `config.num_attention_heads` " " : {self.config.num_attention_heads}" ) self.query = nn.Dense( self.config.hidden_size, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) self.key = nn.Dense( self.config.hidden_size, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) self.value = nn.Dense( self.config.hidden_size, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) if self.causal: self.causal_mask = make_causal_mask( jnp.ones((1, self.config.max_position_embeddings), dtype="bool"), dtype="bool" ) def _split_heads(self, hidden_states): return hidden_states.reshape(hidden_states.shape[:2] + (self.config.num_attention_heads, self.head_dim)) def _merge_heads(self, hidden_states): return hidden_states.reshape(hidden_states.shape[:2] + (self.config.hidden_size,)) @nn.compact # Copied from transformers.models.bart.modeling_flax_bart.FlaxBartAttention._concatenate_to_cache def _concatenate_to_cache(self, key, value, query, attention_mask): """ This function takes projected key, value states from a single input token and concatenates the states to cached states from previous steps. 
This function is slightly adapted from the official Flax repository: https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/linen/attention.py#L252 """ # detect if we're initializing by absence of existing cache data. is_initialized = self.has_variable("cache", "cached_key") cached_key = self.variable("cache", "cached_key", jnp.zeros, key.shape, key.dtype) cached_value = self.variable("cache", "cached_value", jnp.zeros, value.shape, value.dtype) cache_index = self.variable("cache", "cache_index", lambda: jnp.array(0, dtype=jnp.int32)) if is_initialized: *batch_dims, max_length, num_heads, depth_per_head = cached_key.value.shape # update key, value caches with our new 1d spatial slices cur_index = cache_index.value indices = (0,) * len(batch_dims) + (cur_index, 0, 0) key = lax.dynamic_update_slice(cached_key.value, key, indices) value = lax.dynamic_update_slice(cached_value.value, value, indices) cached_key.value = key cached_value.value = value num_updated_cache_vectors = query.shape[1] cache_index.value = cache_index.value + num_updated_cache_vectors # causal mask for cached decoder self-attention: our single query position should only attend to those key positions that have already been generated and cached, not the remaining zero elements. pad_mask = jnp.broadcast_to( jnp.arange(max_length) < cur_index + num_updated_cache_vectors, tuple(batch_dims) + (1, num_updated_cache_vectors, max_length), ) attention_mask = combine_masks(pad_mask, attention_mask) return key, value, attention_mask def __call__( self, hidden_states, attention_mask, layer_head_mask, key_value_states: Optional[jnp.ndarray] = None, init_cache: bool = False, deterministic=True, output_attentions: bool = False, ): # if key_value_states are provided this layer is used as a cross-attention layer # for the decoder is_cross_attention = key_value_states is not None batch_size = hidden_states.shape[0] # get query proj query_states = self.query(hidden_states) # get key, value proj if is_cross_attention: # cross_attentions key_states = self.key(key_value_states) value_states = self.value(key_value_states) else: # self_attention key_states = self.key(hidden_states) value_states = self.value(hidden_states) query_states = self._split_heads(query_states) key_states = self._split_heads(key_states) value_states = self._split_heads(value_states) # handle cache prepare causal attention mask if self.causal: query_length, key_length = query_states.shape[1], key_states.shape[1] if self.has_variable("cache", "cached_key"): mask_shift = self.variables["cache"]["cache_index"] max_decoder_length = self.variables["cache"]["cached_key"].shape[1] causal_mask = lax.dynamic_slice( self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length) ) else: causal_mask = self.causal_mask[:, :, :query_length, :key_length] causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:]) # combine masks if needed if attention_mask is not None and self.causal: attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape) attention_mask = combine_masks(attention_mask, causal_mask) elif self.causal: attention_mask = causal_mask elif attention_mask is not None: attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) # During fast autoregressive decoding, we feed one position at a time, # and cache the keys and values step by step. 
if self.causal and (self.has_variable("cache", "cached_key") or init_cache): key_states, value_states, attention_mask = self._concatenate_to_cache( key_states, value_states, query_states, attention_mask ) # Convert the boolean attention mask to an attention bias. if attention_mask is not None: # attention mask in the form of attention bias attention_bias = lax.select( attention_mask > 0, jnp.full(attention_mask.shape, 0.0).astype(self.dtype), jnp.full(attention_mask.shape, jnp.finfo(self.dtype).min).astype(self.dtype), ) else: attention_bias = None dropout_rng = None if not deterministic and self.config.attention_probs_dropout_prob > 0.0: dropout_rng = self.make_rng("dropout") attn_weights = dot_product_attention_weights( query_states, key_states, bias=attention_bias, dropout_rng=dropout_rng, dropout_rate=self.config.attention_probs_dropout_prob, broadcast_dropout=True, deterministic=deterministic, dtype=self.dtype, precision=None, ) # Mask heads if we want to if layer_head_mask is not None: attn_weights = jnp.einsum("...hqk,h->...hqk", attn_weights, layer_head_mask) attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value_states) attn_output = attn_output.reshape(attn_output.shape[:2] + (-1,)) outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) return outputs # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertSelfOutput with Bert->XLMRoberta class FlaxXLMRobertaSelfOutput(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.dense = nn.Dense( self.config.hidden_size, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), dtype=self.dtype, ) self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) def __call__(self, hidden_states, input_tensor, deterministic: bool = True): hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states, deterministic=deterministic) hidden_states = self.LayerNorm(hidden_states + input_tensor) return hidden_states # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertAttention with Bert->XLMRoberta class FlaxXLMRobertaAttention(nn.Module): config: XLMRobertaConfig causal: bool = False dtype: jnp.dtype = jnp.float32 def setup(self): self.self = FlaxXLMRobertaSelfAttention(self.config, causal=self.causal, dtype=self.dtype) self.output = FlaxXLMRobertaSelfOutput(self.config, dtype=self.dtype) def __call__( self, hidden_states, attention_mask, layer_head_mask, key_value_states=None, init_cache=False, deterministic=True, output_attentions: bool = False, ): # Attention mask comes in as attention_mask.shape == (*batch_sizes, kv_length) # FLAX expects: attention_mask.shape == (*batch_sizes, 1, 1, kv_length) such that it is broadcastable # with attn_weights.shape == (*batch_sizes, num_heads, q_length, kv_length) attn_outputs = self.self( hidden_states, attention_mask, layer_head_mask=layer_head_mask, key_value_states=key_value_states, init_cache=init_cache, deterministic=deterministic, output_attentions=output_attentions, ) attn_output = attn_outputs[0] hidden_states = self.output(attn_output, hidden_states, deterministic=deterministic) outputs = (hidden_states,) if output_attentions: outputs += (attn_outputs[1],) return outputs # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertIntermediate with Bert->XLMRoberta class FlaxXLMRobertaIntermediate(nn.Module): config: XLMRobertaConfig dtype: 
jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.dense = nn.Dense( self.config.intermediate_size, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), dtype=self.dtype, ) self.activation = ACT2FN[self.config.hidden_act] def __call__(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.activation(hidden_states) return hidden_states # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertOutput with Bert->XLMRoberta class FlaxXLMRobertaOutput(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.dense = nn.Dense( self.config.hidden_size, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), dtype=self.dtype, ) self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) def __call__(self, hidden_states, attention_output, deterministic: bool = True): hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states, deterministic=deterministic) hidden_states = self.LayerNorm(hidden_states + attention_output) return hidden_states # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertLayer with Bert->XLMRoberta class FlaxXLMRobertaLayer(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.attention = FlaxXLMRobertaAttention(self.config, causal=self.config.is_decoder, dtype=self.dtype) self.intermediate = FlaxXLMRobertaIntermediate(self.config, dtype=self.dtype) self.output = FlaxXLMRobertaOutput(self.config, dtype=self.dtype) if self.config.add_cross_attention: self.crossattention = FlaxXLMRobertaAttention(self.config, causal=False, dtype=self.dtype) def __call__( self, hidden_states, attention_mask, layer_head_mask, encoder_hidden_states: Optional[jnp.ndarray] = None, encoder_attention_mask: Optional[jnp.ndarray] = None, init_cache: bool = False, deterministic: bool = True, output_attentions: bool = False, ): # Self Attention attention_outputs = self.attention( hidden_states, attention_mask, layer_head_mask=layer_head_mask, init_cache=init_cache, deterministic=deterministic, output_attentions=output_attentions, ) attention_output = attention_outputs[0] # Cross-Attention Block if encoder_hidden_states is not None: cross_attention_outputs = self.crossattention( attention_output, attention_mask=encoder_attention_mask, layer_head_mask=layer_head_mask, key_value_states=encoder_hidden_states, deterministic=deterministic, output_attentions=output_attentions, ) attention_output = cross_attention_outputs[0] hidden_states = self.intermediate(attention_output) hidden_states = self.output(hidden_states, attention_output, deterministic=deterministic) outputs = (hidden_states,) if output_attentions: outputs += (attention_outputs[1],) if encoder_hidden_states is not None: outputs += (cross_attention_outputs[1],) return outputs # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertLayerCollection with Bert->XLMRoberta class FlaxXLMRobertaLayerCollection(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation gradient_checkpointing: bool = False def setup(self): if self.gradient_checkpointing: FlaxXLMRobertaCheckpointLayer = remat(FlaxXLMRobertaLayer, static_argnums=(5, 6, 7)) self.layers = [ FlaxXLMRobertaCheckpointLayer(self.config, name=str(i), dtype=self.dtype) for i in 
range(self.config.num_hidden_layers) ] else: self.layers = [ FlaxXLMRobertaLayer(self.config, name=str(i), dtype=self.dtype) for i in range(self.config.num_hidden_layers) ] def __call__( self, hidden_states, attention_mask, head_mask, encoder_hidden_states: Optional[jnp.ndarray] = None, encoder_attention_mask: Optional[jnp.ndarray] = None, init_cache: bool = False, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): all_attentions = () if output_attentions else None all_hidden_states = () if output_hidden_states else None all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None # Check if head_mask has a correct number of layers specified if desired if head_mask is not None: if head_mask.shape[0] != (len(self.layers)): raise ValueError( f"The head_mask should be specified for {len(self.layers)} layers, but it is for " f" {head_mask.shape[0]}." ) for i, layer in enumerate(self.layers): if output_hidden_states: all_hidden_states += (hidden_states,) layer_outputs = layer( hidden_states, attention_mask, head_mask[i] if head_mask is not None else None, encoder_hidden_states, encoder_attention_mask, init_cache, deterministic, output_attentions, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions += (layer_outputs[1],) if encoder_hidden_states is not None: all_cross_attentions += (layer_outputs[2],) if output_hidden_states: all_hidden_states += (hidden_states,) outputs = (hidden_states, all_hidden_states, all_attentions, all_cross_attentions) if not return_dict: return tuple(v for v in outputs if v is not None) return FlaxBaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions, cross_attentions=all_cross_attentions, ) # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertEncoder with Bert->XLMRoberta class FlaxXLMRobertaEncoder(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation gradient_checkpointing: bool = False def setup(self): self.layer = FlaxXLMRobertaLayerCollection( self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing, ) def __call__( self, hidden_states, attention_mask, head_mask, encoder_hidden_states: Optional[jnp.ndarray] = None, encoder_attention_mask: Optional[jnp.ndarray] = None, init_cache: bool = False, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): return self.layer( hidden_states, attention_mask, head_mask=head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, init_cache=init_cache, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertPooler with Bert->XLMRoberta class FlaxXLMRobertaPooler(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.dense = nn.Dense( self.config.hidden_size, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), dtype=self.dtype, ) def __call__(self, hidden_states): cls_hidden_state = hidden_states[:, 0] cls_hidden_state = self.dense(cls_hidden_state) return nn.tanh(cls_hidden_state) # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaLMHead with Roberta->XLMRoberta class 
FlaxXLMRobertaLMHead(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 bias_init: Callable[..., np.ndarray] = jax.nn.initializers.zeros def setup(self): self.dense = nn.Dense( self.config.hidden_size, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) self.layer_norm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) self.decoder = nn.Dense( self.config.vocab_size, dtype=self.dtype, use_bias=False, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) self.bias = self.param("bias", self.bias_init, (self.config.vocab_size,)) def __call__(self, hidden_states, shared_embedding=None): hidden_states = self.dense(hidden_states) hidden_states = ACT2FN["gelu"](hidden_states) hidden_states = self.layer_norm(hidden_states) if shared_embedding is not None: hidden_states = self.decoder.apply({"params": {"kernel": shared_embedding.T}}, hidden_states) else: hidden_states = self.decoder(hidden_states) bias = jnp.asarray(self.bias, self.dtype) hidden_states += bias return hidden_states # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaClassificationHead with Roberta->XLMRoberta class FlaxXLMRobertaClassificationHead(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 def setup(self): self.dense = nn.Dense( self.config.hidden_size, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) classifier_dropout = ( self.config.classifier_dropout if self.config.classifier_dropout is not None else self.config.hidden_dropout_prob ) self.dropout = nn.Dropout(rate=classifier_dropout) self.out_proj = nn.Dense( self.config.num_labels, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range), ) def __call__(self, hidden_states, deterministic=True): hidden_states = hidden_states[:, 0, :] # take <s> token (equiv. to [CLS]) hidden_states = self.dropout(hidden_states, deterministic=deterministic) hidden_states = self.dense(hidden_states) hidden_states = nn.tanh(hidden_states) hidden_states = self.dropout(hidden_states, deterministic=deterministic) hidden_states = self.out_proj(hidden_states) return hidden_states # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaPreTrainedModel with Roberta->XLMRoberta, roberta->xlm-roberta, ROBERTA->XLM_ROBERTA class FlaxXLMRobertaPreTrainedModel(FlaxPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = XLMRobertaConfig base_model_prefix = "xlm-roberta" module_class: nn.Module = None def __init__( self, config: XLMRobertaConfig, input_shape: tuple = (1, 1), seed: int = 0, dtype: jnp.dtype = jnp.float32, _do_init: bool = True, gradient_checkpointing: bool = False, **kwargs, ): module = self.module_class(config=config, dtype=dtype, gradient_checkpointing=gradient_checkpointing, **kwargs) super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init) # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertPreTrainedModel.enable_gradient_checkpointing def enable_gradient_checkpointing(self): self._module = self.module_class( config=self.config, dtype=self.dtype, gradient_checkpointing=True, ) def init_weights(self, rng: jax.random.PRNGKey, input_shape: tuple, params: FrozenDict = None) -> FrozenDict: # init input tensors input_ids = jnp.zeros(input_shape, dtype="i4") token_type_ids = jnp.ones_like(input_ids) position_ids = create_position_ids_from_input_ids(input_ids, self.config.pad_token_id) attention_mask = jnp.ones_like(input_ids) head_mask = jnp.ones((self.config.num_hidden_layers, self.config.num_attention_heads)) params_rng, dropout_rng = jax.random.split(rng) rngs = {"params": params_rng, "dropout": dropout_rng} if self.config.add_cross_attention: encoder_hidden_states = jnp.zeros(input_shape + (self.config.hidden_size,)) encoder_attention_mask = attention_mask module_init_outputs = self.module.init( rngs, input_ids, attention_mask, token_type_ids, position_ids, head_mask, encoder_hidden_states, encoder_attention_mask, return_dict=False, ) else: module_init_outputs = self.module.init( rngs, input_ids, attention_mask, token_type_ids, position_ids, head_mask, return_dict=False ) random_params = module_init_outputs["params"] if params is not None: random_params = flatten_dict(unfreeze(random_params)) params = flatten_dict(unfreeze(params)) for missing_key in self._missing_keys: params[missing_key] = random_params[missing_key] self._missing_keys = set() return freeze(unflatten_dict(params)) else: return random_params # Copied from transformers.models.bart.modeling_flax_bart.FlaxBartDecoderPreTrainedModel.init_cache def init_cache(self, batch_size, max_length): r""" Args: batch_size (`int`): batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. max_length (`int`): maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized cache. 
""" # init input variables to retrieve cache input_ids = jnp.ones((batch_size, max_length), dtype="i4") attention_mask = jnp.ones_like(input_ids, dtype="i4") position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) init_variables = self.module.init( jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict=False, init_cache=True ) return unfreeze(init_variables["cache"]) @add_start_docstrings_to_model_forward(XLM_ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) def __call__( self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, params: Optional[dict] = None, dropout_rng: jax.random.PRNGKey = None, train: bool = False, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, past_key_values: Optional[dict] = None, ): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.return_dict # init input tensors if not passed if token_type_ids is None: token_type_ids = jnp.zeros_like(input_ids) if position_ids is None: position_ids = create_position_ids_from_input_ids(input_ids, self.config.pad_token_id) if attention_mask is None: attention_mask = jnp.ones_like(input_ids) if head_mask is None: head_mask = jnp.ones((self.config.num_hidden_layers, self.config.num_attention_heads)) # Handle any PRNG if needed rngs = {} if dropout_rng is not None: rngs["dropout"] = dropout_rng inputs = {"params": params or self.params} if self.config.add_cross_attention: # if past_key_values are passed then cache is already initialized a private flag init_cache has to be passed # down to ensure cache is used. 
It has to be made sure that cache is marked as mutable so that it can be # changed by FlaxXLMRobertaAttention module if past_key_values: inputs["cache"] = past_key_values mutable = ["cache"] else: mutable = False outputs = self.module.apply( inputs, jnp.array(input_ids, dtype="i4"), jnp.array(attention_mask, dtype="i4"), token_type_ids=jnp.array(token_type_ids, dtype="i4"), position_ids=jnp.array(position_ids, dtype="i4"), head_mask=jnp.array(head_mask, dtype="i4"), encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, deterministic=not train, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, rngs=rngs, mutable=mutable, ) # add updated cache to model output if past_key_values is not None and return_dict: outputs, past_key_values = outputs outputs["past_key_values"] = unfreeze(past_key_values["cache"]) return outputs elif past_key_values is not None and not return_dict: outputs, past_key_values = outputs outputs = outputs[:1] + (unfreeze(past_key_values["cache"]),) + outputs[1:] else: outputs = self.module.apply( inputs, jnp.array(input_ids, dtype="i4"), jnp.array(attention_mask, dtype="i4"), token_type_ids=jnp.array(token_type_ids, dtype="i4"), position_ids=jnp.array(position_ids, dtype="i4"), head_mask=jnp.array(head_mask, dtype="i4"), deterministic=not train, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, rngs=rngs, ) return outputs # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertModule with Bert->XLMRoberta class FlaxXLMRobertaModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation add_pooling_layer: bool = True gradient_checkpointing: bool = False def setup(self): self.embeddings = FlaxXLMRobertaEmbeddings(self.config, dtype=self.dtype) self.encoder = FlaxXLMRobertaEncoder( self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing, ) self.pooler = FlaxXLMRobertaPooler(self.config, dtype=self.dtype) def __call__( self, input_ids, attention_mask, token_type_ids: Optional[jnp.ndarray] = None, position_ids: Optional[jnp.ndarray] = None, head_mask: Optional[jnp.ndarray] = None, encoder_hidden_states: Optional[jnp.ndarray] = None, encoder_attention_mask: Optional[jnp.ndarray] = None, init_cache: bool = False, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): # make sure `token_type_ids` is correctly initialized when not passed if token_type_ids is None: token_type_ids = jnp.zeros_like(input_ids) # make sure `position_ids` is correctly initialized when not passed if position_ids is None: position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) hidden_states = self.embeddings( input_ids, token_type_ids, position_ids, attention_mask, deterministic=deterministic ) outputs = self.encoder( hidden_states, attention_mask, head_mask=head_mask, deterministic=deterministic, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, init_cache=init_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] pooled = self.pooler(hidden_states) if self.add_pooling_layer else None if not return_dict: # if pooled is None, don't return it if pooled is None: return (hidden_states,) + outputs[1:] return (hidden_states, pooled) + outputs[1:] 
return FlaxBaseModelOutputWithPoolingAndCrossAttentions( last_hidden_state=hidden_states, pooler_output=pooled, hidden_states=outputs.hidden_states, attentions=outputs.attentions, cross_attentions=outputs.cross_attentions, ) @add_start_docstrings( "The bare XLM RoBERTa Model transformer outputting raw hidden-states without any specific head on top.", XLM_ROBERTA_START_DOCSTRING, ) class FlaxXLMRobertaModel(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaModule append_call_sample_docstring(FlaxXLMRobertaModel, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutputWithPooling, _CONFIG_FOR_DOC) # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaForMaskedLMModule with Roberta->XLMRoberta class FlaxXLMRobertaForMaskedLMModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 gradient_checkpointing: bool = False def setup(self): self.roberta = FlaxXLMRobertaModule( config=self.config, add_pooling_layer=False, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing, ) self.lm_head = FlaxXLMRobertaLMHead(config=self.config, dtype=self.dtype) def __call__( self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): # Model outputs = self.roberta( input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] if self.config.tie_word_embeddings: shared_embedding = self.roberta.variables["params"]["embeddings"]["word_embeddings"]["embedding"] else: shared_embedding = None # Compute the prediction scores logits = self.lm_head(hidden_states, shared_embedding=shared_embedding) if not return_dict: return (logits,) + outputs[1:] return FlaxMaskedLMOutput( logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings("""XLM RoBERTa Model with a `language modeling` head on top.""", XLM_ROBERTA_START_DOCSTRING) class FlaxXLMRobertaForMaskedLM(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaForMaskedLMModule append_call_sample_docstring( FlaxXLMRobertaForMaskedLM, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutputWithPooling, _CONFIG_FOR_DOC, mask="<mask>", ) # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaForSequenceClassificationModule with Roberta->XLMRoberta class FlaxXLMRobertaForSequenceClassificationModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 gradient_checkpointing: bool = False def setup(self): self.roberta = FlaxXLMRobertaModule( config=self.config, dtype=self.dtype, add_pooling_layer=False, gradient_checkpointing=self.gradient_checkpointing, ) self.classifier = FlaxXLMRobertaClassificationHead(config=self.config, dtype=self.dtype) def __call__( self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): # Model outputs = self.roberta( input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = outputs[0] logits = self.classifier(sequence_output, deterministic=deterministic) if not return_dict: return (logits,) + outputs[1:] return 
FlaxSequenceClassifierOutput( logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ XLM Roberta Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. """, XLM_ROBERTA_START_DOCSTRING, ) class FlaxXLMRobertaForSequenceClassification(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaForSequenceClassificationModule append_call_sample_docstring( FlaxXLMRobertaForSequenceClassification, _CHECKPOINT_FOR_DOC, FlaxSequenceClassifierOutput, _CONFIG_FOR_DOC, ) # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForMultipleChoiceModule with Bert->XLMRoberta, with self.bert->self.roberta class FlaxXLMRobertaForMultipleChoiceModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 gradient_checkpointing: bool = False def setup(self): self.roberta = FlaxXLMRobertaModule( config=self.config, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing, ) self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) self.classifier = nn.Dense(1, dtype=self.dtype) def __call__( self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): num_choices = input_ids.shape[1] input_ids = input_ids.reshape(-1, input_ids.shape[-1]) if input_ids is not None else None attention_mask = attention_mask.reshape(-1, attention_mask.shape[-1]) if attention_mask is not None else None token_type_ids = token_type_ids.reshape(-1, token_type_ids.shape[-1]) if token_type_ids is not None else None position_ids = position_ids.reshape(-1, position_ids.shape[-1]) if position_ids is not None else None # Model outputs = self.roberta( input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) pooled_output = outputs[1] pooled_output = self.dropout(pooled_output, deterministic=deterministic) logits = self.classifier(pooled_output) reshaped_logits = logits.reshape(-1, num_choices) if not return_dict: return (reshaped_logits,) + outputs[2:] return FlaxMultipleChoiceModelOutput( logits=reshaped_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ XLM Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. 
""", XLM_ROBERTA_START_DOCSTRING, ) class FlaxXLMRobertaForMultipleChoice(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaForMultipleChoiceModule overwrite_call_docstring( FlaxXLMRobertaForMultipleChoice, XLM_ROBERTA_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length") ) append_call_sample_docstring( FlaxXLMRobertaForMultipleChoice, _CHECKPOINT_FOR_DOC, FlaxMultipleChoiceModelOutput, _CONFIG_FOR_DOC, ) # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForTokenClassificationModule with Bert->XLMRoberta, with self.bert->self.roberta class FlaxXLMRobertaForTokenClassificationModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 gradient_checkpointing: bool = False def setup(self): self.roberta = FlaxXLMRobertaModule( config=self.config, dtype=self.dtype, add_pooling_layer=False, gradient_checkpointing=self.gradient_checkpointing, ) classifier_dropout = ( self.config.classifier_dropout if self.config.classifier_dropout is not None else self.config.hidden_dropout_prob ) self.dropout = nn.Dropout(rate=classifier_dropout) self.classifier = nn.Dense(self.config.num_labels, dtype=self.dtype) def __call__( self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): # Model outputs = self.roberta( input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] hidden_states = self.dropout(hidden_states, deterministic=deterministic) logits = self.classifier(hidden_states) if not return_dict: return (logits,) + outputs[1:] return FlaxTokenClassifierOutput( logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ XLM Roberta Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. 
""", XLM_ROBERTA_START_DOCSTRING, ) class FlaxXLMRobertaForTokenClassification(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaForTokenClassificationModule append_call_sample_docstring( FlaxXLMRobertaForTokenClassification, _CHECKPOINT_FOR_DOC, FlaxTokenClassifierOutput, _CONFIG_FOR_DOC, ) # Copied from transformers.models.bert.modeling_flax_bert.FlaxBertForQuestionAnsweringModule with Bert->XLMRoberta, with self.bert->self.roberta class FlaxXLMRobertaForQuestionAnsweringModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 gradient_checkpointing: bool = False def setup(self): self.roberta = FlaxXLMRobertaModule( config=self.config, dtype=self.dtype, add_pooling_layer=False, gradient_checkpointing=self.gradient_checkpointing, ) self.qa_outputs = nn.Dense(self.config.num_labels, dtype=self.dtype) def __call__( self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): # Model outputs = self.roberta( input_ids, attention_mask, token_type_ids, position_ids, head_mask, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] logits = self.qa_outputs(hidden_states) start_logits, end_logits = jnp.split(logits, self.config.num_labels, axis=-1) start_logits = start_logits.squeeze(-1) end_logits = end_logits.squeeze(-1) if not return_dict: return (start_logits, end_logits) + outputs[1:] return FlaxQuestionAnsweringModelOutput( start_logits=start_logits, end_logits=end_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ XLM Roberta Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
""", XLM_ROBERTA_START_DOCSTRING, ) class FlaxXLMRobertaForQuestionAnswering(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaForQuestionAnsweringModule append_call_sample_docstring( FlaxXLMRobertaForQuestionAnswering, _CHECKPOINT_FOR_DOC, FlaxQuestionAnsweringModelOutput, _CONFIG_FOR_DOC, ) # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaForCausalLMModule with Roberta->XLMRoberta class FlaxXLMRobertaForCausalLMModule(nn.Module): config: XLMRobertaConfig dtype: jnp.dtype = jnp.float32 gradient_checkpointing: bool = False def setup(self): self.roberta = FlaxXLMRobertaModule( config=self.config, add_pooling_layer=False, dtype=self.dtype, gradient_checkpointing=self.gradient_checkpointing, ) self.lm_head = FlaxXLMRobertaLMHead(config=self.config, dtype=self.dtype) def __call__( self, input_ids, attention_mask, position_ids, token_type_ids: Optional[jnp.ndarray] = None, head_mask: Optional[jnp.ndarray] = None, encoder_hidden_states: Optional[jnp.ndarray] = None, encoder_attention_mask: Optional[jnp.ndarray] = None, init_cache: bool = False, deterministic: bool = True, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, ): # Model outputs = self.roberta( input_ids, attention_mask, token_type_ids, position_ids, head_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, init_cache=init_cache, deterministic=deterministic, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] if self.config.tie_word_embeddings: shared_embedding = self.roberta.variables["params"]["embeddings"]["word_embeddings"]["embedding"] else: shared_embedding = None # Compute the prediction scores logits = self.lm_head(hidden_states, shared_embedding=shared_embedding) if not return_dict: return (logits,) + outputs[1:] return FlaxCausalLMOutputWithCrossAttentions( logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, cross_attentions=outputs.cross_attentions, ) @add_start_docstrings( """ XLM Roberta Model with a language modeling head on top (a linear layer on top of the hidden-states output) e.g for autoregressive tasks. """, XLM_ROBERTA_START_DOCSTRING, ) # Copied from transformers.models.roberta.modeling_flax_roberta.FlaxRobertaForCausalLM with Roberta->XLMRoberta class FlaxXLMRobertaForCausalLM(FlaxXLMRobertaPreTrainedModel): module_class = FlaxXLMRobertaForCausalLMModule def prepare_inputs_for_generation(self, input_ids, max_length, attention_mask: Optional[jax.Array] = None): # initializing the cache batch_size, seq_length = input_ids.shape past_key_values = self.init_cache(batch_size, max_length) # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. # But since the decoder uses a causal mask, those positions are masked anyway. 
# Thus, we can create a single static attention_mask here, which is more efficient for compilation extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") if attention_mask is not None: position_ids = attention_mask.cumsum(axis=-1) - 1 extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0)) else: position_ids = jnp.broadcast_to(jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length)) return { "past_key_values": past_key_values, "attention_mask": extended_attention_mask, "position_ids": position_ids, } def update_inputs_for_generation(self, model_outputs, model_kwargs): model_kwargs["past_key_values"] = model_outputs.past_key_values model_kwargs["position_ids"] = model_kwargs["position_ids"][:, -1:] + 1 return model_kwargs append_call_sample_docstring( FlaxXLMRobertaForCausalLM, _CHECKPOINT_FOR_DOC, FlaxCausalLMOutputWithCrossAttentions, _CONFIG_FOR_DOC, ) __all__ = [ "FlaxXLMRobertaForMaskedLM", "FlaxXLMRobertaForCausalLM", "FlaxXLMRobertaForMultipleChoice", "FlaxXLMRobertaForQuestionAnswering", "FlaxXLMRobertaForSequenceClassification", "FlaxXLMRobertaForTokenClassification", "FlaxXLMRobertaModel", "FlaxXLMRobertaPreTrainedModel", ]
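# ---------------------------------------------------------------------------------------------------
# Editor's note: the block below is an illustrative, minimal sketch of masked-LM inference with the
# classes defined above; it is not part of the auto-generated module. The checkpoint name mirrors
# `_CHECKPOINT_FOR_DOC`; the example sentence is an arbitrary assumption, and Flax weights are assumed
# to exist for that checkpoint (otherwise `from_pretrained(..., from_pt=True)` can convert them).
if __name__ == "__main__":
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
    model = FlaxXLMRobertaForMaskedLM.from_pretrained("FacebookAI/xlm-roberta-base")

    # Flax models accept plain numpy arrays, so `return_tensors="np"` is sufficient here.
    encoded = tokenizer("The capital of France is <mask>.", return_tensors="np")
    outputs = model(**encoded)

    # Take the highest-scoring vocabulary entry at the (single) mask position.
    mask_position = int((encoded["input_ids"][0] == tokenizer.mask_token_id).argmax())
    predicted_id = int(outputs.logits[0, mask_position].argmax(-1))
    print(tokenizer.decode([predicted_id]))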
transformers/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py/0
{ "file_path": "transformers/src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py", "repo_id": "transformers", "token_count": 26118 }
558
# coding=utf-8
# Copyright 2022 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""YOSO model configuration"""

from ...configuration_utils import PretrainedConfig
from ...utils import logging


logger = logging.get_logger(__name__)


class YosoConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`YosoModel`]. It is used to instantiate a YOSO
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with
    the defaults will yield a similar configuration to that of the YOSO
    [uw-madison/yoso-4096](https://huggingface.co/uw-madison/yoso-4096) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 50265):
            Vocabulary size of the YOSO model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`YosoModel`].
        hidden_size (`int`, *optional*, defaults to 768):
            Dimension of the encoder layers and the pooler layer.
        num_hidden_layers (`int`, *optional*, defaults to 12):
            Number of hidden layers in the Transformer encoder.
        num_attention_heads (`int`, *optional*, defaults to 12):
            Number of attention heads for each attention layer in the Transformer encoder.
        intermediate_size (`int`, *optional*, defaults to 3072):
            Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
        hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
            The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
            `"relu"`, `"selu"` and `"gelu_new"` are supported.
        hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
        attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
            The dropout ratio for the attention probabilities.
        max_position_embeddings (`int`, *optional*, defaults to 4096):
            The maximum sequence length that this model might ever be used with. Typically set this to something large
            just in case (e.g., 512 or 1024 or 2048).
        type_vocab_size (`int`, *optional*, defaults to 1):
            The vocabulary size of the `token_type_ids` passed when calling [`YosoModel`].
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        layer_norm_eps (`float`, *optional*, defaults to 1e-12):
            The epsilon used by the layer normalization layers.
        position_embedding_type (`str`, *optional*, defaults to `"absolute"`):
            Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`.
        use_expectation (`bool`, *optional*, defaults to `True`):
            Whether or not to use YOSO Expectation. Overrides any effect of num_hash.
        hash_code_len (`int`, *optional*, defaults to 9):
            The length of hashes generated by the hash functions.
        num_hash (`int`, *optional*, defaults to 64):
            Number of hash functions used in [`YosoSelfAttention`].
        conv_window (`int`, *optional*):
            Kernel size of depth-wise convolution.
        use_fast_hash (`bool`, *optional*, defaults to `True`):
            Whether or not to use custom cuda kernels which perform fast random projection via hadamard transform.
        lsh_backward (`bool`, *optional*, defaults to `True`):
            Whether or not to perform backpropagation using Locality Sensitive Hashing.

    Example:

    ```python
    >>> from transformers import YosoConfig, YosoModel

    >>> # Initializing a YOSO uw-madison/yoso-4096 style configuration
    >>> configuration = YosoConfig()

    >>> # Initializing a model (with random weights) from the uw-madison/yoso-4096 style configuration
    >>> model = YosoModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "yoso"

    def __init__(
        self,
        vocab_size=50265,
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
        intermediate_size=3072,
        hidden_act="gelu",
        hidden_dropout_prob=0.1,
        attention_probs_dropout_prob=0.1,
        max_position_embeddings=4096,
        type_vocab_size=1,
        initializer_range=0.02,
        layer_norm_eps=1e-12,
        position_embedding_type="absolute",
        use_expectation=True,
        hash_code_len=9,
        num_hash=64,
        conv_window=None,
        use_fast_hash=True,
        lsh_backward=True,
        pad_token_id=1,
        bos_token_id=0,
        eos_token_id=2,
        **kwargs,
    ):
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)

        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
        self.hidden_act = hidden_act
        self.hidden_dropout_prob = hidden_dropout_prob
        self.attention_probs_dropout_prob = attention_probs_dropout_prob
        self.initializer_range = initializer_range
        self.type_vocab_size = type_vocab_size
        self.layer_norm_eps = layer_norm_eps
        self.position_embedding_type = position_embedding_type
        self.use_expectation = use_expectation
        self.hash_code_len = hash_code_len
        self.num_hash = num_hash
        self.conv_window = conv_window
        self.use_fast_hash = use_fast_hash
        self.lsh_backward = lsh_backward


__all__ = ["YosoConfig"]
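# ---------------------------------------------------------------------------------------------------
# Editor's note: an illustrative sketch (not part of the original file) showing how the LSH-related
# arguments documented above map onto the config object. The specific values are arbitrary
# assumptions chosen only for demonstration.
if __name__ == "__main__":
    config = YosoConfig(
        max_position_embeddings=2048,
        use_expectation=False,  # disable YOSO Expectation so that `num_hash` actually takes effect
        num_hash=32,
        hash_code_len=8,
        use_fast_hash=False,  # skip the custom CUDA hashing kernels
        lsh_backward=True,
    )
    print(config.max_position_embeddings, config.num_hash, config.hash_code_len)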
transformers/src/transformers/models/yoso/configuration_yoso.py/0
{ "file_path": "transformers/src/transformers/models/yoso/configuration_yoso.py", "repo_id": "transformers", "token_count": 2554 }
559
from typing import Any, Union, overload import numpy as np from ..utils import add_end_docstrings, is_tf_available, is_torch_available, logging from .base import GenericTensor, Pipeline, PipelineException, build_pipeline_init_args if is_tf_available(): import tensorflow as tf from ..tf_utils import stable_softmax if is_torch_available(): import torch logger = logging.get_logger(__name__) @add_end_docstrings( build_pipeline_init_args(has_tokenizer=True), r""" top_k (`int`, *optional*, defaults to 5): The number of predictions to return. targets (`str` or `list[str]`, *optional*): When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower). tokenizer_kwargs (`dict`, *optional*): Additional dictionary of keyword arguments passed along to the tokenizer.""", ) class FillMaskPipeline(Pipeline): _load_processor = False _load_image_processor = False _load_feature_extractor = False _load_tokenizer = True """ Masked language modeling prediction pipeline using any `ModelWithLMHead`. See the [masked language modeling examples](../task_summary#masked-language-modeling) for more information. Example: ```python >>> from transformers import pipeline >>> fill_masker = pipeline(model="google-bert/bert-base-uncased") >>> fill_masker("This is a simple [MASK].") [{'score': 0.042, 'token': 3291, 'token_str': 'problem', 'sequence': 'this is a simple problem.'}, {'score': 0.031, 'token': 3160, 'token_str': 'question', 'sequence': 'this is a simple question.'}, {'score': 0.03, 'token': 8522, 'token_str': 'equation', 'sequence': 'this is a simple equation.'}, {'score': 0.027, 'token': 2028, 'token_str': 'one', 'sequence': 'this is a simple one.'}, {'score': 0.024, 'token': 3627, 'token_str': 'rule', 'sequence': 'this is a simple rule.'}] ``` Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial) This mask filling pipeline can currently be loaded from [`pipeline`] using the following task identifier: `"fill-mask"`. The models that this pipeline can use are models that have been trained with a masked language modeling objective, which includes the bi-directional models in the library. See the up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=fill-mask). <Tip> This pipeline only works for inputs with exactly one token masked. Experimental: We added support for multiple masks. The returned values are raw model output, and correspond to disjoint probabilities where one might expect joint probabilities (See [discussion](https://github.com/huggingface/transformers/pull/10222)). </Tip> <Tip> This pipeline now supports tokenizer_kwargs. For example try: ```python >>> from transformers import pipeline >>> fill_masker = pipeline(model="google-bert/bert-base-uncased") >>> tokenizer_kwargs = {"truncation": True} >>> fill_masker( ... "This is a simple [MASK]. " + "...with a large amount of repeated text appended. " * 100, ... tokenizer_kwargs=tokenizer_kwargs, ... 
) ``` </Tip> """ def get_masked_index(self, input_ids: GenericTensor) -> np.ndarray: if self.framework == "tf": masked_index = tf.where(input_ids == self.tokenizer.mask_token_id).numpy() elif self.framework == "pt": masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False) else: raise ValueError("Unsupported framework") return masked_index def _ensure_exactly_one_mask_token(self, input_ids: GenericTensor) -> np.ndarray: masked_index = self.get_masked_index(input_ids) numel = np.prod(masked_index.shape) if numel < 1: raise PipelineException( "fill-mask", self.model.base_model_prefix, f"No mask_token ({self.tokenizer.mask_token}) found on the input", ) def ensure_exactly_one_mask_token(self, model_inputs: GenericTensor): if isinstance(model_inputs, list): for model_input in model_inputs: self._ensure_exactly_one_mask_token(model_input["input_ids"][0]) else: for input_ids in model_inputs["input_ids"]: self._ensure_exactly_one_mask_token(input_ids) def preprocess( self, inputs, return_tensors=None, tokenizer_kwargs=None, **preprocess_parameters ) -> dict[str, GenericTensor]: if return_tensors is None: return_tensors = self.framework if tokenizer_kwargs is None: tokenizer_kwargs = {} model_inputs = self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs) self.ensure_exactly_one_mask_token(model_inputs) return model_inputs def _forward(self, model_inputs): model_outputs = self.model(**model_inputs) model_outputs["input_ids"] = model_inputs["input_ids"] return model_outputs def postprocess(self, model_outputs, top_k=5, target_ids=None): # Cap top_k if there are targets if target_ids is not None and target_ids.shape[0] < top_k: top_k = target_ids.shape[0] input_ids = model_outputs["input_ids"][0] outputs = model_outputs["logits"] if self.framework == "tf": masked_index = tf.where(input_ids == self.tokenizer.mask_token_id).numpy()[:, 0] outputs = outputs.numpy() logits = outputs[0, masked_index, :] probs = stable_softmax(logits, axis=-1) if target_ids is not None: probs = tf.gather_nd(tf.squeeze(probs, 0), target_ids.reshape(-1, 1)) probs = tf.expand_dims(probs, 0) topk = tf.math.top_k(probs, k=top_k) values, predictions = topk.values.numpy(), topk.indices.numpy() else: masked_index = torch.nonzero(input_ids == self.tokenizer.mask_token_id, as_tuple=False).squeeze(-1) # Fill mask pipeline supports only one ${mask_token} per sample logits = outputs[0, masked_index, :] probs = logits.softmax(dim=-1) if target_ids is not None: probs = probs[..., target_ids] values, predictions = probs.topk(top_k) result = [] single_mask = values.shape[0] == 1 for i, (_values, _predictions) in enumerate(zip(values.tolist(), predictions.tolist())): row = [] for v, p in zip(_values, _predictions): # Copy is important since we're going to modify this array in place tokens = input_ids.numpy().copy() if target_ids is not None: p = target_ids[p].tolist() tokens[masked_index[i]] = p # Filter padding out: tokens = tokens[np.where(tokens != self.tokenizer.pad_token_id)] # Originally we skip special tokens to give readable output. 
# For multi masks though, the other [MASK] would be removed otherwise # making the output look odd, so we add them back sequence = self.tokenizer.decode(tokens, skip_special_tokens=single_mask) proposition = {"score": v, "token": p, "token_str": self.tokenizer.decode([p]), "sequence": sequence} row.append(proposition) result.append(row) if single_mask: return result[0] return result def get_target_ids(self, targets, top_k=None): if isinstance(targets, str): targets = [targets] try: vocab = self.tokenizer.get_vocab() except Exception: vocab = {} target_ids = [] for target in targets: id_ = vocab.get(target) if id_ is None: input_ids = self.tokenizer( target, add_special_tokens=False, return_attention_mask=False, return_token_type_ids=False, max_length=1, truncation=True, )["input_ids"] if len(input_ids) == 0: logger.warning( f"The specified target token `{target}` does not exist in the model vocabulary. " "We cannot replace it with anything meaningful, ignoring it" ) continue id_ = input_ids[0] # XXX: If users encounter this pass # it becomes pretty slow, so let's make sure # The warning enables them to fix the input to # get faster performance. logger.warning( f"The specified target token `{target}` does not exist in the model vocabulary. " f"Replacing with `{self.tokenizer.convert_ids_to_tokens(id_)}`." ) target_ids.append(id_) target_ids = list(set(target_ids)) if len(target_ids) == 0: raise ValueError("At least one target must be provided when passed.") target_ids = np.array(target_ids) return target_ids def _sanitize_parameters(self, top_k=None, targets=None, tokenizer_kwargs=None): preprocess_params = {} if tokenizer_kwargs is not None: preprocess_params["tokenizer_kwargs"] = tokenizer_kwargs postprocess_params = {} if targets is not None: target_ids = self.get_target_ids(targets, top_k) postprocess_params["target_ids"] = target_ids if top_k is not None: postprocess_params["top_k"] = top_k if self.tokenizer.mask_token_id is None: raise PipelineException( "fill-mask", self.model.base_model_prefix, "The tokenizer does not define a `mask_token`." ) return preprocess_params, {}, postprocess_params @overload def __call__(self, inputs: str, **kwargs: Any) -> list[dict[str, Any]]: ... @overload def __call__(self, inputs: list[str], **kwargs: Any) -> list[list[dict[str, Any]]]: ... def __call__( self, inputs: Union[str, list[str]], **kwargs: Any ) -> Union[list[dict[str, Any]], list[list[dict[str, Any]]]]: """ Fill the masked token in the text(s) given as inputs. Args: inputs (`str` or `list[str]`): One or several texts (or one list of prompts) with masked tokens. targets (`str` or `list[str]`, *optional*): When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocab. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower). top_k (`int`, *optional*): When passed, overrides the number of predictions to return. Return: A list or a list of list of `dict`: Each result comes as list of dictionaries with the following keys: - **sequence** (`str`) -- The corresponding input with the mask token prediction. - **score** (`float`) -- The corresponding probability. - **token** (`int`) -- The predicted token id (to replace the masked one). - **token_str** (`str`) -- The predicted token (to replace the masked one). """ outputs = super().__call__(inputs, **kwargs) if isinstance(inputs, list) and len(inputs) == 1: return outputs[0] return outputs
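# ---------------------------------------------------------------------------------------------------
# Editor's note: an illustrative sketch (not part of the original file) exercising the `targets` and
# `top_k` handling implemented above. The checkpoint and candidate words are assumptions chosen only
# for demonstration; any fill-mask checkpoint with a mask token would behave the same way.
if __name__ == "__main__":
    from transformers import pipeline

    fill_masker = pipeline("fill-mask", model="google-bert/bert-base-uncased")

    # Scores are restricted to the given targets; a target missing from the vocab would be
    # re-tokenized and its first sub-token used instead (with a warning), as done in `get_target_ids`.
    predictions = fill_masker(
        "Paris is the [MASK] of France.",
        targets=["capital", "center"],
        top_k=2,
    )
    for prediction in predictions:
        print(prediction["token_str"], round(prediction["score"], 3))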
transformers/src/transformers/pipelines/fill_mask.py/0
{ "file_path": "transformers/src/transformers/pipelines/fill_mask.py", "repo_id": "transformers", "token_count": 5175 }
560
import types import warnings from typing import Any, Optional, Union, overload import numpy as np from ..models.bert.tokenization_bert import BasicTokenizer from ..utils import ( ExplicitEnum, add_end_docstrings, is_tf_available, is_torch_available, ) from .base import ArgumentHandler, ChunkPipeline, Dataset, build_pipeline_init_args if is_tf_available(): import tensorflow as tf from ..models.auto.modeling_tf_auto import TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES if is_torch_available(): import torch from ..models.auto.modeling_auto import MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES class TokenClassificationArgumentHandler(ArgumentHandler): """ Handles arguments for token classification. """ def __call__(self, inputs: Union[str, list[str]], **kwargs): is_split_into_words = kwargs.get("is_split_into_words", False) delimiter = kwargs.get("delimiter") if inputs is not None and isinstance(inputs, (list, tuple)) and len(inputs) > 0: inputs = list(inputs) batch_size = len(inputs) elif isinstance(inputs, str): inputs = [inputs] batch_size = 1 elif Dataset is not None and isinstance(inputs, Dataset) or isinstance(inputs, types.GeneratorType): return inputs, is_split_into_words, None, delimiter else: raise ValueError("At least one input is required.") offset_mapping = kwargs.get("offset_mapping") if offset_mapping: if isinstance(offset_mapping, list) and isinstance(offset_mapping[0], tuple): offset_mapping = [offset_mapping] if len(offset_mapping) != batch_size: raise ValueError("offset_mapping should have the same batch size as the input") return inputs, is_split_into_words, offset_mapping, delimiter class AggregationStrategy(ExplicitEnum): """All the valid aggregation strategies for TokenClassificationPipeline""" NONE = "none" SIMPLE = "simple" FIRST = "first" AVERAGE = "average" MAX = "max" @add_end_docstrings( build_pipeline_init_args(has_tokenizer=True), r""" ignore_labels (`list[str]`, defaults to `["O"]`): A list of labels to ignore. grouped_entities (`bool`, *optional*, defaults to `False`): DEPRECATED, use `aggregation_strategy` instead. Whether or not to group the tokens corresponding to the same entity together in the predictions or not. stride (`int`, *optional*): If stride is provided, the pipeline is applied on all the text. The text is split into chunks of size model_max_length. Works only with fast tokenizers and `aggregation_strategy` different from `NONE`. The value of this argument defines the number of overlapping tokens between chunks. In other words, the model will shift forward by `tokenizer.model_max_length - stride` tokens each step. aggregation_strategy (`str`, *optional*, defaults to `"none"`): The strategy to fuse (or not) tokens based on the model prediction. - "none" : Will simply not do any aggregation and simply return raw results from the model - "simple" : Will attempt to group entities following the default schema. (A, B-TAG), (B, I-TAG), (C, I-TAG), (D, B-TAG2) (E, B-TAG2) will end up being [{"word": ABC, "entity": "TAG"}, {"word": "D", "entity": "TAG2"}, {"word": "E", "entity": "TAG2"}] Notice that two consecutive B tags will end up as different entities. On word based languages, we might end up splitting words undesirably : Imagine Microsoft being tagged as [{"word": "Micro", "entity": "ENTERPRISE"}, {"word": "soft", "entity": "NAME"}]. Look for FIRST, MAX, AVERAGE for ways to mitigate that and disambiguate words (on languages that support that meaning, which is basically tokens separated by a space). 
These mitigations will only work on real words, "New york" might still be tagged with two different entities. - "first" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot end up with different tags. Words will simply use the tag of the first token of the word when there is ambiguity. - "average" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot end up with different tags. scores will be averaged first across tokens, and then the maximum label is applied. - "max" : (works only on word based models) Will use the `SIMPLE` strategy except that words, cannot end up with different tags. Word entity will simply be the token with the maximum score.""", ) class TokenClassificationPipeline(ChunkPipeline): """ Named Entity Recognition pipeline using any `ModelForTokenClassification`. See the [named entity recognition examples](../task_summary#named-entity-recognition) for more information. Example: ```python >>> from transformers import pipeline >>> token_classifier = pipeline(model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple") >>> sentence = "Je m'appelle jean-baptiste et je vis à montréal" >>> tokens = token_classifier(sentence) >>> tokens [{'entity_group': 'PER', 'score': 0.9931, 'word': 'jean-baptiste', 'start': 12, 'end': 26}, {'entity_group': 'LOC', 'score': 0.998, 'word': 'montréal', 'start': 38, 'end': 47}] >>> token = tokens[0] >>> # Start and end provide an easy way to highlight words in the original text. >>> sentence[token["start"] : token["end"]] ' jean-baptiste' >>> # Some models use the same idea to do part of speech. >>> syntaxer = pipeline(model="vblagoje/bert-english-uncased-finetuned-pos", aggregation_strategy="simple") >>> syntaxer("My name is Sarah and I live in London") [{'entity_group': 'PRON', 'score': 0.999, 'word': 'my', 'start': 0, 'end': 2}, {'entity_group': 'NOUN', 'score': 0.997, 'word': 'name', 'start': 3, 'end': 7}, {'entity_group': 'AUX', 'score': 0.994, 'word': 'is', 'start': 8, 'end': 10}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'sarah', 'start': 11, 'end': 16}, {'entity_group': 'CCONJ', 'score': 0.999, 'word': 'and', 'start': 17, 'end': 20}, {'entity_group': 'PRON', 'score': 0.999, 'word': 'i', 'start': 21, 'end': 22}, {'entity_group': 'VERB', 'score': 0.998, 'word': 'live', 'start': 23, 'end': 27}, {'entity_group': 'ADP', 'score': 0.999, 'word': 'in', 'start': 28, 'end': 30}, {'entity_group': 'PROPN', 'score': 0.999, 'word': 'london', 'start': 31, 'end': 37}] ``` Learn more about the basics of using a pipeline in the [pipeline tutorial](../pipeline_tutorial) This token recognition pipeline can currently be loaded from [`pipeline`] using the following task identifier: `"ner"` (for predicting the classes of tokens in a sequence: person, organisation, location or miscellaneous). The models that this pipeline can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on [huggingface.co/models](https://huggingface.co/models?filter=token-classification). 
""" default_input_names = "sequences" _load_processor = False _load_image_processor = False _load_feature_extractor = False _load_tokenizer = True def __init__(self, args_parser=TokenClassificationArgumentHandler(), *args, **kwargs): super().__init__(*args, **kwargs) self.check_model_type( TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES if self.framework == "tf" else MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES ) self._basic_tokenizer = BasicTokenizer(do_lower_case=False) self._args_parser = args_parser def _sanitize_parameters( self, ignore_labels=None, grouped_entities: Optional[bool] = None, ignore_subwords: Optional[bool] = None, aggregation_strategy: Optional[AggregationStrategy] = None, offset_mapping: Optional[list[tuple[int, int]]] = None, is_split_into_words: Optional[bool] = False, stride: Optional[int] = None, delimiter: Optional[str] = None, ): preprocess_params = {} preprocess_params["is_split_into_words"] = is_split_into_words if is_split_into_words: preprocess_params["delimiter"] = " " if delimiter is None else delimiter if offset_mapping is not None: preprocess_params["offset_mapping"] = offset_mapping postprocess_params = {} if grouped_entities is not None or ignore_subwords is not None: if grouped_entities and ignore_subwords: aggregation_strategy = AggregationStrategy.FIRST elif grouped_entities and not ignore_subwords: aggregation_strategy = AggregationStrategy.SIMPLE else: aggregation_strategy = AggregationStrategy.NONE if grouped_entities is not None: warnings.warn( "`grouped_entities` is deprecated and will be removed in version v5.0.0, defaulted to" f' `aggregation_strategy="{aggregation_strategy}"` instead.' ) if ignore_subwords is not None: warnings.warn( "`ignore_subwords` is deprecated and will be removed in version v5.0.0, defaulted to" f' `aggregation_strategy="{aggregation_strategy}"` instead.' ) if aggregation_strategy is not None: if isinstance(aggregation_strategy, str): aggregation_strategy = AggregationStrategy[aggregation_strategy.upper()] if ( aggregation_strategy in {AggregationStrategy.FIRST, AggregationStrategy.MAX, AggregationStrategy.AVERAGE} and not self.tokenizer.is_fast ): raise ValueError( "Slow tokenizers cannot handle subwords. Please set the `aggregation_strategy` option" ' to `"simple"` or use a fast tokenizer.' ) postprocess_params["aggregation_strategy"] = aggregation_strategy if ignore_labels is not None: postprocess_params["ignore_labels"] = ignore_labels if stride is not None: if stride >= self.tokenizer.model_max_length: raise ValueError( "`stride` must be less than `tokenizer.model_max_length` (or even lower if the tokenizer adds special tokens)" ) if aggregation_strategy == AggregationStrategy.NONE: raise ValueError( "`stride` was provided to process all the text but `aggregation_strategy=" f'"{aggregation_strategy}"`, please select another one instead.' ) else: if self.tokenizer.is_fast: tokenizer_params = { "return_overflowing_tokens": True, "padding": True, "stride": stride, } preprocess_params["tokenizer_params"] = tokenizer_params else: raise ValueError( "`stride` was provided to process all the text but you're using a slow tokenizer." " Please use a fast tokenizer." ) return preprocess_params, {}, postprocess_params @overload def __call__(self, inputs: str, **kwargs: Any) -> list[dict[str, str]]: ... @overload def __call__(self, inputs: list[str], **kwargs: Any) -> list[list[dict[str, str]]]: ... 
def __call__( self, inputs: Union[str, list[str]], **kwargs: Any ) -> Union[list[dict[str, str]], list[list[dict[str, str]]]]: """ Classify each token of the text(s) given as inputs. Args: inputs (`str` or `List[str]`): One or several texts (or one list of texts) for token classification. Can be pre-tokenized when `is_split_into_words=True`. Return: A list or a list of list of `dict`: Each result comes as a list of dictionaries (one for each token in the corresponding input, or each entity if this pipeline was instantiated with an aggregation_strategy) with the following keys: - **word** (`str`) -- The token/word classified. This is obtained by decoding the selected tokens. If you want to have the exact string in the original sentence, use `start` and `end`. - **score** (`float`) -- The corresponding probability for `entity`. - **entity** (`str`) -- The entity predicted for that token/word (it is named *entity_group* when *aggregation_strategy* is not `"none"`. - **index** (`int`, only present when `aggregation_strategy="none"`) -- The index of the corresponding token in the sentence. - **start** (`int`, *optional*) -- The index of the start of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer - **end** (`int`, *optional*) -- The index of the end of the corresponding entity in the sentence. Only exists if the offsets are available within the tokenizer """ _inputs, is_split_into_words, offset_mapping, delimiter = self._args_parser(inputs, **kwargs) kwargs["is_split_into_words"] = is_split_into_words kwargs["delimiter"] = delimiter if is_split_into_words and not all(isinstance(input, list) for input in inputs): return super().__call__([inputs], **kwargs) if offset_mapping: kwargs["offset_mapping"] = offset_mapping return super().__call__(inputs, **kwargs) def preprocess(self, sentence, offset_mapping=None, **preprocess_params): tokenizer_params = preprocess_params.pop("tokenizer_params", {}) truncation = self.tokenizer.model_max_length and self.tokenizer.model_max_length > 0 word_to_chars_map = None is_split_into_words = preprocess_params["is_split_into_words"] if is_split_into_words: delimiter = preprocess_params["delimiter"] if not isinstance(sentence, list): raise ValueError("When `is_split_into_words=True`, `sentence` must be a list of tokens.") words = sentence sentence = delimiter.join(words) # Recreate the sentence string for later display and slicing # This map will allows to convert back word => char indices word_to_chars_map = [] delimiter_len = len(delimiter) char_offset = 0 for word in words: word_to_chars_map.append((char_offset, char_offset + len(word))) char_offset += len(word) + delimiter_len # We use `words` as the actual input for the tokenizer text_to_tokenize = words tokenizer_params["is_split_into_words"] = True else: if not isinstance(sentence, str): raise ValueError("When `is_split_into_words=False`, `sentence` must be an untokenized string.") text_to_tokenize = sentence inputs = self.tokenizer( text_to_tokenize, return_tensors=self.framework, truncation=truncation, return_special_tokens_mask=True, return_offsets_mapping=self.tokenizer.is_fast, **tokenizer_params, ) if is_split_into_words and not self.tokenizer.is_fast: raise ValueError("is_split_into_words=True is only supported with fast tokenizers.") inputs.pop("overflow_to_sample_mapping", None) num_chunks = len(inputs["input_ids"]) for i in range(num_chunks): if self.framework == "tf": model_inputs = {k: tf.expand_dims(v[i], 0) for k, v in inputs.items()} else: 
model_inputs = {k: v[i].unsqueeze(0) for k, v in inputs.items()} if offset_mapping is not None: model_inputs["offset_mapping"] = offset_mapping model_inputs["sentence"] = sentence if i == 0 else None model_inputs["is_last"] = i == num_chunks - 1 if word_to_chars_map is not None: model_inputs["word_ids"] = inputs.word_ids(i) model_inputs["word_to_chars_map"] = word_to_chars_map yield model_inputs def _forward(self, model_inputs): # Forward special_tokens_mask = model_inputs.pop("special_tokens_mask") offset_mapping = model_inputs.pop("offset_mapping", None) sentence = model_inputs.pop("sentence") is_last = model_inputs.pop("is_last") word_ids = model_inputs.pop("word_ids", None) word_to_chars_map = model_inputs.pop("word_to_chars_map", None) if self.framework == "tf": logits = self.model(**model_inputs)[0] else: output = self.model(**model_inputs) logits = output["logits"] if isinstance(output, dict) else output[0] return { "logits": logits, "special_tokens_mask": special_tokens_mask, "offset_mapping": offset_mapping, "sentence": sentence, "is_last": is_last, "word_ids": word_ids, "word_to_chars_map": word_to_chars_map, **model_inputs, } def postprocess(self, all_outputs, aggregation_strategy=AggregationStrategy.NONE, ignore_labels=None): if ignore_labels is None: ignore_labels = ["O"] all_entities = [] # Get map from the first output, it's the same for all chunks word_to_chars_map = all_outputs[0].get("word_to_chars_map") for model_outputs in all_outputs: if self.framework == "pt" and model_outputs["logits"][0].dtype in (torch.bfloat16, torch.float16): logits = model_outputs["logits"][0].to(torch.float32).numpy() else: logits = model_outputs["logits"][0].numpy() sentence = all_outputs[0]["sentence"] input_ids = model_outputs["input_ids"][0] offset_mapping = ( model_outputs["offset_mapping"][0] if model_outputs["offset_mapping"] is not None else None ) special_tokens_mask = model_outputs["special_tokens_mask"][0].numpy() word_ids = model_outputs.get("word_ids") maxes = np.max(logits, axis=-1, keepdims=True) shifted_exp = np.exp(logits - maxes) scores = shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) if self.framework == "tf": input_ids = input_ids.numpy() offset_mapping = offset_mapping.numpy() if offset_mapping is not None else None pre_entities = self.gather_pre_entities( sentence, input_ids, scores, offset_mapping, special_tokens_mask, aggregation_strategy, word_ids=word_ids, word_to_chars_map=word_to_chars_map, ) grouped_entities = self.aggregate(pre_entities, aggregation_strategy) # Filter anything that is in self.ignore_labels entities = [ entity for entity in grouped_entities if entity.get("entity", None) not in ignore_labels and entity.get("entity_group", None) not in ignore_labels ] all_entities.extend(entities) num_chunks = len(all_outputs) if num_chunks > 1: all_entities = self.aggregate_overlapping_entities(all_entities) return all_entities def aggregate_overlapping_entities(self, entities): if len(entities) == 0: return entities entities = sorted(entities, key=lambda x: x["start"]) aggregated_entities = [] previous_entity = entities[0] for entity in entities: if previous_entity["start"] <= entity["start"] < previous_entity["end"]: current_length = entity["end"] - entity["start"] previous_length = previous_entity["end"] - previous_entity["start"] if ( current_length > previous_length or current_length == previous_length and entity["score"] > previous_entity["score"] ): previous_entity = entity else: aggregated_entities.append(previous_entity) previous_entity = entity 
aggregated_entities.append(previous_entity) return aggregated_entities def gather_pre_entities( self, sentence: str, input_ids: np.ndarray, scores: np.ndarray, offset_mapping: Optional[list[tuple[int, int]]], special_tokens_mask: np.ndarray, aggregation_strategy: AggregationStrategy, word_ids: Optional[list[Optional[int]]] = None, word_to_chars_map: Optional[list[tuple[int, int]]] = None, ) -> list[dict]: """Fuse various numpy arrays into dicts with all the information needed for aggregation""" pre_entities = [] for idx, token_scores in enumerate(scores): # Filter special_tokens if special_tokens_mask[idx]: continue word = self.tokenizer.convert_ids_to_tokens(int(input_ids[idx])) if offset_mapping is not None: start_ind, end_ind = offset_mapping[idx] # If the input is pre-tokenized, we need to rescale the offsets to the absolute sentence. if word_ids is not None and word_to_chars_map is not None: word_index = word_ids[idx] if word_index is not None: start_char, _ = word_to_chars_map[word_index] start_ind += start_char end_ind += start_char if not isinstance(start_ind, int): if self.framework == "pt": start_ind = start_ind.item() end_ind = end_ind.item() word_ref = sentence[start_ind:end_ind] if getattr(self.tokenizer, "_tokenizer", None) and getattr( self.tokenizer._tokenizer.model, "continuing_subword_prefix", None ): # This is a BPE, word aware tokenizer, there is a correct way # to fuse tokens is_subword = len(word) != len(word_ref) else: # This is a fallback heuristic. This will fail most likely on any kind of text + punctuation mixtures that will be considered "words". Non word aware models cannot do better than this unfortunately. if aggregation_strategy in { AggregationStrategy.FIRST, AggregationStrategy.AVERAGE, AggregationStrategy.MAX, }: warnings.warn( "Tokenizer does not support real words, using fallback heuristic", UserWarning, ) is_subword = start_ind > 0 and " " not in sentence[start_ind - 1 : start_ind + 1] if int(input_ids[idx]) == self.tokenizer.unk_token_id: word = word_ref is_subword = False else: start_ind = None end_ind = None is_subword = False pre_entity = { "word": word, "scores": token_scores, "start": start_ind, "end": end_ind, "index": idx, "is_subword": is_subword, } pre_entities.append(pre_entity) return pre_entities def aggregate(self, pre_entities: list[dict], aggregation_strategy: AggregationStrategy) -> list[dict]: if aggregation_strategy in {AggregationStrategy.NONE, AggregationStrategy.SIMPLE}: entities = [] for pre_entity in pre_entities: entity_idx = pre_entity["scores"].argmax() score = pre_entity["scores"][entity_idx] entity = { "entity": self.model.config.id2label[entity_idx], "score": score, "index": pre_entity["index"], "word": pre_entity["word"], "start": pre_entity["start"], "end": pre_entity["end"], } entities.append(entity) else: entities = self.aggregate_words(pre_entities, aggregation_strategy) if aggregation_strategy == AggregationStrategy.NONE: return entities return self.group_entities(entities) def aggregate_word(self, entities: list[dict], aggregation_strategy: AggregationStrategy) -> dict: word = self.tokenizer.convert_tokens_to_string([entity["word"] for entity in entities]) if aggregation_strategy == AggregationStrategy.FIRST: scores = entities[0]["scores"] idx = scores.argmax() score = scores[idx] entity = self.model.config.id2label[idx] elif aggregation_strategy == AggregationStrategy.MAX: max_entity = max(entities, key=lambda entity: entity["scores"].max()) scores = max_entity["scores"] idx = scores.argmax() score = scores[idx] 
entity = self.model.config.id2label[idx] elif aggregation_strategy == AggregationStrategy.AVERAGE: scores = np.stack([entity["scores"] for entity in entities]) average_scores = np.nanmean(scores, axis=0) entity_idx = average_scores.argmax() entity = self.model.config.id2label[entity_idx] score = average_scores[entity_idx] else: raise ValueError("Invalid aggregation_strategy") new_entity = { "entity": entity, "score": score, "word": word, "start": entities[0]["start"], "end": entities[-1]["end"], } return new_entity def aggregate_words(self, entities: list[dict], aggregation_strategy: AggregationStrategy) -> list[dict]: """ Override tokens from a given word that disagree to force agreement on word boundaries. Example: micro|soft| com|pany| B-ENT I-NAME I-ENT I-ENT will be rewritten with first strategy as microsoft| company| B-ENT I-ENT """ if aggregation_strategy in { AggregationStrategy.NONE, AggregationStrategy.SIMPLE, }: raise ValueError("NONE and SIMPLE strategies are invalid for word aggregation") word_entities = [] word_group = None for entity in entities: if word_group is None: word_group = [entity] elif entity["is_subword"]: word_group.append(entity) else: word_entities.append(self.aggregate_word(word_group, aggregation_strategy)) word_group = [entity] # Last item if word_group is not None: word_entities.append(self.aggregate_word(word_group, aggregation_strategy)) return word_entities def group_sub_entities(self, entities: list[dict]) -> dict: """ Group together the adjacent tokens with the same entity predicted. Args: entities (`dict`): The entities predicted by the pipeline. """ # Get the first entity in the entity group entity = entities[0]["entity"].split("-", 1)[-1] scores = np.nanmean([entity["score"] for entity in entities]) tokens = [entity["word"] for entity in entities] entity_group = { "entity_group": entity, "score": np.mean(scores), "word": self.tokenizer.convert_tokens_to_string(tokens), "start": entities[0]["start"], "end": entities[-1]["end"], } return entity_group def get_tag(self, entity_name: str) -> tuple[str, str]: if entity_name.startswith("B-"): bi = "B" tag = entity_name[2:] elif entity_name.startswith("I-"): bi = "I" tag = entity_name[2:] else: # It's not in B-, I- format # Default to I- for continuation. bi = "I" tag = entity_name return bi, tag def group_entities(self, entities: list[dict]) -> list[dict]: """ Find and group together the adjacent tokens with the same entity predicted. Args: entities (`dict`): The entities predicted by the pipeline. """ entity_groups = [] entity_group_disagg = [] for entity in entities: if not entity_group_disagg: entity_group_disagg.append(entity) continue # If the current entity is similar and adjacent to the previous entity, # append it to the disaggregated entity group # The split is meant to account for the "B" and "I" prefixes # Shouldn't merge if both entities are B-type bi, tag = self.get_tag(entity["entity"]) last_bi, last_tag = self.get_tag(entity_group_disagg[-1]["entity"]) if tag == last_tag and bi != "B": # Modify subword type to be previous_type entity_group_disagg.append(entity) else: # If the current entity is different from the previous entity # aggregate the disaggregated entity group entity_groups.append(self.group_sub_entities(entity_group_disagg)) entity_group_disagg = [entity] if entity_group_disagg: # it's the last entity, add it to the entity groups entity_groups.append(self.group_sub_entities(entity_group_disagg)) return entity_groups NerPipeline = TokenClassificationPipeline
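A short sketch of how the arguments handled above fit together, in the spirit of the docstring example; the checkpoint name is an assumption, and the second call exercises the pre-tokenized path implemented in `__call__`/`preprocess`.

```python
from transformers import pipeline

# "dslim/bert-base-NER" is only an illustrative NER checkpoint with a fast tokenizer.
ner = pipeline("token-classification", model="dslim/bert-base-NER", aggregation_strategy="first")

# Untokenized text: offsets come from the fast tokenizer and tokens are fused per word/tag.
print(ner("Hugging Face is based in New York City"))

# Pre-tokenized input: __call__ wraps the flat word list as a single sentence, preprocess
# joins the words with `delimiter` (a space by default), and word_to_chars_map rescales the
# returned start/end offsets to that joined sentence.
print(ner(["Hugging", "Face", "is", "based", "in", "New", "York", "City"], is_split_into_words=True))
```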
transformers/src/transformers/pipelines/token_classification.py/0
{ "file_path": "transformers/src/transformers/pipelines/token_classification.py", "repo_id": "transformers", "token_count": 13824 }
561
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import TYPE_CHECKING, Optional, Union from .base import HfQuantizer if TYPE_CHECKING: from ..modeling_utils import PreTrainedModel from ..utils import is_accelerate_available, is_torch_available, logging if is_torch_available(): import torch logger = logging.get_logger(__name__) class BitNetHfQuantizer(HfQuantizer): """ 1.58-bit quantization from BitNet quantization method: Before loading: it converts the linear layers into BitLinear layers during loading. Check out the paper introducing this method: https://huggingface.co/papers/2402.17764 """ requires_parameters_quantization = False requires_calibration = True required_packages = ["accelerate"] def __init__(self, quantization_config, **kwargs): super().__init__(quantization_config, **kwargs) self.quantization_config = quantization_config def validate_environment(self, *args, **kwargs): if not is_accelerate_available(): raise ImportError("Loading a BitNet quantized model requires accelerate (`pip install accelerate`)") if kwargs.get("from_tf", False) or kwargs.get("from_flax", False): raise ValueError( "Loading ternary weights from tf/flax is currently not supported, please make" " sure the weights are in PyTorch format." ) if not torch.cuda.is_available(): logger.warning_once( "You don't have a GPU available to load the model, the inference will be slow because of weight unpacking" ) return device_map = kwargs.get("device_map") if device_map is None: logger.warning_once( "You have loaded a BitNet model on CPU and have a CUDA device available, make sure to set " "your model on a GPU device in order to run your model." ) elif device_map is not None: if isinstance(device_map, dict) and ("cpu" in device_map.values() or "disk" in device_map.values()): raise ValueError( "You are attempting to load a BitNet model with a device_map that contains a CPU or disk device." "This is not supported. Please remove the CPU or disk device from the device_map." 
) def _process_model_after_weight_loading(self, model: "PreTrainedModel", **kwargs): return model def _process_model_before_weight_loading( self, model: "PreTrainedModel", keep_in_fp32_modules: Optional[list[str]] = None, **kwargs, ): from ..integrations import replace_with_bitnet_linear self.modules_to_not_convert = self.get_modules_to_not_convert( model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules ) model = replace_with_bitnet_linear( model, modules_to_not_convert=self.modules_to_not_convert, quantization_config=self.quantization_config, pre_quantized=self.pre_quantized, ) def adjust_max_memory(self, max_memory: dict[str, Union[int, str]]) -> dict[str, Union[int, str]]: max_memory = {key: val * 0.90 for key, val in max_memory.items()} return max_memory def adjust_target_dtype(self, target_dtype: "torch.dtype") -> "torch.dtype": target_dtype = torch.int8 return target_dtype def is_serializable(self, safe_serialization=None): return True @property def is_trainable(self) -> bool: return ( self.quantization_config.linear_class == "autobitlinear" and self.quantization_config.quantization_mode == "online" ) @property def is_qat_trainable(self) -> bool: """Flag indicating whether the quantized model can carry out quantization aware training""" return ( self.quantization_config.linear_class == "autobitlinear" and self.quantization_config.quantization_mode == "online" )
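Since `requires_calibration = True`, this quantizer only loads checkpoints that were quantized ahead of time; a rough loading sketch follows, where the model id is a placeholder for any repository whose config carries a BitNet `quantization_config`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: any checkpoint whose config.json declares a BitNet quantization_config.
model_id = "some-org/some-bitnet-1.58bit-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The stored quantization config selects BitNetHfQuantizer automatically; linear layers are
# replaced by BitLinear before the ternary weights are loaded (a GPU is recommended, see
# validate_environment above).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0], skip_special_tokens=True))
```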
transformers/src/transformers/quantizers/quantizer_bitnet.py/0
{ "file_path": "transformers/src/transformers/quantizers/quantizer_bitnet.py", "repo_id": "transformers", "token_count": 1809 }
562
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import TYPE_CHECKING, Optional from .base import HfQuantizer if TYPE_CHECKING: from ..modeling_utils import PreTrainedModel from ..utils import is_accelerate_available, is_torch_available, is_vptq_available, logging from ..utils.quantization_config import QuantizationConfigMixin if is_torch_available(): import torch logger = logging.get_logger(__name__) class VptqHfQuantizer(HfQuantizer): """ Quantizer of the VPTQ method. Enables the loading of prequantized models. """ requires_calibration = True required_packages = ["vptq"] def __init__(self, quantization_config: QuantizationConfigMixin, **kwargs): super().__init__(quantization_config, **kwargs) self.quantization_config = quantization_config def validate_environment(self, *args, **kwargs): if not is_accelerate_available(): raise ImportError("Using `vptq` quantization requires Accelerate: `pip install accelerate`") if not is_vptq_available(): raise ImportError("Using `vptq` quantization requires VPTQ>=0.0.4: `pip install -U vptq`") def update_dtype(self, dtype: "torch.dtype") -> "torch.dtype": if dtype is None: if torch.cuda.is_available(): dtype = torch.float16 logger.info( "CUDA available. Assuming VPTQ inference on GPU and loading the model in `torch.float16`. To overwrite it, set `dtype` manually." ) else: import vptq device_availability = getattr(vptq, "device_availability", lambda device: False) if device_availability("cpu") is True: raise RuntimeError("No GPU found. Please wait for the next release of VPTQ to use CPU inference") dtype = torch.float32 logger.info("No GPU found. Assuming VPTQ inference on CPU and loading the model in `torch.float32`.") return dtype def _process_model_before_weight_loading( self, model: "PreTrainedModel", keep_in_fp32_modules: Optional[list[str]] = None, **kwargs, ): """ we don't have param like modules_to_not_convert to indicate which layers should not be quantized because `quantization_config` include the layers that should be quantized """ from ..integrations import replace_with_vptq_linear self.modules_to_not_convert = self.get_modules_to_not_convert( model, self.quantization_config.modules_to_not_convert, keep_in_fp32_modules ) replace_with_vptq_linear( model, quantization_config=self.quantization_config, modules_to_not_convert=self.modules_to_not_convert, ) model.config.quantization_config = self.quantization_config def _process_model_after_weight_loading(self, model: "PreTrainedModel", **kwargs): return model @property def is_trainable(self) -> bool: return False def is_serializable(self, safe_serialization=None): return True
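As with the BitNet quantizer, VPTQ checkpoints are prequantized, so usage again amounts to loading one; the model id below is a placeholder, and `update_dtype` above already falls back to `torch.float16` on CUDA when no dtype is passed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: any repository saved with a VPTQ quantization_config (needs `pip install vptq`).
model_id = "some-org/some-model-vptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# replace_with_vptq_linear swaps the layers listed in the quantization_config; anything in
# modules_to_not_convert stays in full precision.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Vector post-training quantization compresses", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```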
transformers/src/transformers/quantizers/quantizer_vptq.py/0
{ "file_path": "transformers/src/transformers/quantizers/quantizer_vptq.py", "repo_id": "transformers", "token_count": 1432 }
563
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import contextlib from copy import deepcopy from pathlib import Path from typing import TYPE_CHECKING, Any, Callable, Optional, Union import torch from torch import nn from torch.distributed.fsdp import FullyShardedDataParallel from torch.utils.data import Dataset from .generation.configuration_utils import GenerationConfig from .integrations.deepspeed import is_deepspeed_zero3_enabled from .integrations.fsdp import is_fsdp_managed_module from .trainer import Trainer from .utils import is_datasets_available, logging from .utils.deprecation import deprecate_kwarg if is_datasets_available(): import datasets if TYPE_CHECKING: from torch.utils.data import IterableDataset from .data.data_collator import DataCollator from .feature_extraction_utils import FeatureExtractionMixin from .image_processing_utils import BaseImageProcessor from .modeling_utils import PreTrainedModel from .processing_utils import ProcessorMixin from .tokenization_utils_base import PreTrainedTokenizerBase from .trainer_callback import TrainerCallback from .trainer_utils import EvalPrediction, PredictionOutput from .training_args import TrainingArguments logger = logging.get_logger(__name__) class Seq2SeqTrainer(Trainer): @deprecate_kwarg("tokenizer", new_name="processing_class", version="5.0.0", raise_if_both_names=True) def __init__( self, model: Union["PreTrainedModel", nn.Module] = None, args: "TrainingArguments" = None, data_collator: Optional["DataCollator"] = None, train_dataset: Optional[Union[Dataset, "IterableDataset", "datasets.Dataset"]] = None, eval_dataset: Optional[Union[Dataset, dict[str, Dataset]]] = None, processing_class: Optional[ Union["PreTrainedTokenizerBase", "BaseImageProcessor", "FeatureExtractionMixin", "ProcessorMixin"] ] = None, model_init: Optional[Callable[[], "PreTrainedModel"]] = None, compute_loss_func: Optional[Callable] = None, compute_metrics: Optional[Callable[["EvalPrediction"], dict]] = None, callbacks: Optional[list["TrainerCallback"]] = None, optimizers: tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None, ): super().__init__( model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, processing_class=processing_class, model_init=model_init, compute_loss_func=compute_loss_func, compute_metrics=compute_metrics, callbacks=callbacks, optimizers=optimizers, preprocess_logits_for_metrics=preprocess_logits_for_metrics, ) # Override self.model.generation_config if a GenerationConfig is specified in args. # Priority: args.generation_config > model.generation_config > default GenerationConfig. 
if self.args.generation_config is not None: gen_config = self.load_generation_config(self.args.generation_config) self.model.generation_config = gen_config @staticmethod def load_generation_config(gen_config_arg: Union[str, GenerationConfig]) -> GenerationConfig: """ Loads a `~generation.GenerationConfig` from the `Seq2SeqTrainingArguments.generation_config` arguments. Args: gen_config_arg (`str` or [`~generation.GenerationConfig]`): `Seq2SeqTrainingArguments.generation_config` argument. Returns: A `~generation.GenerationConfig`. """ # GenerationConfig provided, nothing to do if isinstance(gen_config_arg, GenerationConfig): gen_config = deepcopy(gen_config_arg) else: # str or Path pretrained_model_name = Path(gen_config_arg) if isinstance(gen_config_arg, str) else gen_config_arg config_file_name = None # Figuring if it is path pointing to a file, pointing to a directory or else a model id or URL # This step is required in order to determine config_file_name if pretrained_model_name.is_file(): config_file_name = pretrained_model_name.name pretrained_model_name = pretrained_model_name.parent # dir path elif pretrained_model_name.is_dir(): pass # model id or URL else: pretrained_model_name = gen_config_arg gen_config = GenerationConfig.from_pretrained(pretrained_model_name, config_file_name) # Strict validation to fail early. `GenerationConfig.save_pretrained()`, run at the end of training, throws # an exception if there are warnings at validation time. try: gen_config.validate(strict=True) except ValueError as exc: raise ValueError(str(exc) + "\n\nFix these issues to train your model.") return gen_config def evaluate( self, eval_dataset: Optional[Dataset] = None, ignore_keys: Optional[list[str]] = None, metric_key_prefix: str = "eval", **gen_kwargs, ) -> dict[str, float]: """ Run evaluation and returns metrics. The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init `compute_metrics` argument). You can also subclass and override this method to inject custom behavior. Args: eval_dataset (`Dataset`, *optional*): Pass a dataset if you wish to override `self.eval_dataset`. If it is an [`~datasets.Dataset`], columns not accepted by the `model.forward()` method are automatically removed. It must implement the `__len__` method. ignore_keys (`list[str]`, *optional*): A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. metric_key_prefix (`str`, *optional*, defaults to `"eval"`): An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named "eval_bleu" if the prefix is `"eval"` (default) max_length (`int`, *optional*): The maximum target length to use when predicting with the generate method. num_beams (`int`, *optional*): Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search. gen_kwargs: Additional `generate` specific kwargs. Returns: A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The dictionary also contains the epoch number which comes from the training state. 
""" gen_kwargs = gen_kwargs.copy() # Use legacy argument setting if a) the option is not explicitly passed; and b) the argument is set in the # training args if ( gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None and self.args.generation_max_length is not None ): gen_kwargs["max_length"] = self.args.generation_max_length if gen_kwargs.get("num_beams") is None and self.args.generation_num_beams is not None: gen_kwargs["num_beams"] = self.args.generation_num_beams # We don't want to drop samples in general self.gather_function = self.accelerator.gather self._gen_kwargs = gen_kwargs return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) def predict( self, test_dataset: Dataset, ignore_keys: Optional[list[str]] = None, metric_key_prefix: str = "test", **gen_kwargs, ) -> "PredictionOutput": """ Run prediction and returns predictions and potential metrics. Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method will also return metrics, like in `evaluate()`. Args: test_dataset (`Dataset`): Dataset to run the predictions on. If it is a [`~datasets.Dataset`], columns not accepted by the `model.forward()` method are automatically removed. Has to implement the method `__len__` ignore_keys (`list[str]`, *optional*): A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions. metric_key_prefix (`str`, *optional*, defaults to `"eval"`): An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named "eval_bleu" if the prefix is `"eval"` (default) max_length (`int`, *optional*): The maximum target length to use when predicting with the generate method. num_beams (`int`, *optional*): Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search. gen_kwargs: Additional `generate` specific kwargs. <Tip> If your predictions or labels have different sequence lengths (for instance because you're doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100. </Tip> Returns: *NamedTuple* A namedtuple with the following keys: - predictions (`np.ndarray`): The predictions on `test_dataset`. - label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some). - metrics (`dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained labels). 
""" gen_kwargs = gen_kwargs.copy() # Use legacy argument setting if a) the option is not explicitly passed; and b) the argument is set in the # training args if ( gen_kwargs.get("max_length") is None and gen_kwargs.get("max_new_tokens") is None and self.args.generation_max_length is not None ): gen_kwargs["max_length"] = self.args.generation_max_length if gen_kwargs.get("num_beams") is None and self.args.generation_num_beams is not None: gen_kwargs["num_beams"] = self.args.generation_num_beams self.gather_function = self.accelerator.gather self._gen_kwargs = gen_kwargs return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix) def prediction_step( self, model: nn.Module, inputs: dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool, ignore_keys: Optional[list[str]] = None, **gen_kwargs, ) -> tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: """ Perform an evaluation step on `model` using `inputs`. Subclass and override to inject custom behavior. Args: model (`nn.Module`): The model to evaluate. inputs (`dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument `labels`. Check your model's documentation for all accepted arguments. prediction_loss_only (`bool`): Whether or not to return the loss only. gen_kwargs: Additional `generate` specific kwargs. Return: tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss, logits and labels (each being optional). """ if not self.args.predict_with_generate or prediction_loss_only: return super().prediction_step( model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys ) has_labels = "labels" in inputs inputs = self._prepare_inputs(inputs) # Priority (handled in generate): # non-`None` gen_kwargs > model.generation_config > default GenerationConfig() if len(gen_kwargs) == 0 and hasattr(self, "_gen_kwargs"): gen_kwargs = self._gen_kwargs.copy() if "num_beams" in gen_kwargs and gen_kwargs["num_beams"] is None: gen_kwargs.pop("num_beams") if "max_length" in gen_kwargs and gen_kwargs["max_length"] is None: gen_kwargs.pop("max_length") default_synced_gpus = is_deepspeed_zero3_enabled() or is_fsdp_managed_module(self.model) gen_kwargs["synced_gpus"] = gen_kwargs.get("synced_gpus", default_synced_gpus) generation_inputs = inputs.copy() # If the `decoder_input_ids` was created from `labels`, evict the former, so that the model can freely generate # (otherwise, it would continue generating from the padded `decoder_input_ids`) if ( "labels" in generation_inputs and "decoder_input_ids" in generation_inputs and generation_inputs["labels"].shape == generation_inputs["decoder_input_ids"].shape ): generation_inputs = { k: v for k, v in inputs.items() if k not in ("decoder_input_ids", "decoder_attention_mask") } summon_full_params_context = ( FullyShardedDataParallel.summon_full_params(self.model) if isinstance(self.model, FullyShardedDataParallel) else contextlib.nullcontext() ) with summon_full_params_context: generated_tokens = self.model.generate(**generation_inputs, **gen_kwargs) # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop # TODO: remove this hack when the legacy code that initializes generation_config from a model config is # removed in 
https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183 if self.model.generation_config._from_model_config: self.model.generation_config._from_model_config = False # Retrieves GenerationConfig from model.generation_config gen_config = self.model.generation_config # in case the batch is shorter than max length, the output should be padded if generated_tokens.shape[-1] < gen_config.max_length: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_length) elif gen_config.max_new_tokens is not None and generated_tokens.shape[-1] < gen_config.max_new_tokens + 1: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_new_tokens + 1) with torch.no_grad(): if has_labels: with self.compute_loss_context_manager(): outputs = model(**inputs) if self.label_smoother is not None: loss = self.label_smoother(outputs, inputs["labels"]).detach().mean() else: loss = (outputs["loss"] if isinstance(outputs, dict) else outputs[0]).detach().mean() else: loss = None if self.args.prediction_loss_only: return loss, None, None if has_labels: labels = inputs["labels"] if labels.shape[-1] < gen_config.max_length: labels = self._pad_tensors_to_max_len(labels, gen_config.max_length) elif gen_config.max_new_tokens is not None and labels.shape[-1] < gen_config.max_new_tokens + 1: labels = self._pad_tensors_to_max_len(labels, gen_config.max_new_tokens + 1) else: labels = None return loss, generated_tokens, labels def _pad_tensors_to_max_len(self, tensor, max_length): if self.processing_class is not None and hasattr(self.processing_class, "pad_token_id"): # If PAD token is not defined at least EOS token has to be defined pad_token_id = ( self.processing_class.pad_token_id if self.processing_class.pad_token_id is not None else self.processing_class.eos_token_id ) else: if self.model.config.pad_token_id is not None: pad_token_id = self.model.config.pad_token_id else: raise ValueError("Pad_token_id must be set in the configuration of the model, in order to pad tensors") padded_tensor = pad_token_id * torch.ones( (tensor.shape[0], max_length), dtype=tensor.dtype, device=tensor.device ) padded_tensor[:, : tensor.shape[-1]] = tensor return padded_tensor
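A condensed sketch of driving the generation-aware `evaluate`/`predict` overrides above; `train_ds`/`eval_ds` stand in for already tokenized datasets with `labels`, and the generation settings come from `Seq2SeqTrainingArguments`.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google-t5/t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

args = Seq2SeqTrainingArguments(
    output_dir="seq2seq-out",
    predict_with_generate=True,   # routes evaluation through the generate branch of prediction_step
    generation_max_length=64,     # legacy fallback used when max_length/max_new_tokens are not passed
    generation_num_beams=4,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,   # assumed: pre-tokenized dataset with input_ids/labels
    eval_dataset=eval_ds,     # assumed: same preprocessing as train_ds
    processing_class=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)

# gen_kwargs passed here take priority over the legacy generation_* arguments above.
metrics = trainer.evaluate(max_new_tokens=32, num_beams=1)
```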
transformers/src/transformers/trainer_seq2seq.py/0
{ "file_path": "transformers/src/transformers/trainer_seq2seq.py", "repo_id": "transformers", "token_count": 7385 }
564
# This file is autogenerated by the command `make fix-copies`, do not edit. from ..utils import DummyObject, requires_backends class FlaxForcedBOSTokenLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxForcedEOSTokenLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxForceTokensLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxGenerationMixin(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxLogitsProcessorList(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxLogitsWarper(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxMinLengthLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxSuppressTokensAtBeginLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxSuppressTokensLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxTemperatureLogitsWarper(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxTopKLogitsWarper(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxTopPLogitsWarper(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxWhisperTimeStampLogitsProcessor(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"]) class FlaxPreTrainedModel(metaclass=DummyObject): _backends = ["flax"] def __init__(self, *args, **kwargs): requires_backends(self, ["flax"])
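These dummies keep top-level imports of Flax symbols working when Flax is absent; the failure is deferred to instantiation. A small sketch of that behaviour, assuming Flax is not installed in the environment:

```python
from transformers.utils.dummy_flax_objects import FlaxLogitsProcessorList

try:
    FlaxLogitsProcessorList()  # requires_backends raises here, not at import time
except ImportError as err:
    print(err)  # the message points at the missing flax backend
```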
transformers/src/transformers/utils/dummy_flax_objects.py/0
{ "file_path": "transformers/src/transformers/utils/dummy_flax_objects.py", "repo_id": "transformers", "token_count": 1090 }
565
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import copy import re class TrialShortNamer: PREFIX = "hp" DEFAULTS = {} NAMING_INFO = None @classmethod def set_defaults(cls, prefix, defaults): cls.PREFIX = prefix cls.DEFAULTS = defaults cls.build_naming_info() @staticmethod def shortname_for_word(info, word): if len(word) == 0: return "" short_word = None if any(char.isdigit() for char in word): raise Exception(f"Parameters should not contain numbers: '{word}' contains a number") if word in info["short_word"]: return info["short_word"][word] for prefix_len in range(1, len(word) + 1): prefix = word[:prefix_len] if prefix in info["reverse_short_word"]: continue else: short_word = prefix break if short_word is None: # Paranoid fallback def int_to_alphabetic(integer): s = "" while integer != 0: s = chr(ord("A") + integer % 10) + s integer //= 10 return s i = 0 while True: sword = word + "#" + int_to_alphabetic(i) if sword in info["reverse_short_word"]: continue else: short_word = sword break info["short_word"][word] = short_word info["reverse_short_word"][short_word] = word return short_word @staticmethod def shortname_for_key(info, param_name): words = param_name.split("_") shortname_parts = [TrialShortNamer.shortname_for_word(info, word) for word in words] # We try to create a separatorless short name, but if there is a collision we have to fallback # to a separated short name separators = ["", "_"] for separator in separators: shortname = separator.join(shortname_parts) if shortname not in info["reverse_short_param"]: info["short_param"][param_name] = shortname info["reverse_short_param"][shortname] = param_name return shortname return param_name @staticmethod def add_new_param_name(info, param_name): short_name = TrialShortNamer.shortname_for_key(info, param_name) info["short_param"][param_name] = short_name info["reverse_short_param"][short_name] = param_name @classmethod def build_naming_info(cls): if cls.NAMING_INFO is not None: return info = { "short_word": {}, "reverse_short_word": {}, "short_param": {}, "reverse_short_param": {}, } field_keys = list(cls.DEFAULTS.keys()) for k in field_keys: cls.add_new_param_name(info, k) cls.NAMING_INFO = info @classmethod def shortname(cls, params): cls.build_naming_info() assert cls.PREFIX is not None name = [copy.copy(cls.PREFIX)] for k, v in params.items(): if k not in cls.DEFAULTS: raise Exception(f"You should provide a default value for the param name {k} with value {v}") if v == cls.DEFAULTS[k]: # The default value is not added to the name continue key = cls.NAMING_INFO["short_param"][k] if isinstance(v, bool): v = 1 if v else 0 sep = "" if isinstance(v, (int, float)) else "-" e = f"{key}{sep}{v}" name.append(e) return "_".join(name) @classmethod def parse_repr(cls, repr): repr = repr[len(cls.PREFIX) + 1 :] if repr == "": values = [] else: values = repr.split("_") parameters = {} for value in values: if "-" in value: p_k, p_v = value.split("-") else: p_k = re.sub("[0-9.]", "", value) p_v = 
float(re.sub("[^0-9.]", "", value)) key = cls.NAMING_INFO["reverse_short_param"][p_k] parameters[key] = p_v for k in cls.DEFAULTS: if k not in parameters: parameters[k] = cls.DEFAULTS[k] return parameters
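A short round-trip sketch of the namer above; the prefix and defaults are arbitrary illustrative values (note that `shortname_for_word` rejects parameter names containing digits).

```python
from transformers.utils.hp_naming import TrialShortNamer

class RunNamer(TrialShortNamer):
    PREFIX = "run"
    DEFAULTS = {"learning_rate": 3e-5, "weight_decay": 0.0, "freeze_encoder": False}

# Values equal to their default are dropped from the name; bools become 0/1.
name = RunNamer.shortname({"learning_rate": 1e-4, "weight_decay": 0.0, "freeze_encoder": True})
print(name)  # "run_lr0.0001_fe1" with the short keys generated from these defaults

# parse_repr inverts the encoding and fills omitted keys back in from DEFAULTS.
print(RunNamer.parse_repr(name))  # {'learning_rate': 0.0001, 'freeze_encoder': 1.0, 'weight_decay': 0.0}
```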
transformers/src/transformers/utils/hp_naming.py/0
{ "file_path": "transformers/src/transformers/utils/hp_naming.py", "repo_id": "transformers", "token_count": 2393 }
566
{ "modelname": "BrandNewBERT", "uppercase_modelname": "BRAND_NEW_BERT", "lowercase_modelname": "brand_new_bert", "camelcase_modelname": "BrandNewBert", "has_slow_class": ["True", "False"], "has_fast_class": ["True", "False"], "slow_tokenizer_use_sentencepiece": ["True", "False"], "authors": "The HuggingFace Team" }
transformers/templates/adding_a_missing_tokenization_test/cookiecutter.json/0
{ "file_path": "transformers/templates/adding_a_missing_tokenization_test/cookiecutter.json", "repo_id": "transformers", "token_count": 129 }
567
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import dataclasses import itertools import json import os import unittest from copy import deepcopy from functools import partial from typing import Optional import datasets from parameterized import parameterized import tests.trainer.test_trainer import transformers from tests.trainer.test_trainer import TrainerIntegrationCommon # noqa from transformers import AutoModel, TrainingArguments, is_torch_available, logging from transformers.integrations.deepspeed import ( HfDeepSpeedConfig, is_deepspeed_available, unset_hf_deepspeed_config, ) from transformers.testing_utils import ( CaptureLogger, CaptureStd, CaptureStderr, LoggingLevel, TestCasePlus, backend_device_count, execute_subprocess_async, mockenv_context, require_deepspeed, require_optuna, require_torch_accelerator, require_torch_fp16, require_torch_multi_accelerator, run_first, slow, torch_device, ) from transformers.trainer_utils import get_last_checkpoint, set_seed from transformers.utils import SAFE_WEIGHTS_NAME, is_torch_bf16_available_on_device, is_torch_fp16_available_on_device if is_torch_available(): import torch from tests.trainer.test_trainer import ( # noqa RegressionModelConfig, RegressionPreTrainedModel, ) # hack to restore original logging level pre #21700 get_regression_trainer = partial(tests.trainer.test_trainer.get_regression_trainer, log_level="info") set_seed(42) # default torch.distributed port DEFAULT_MASTER_PORT = "10999" T5_SMALL = "google-t5/t5-small" T5_TINY = "patrickvonplaten/t5-tiny-random" GPT2_TINY = "sshleifer/tiny-gpt2" GPTJ_TINY = "hf-internal-testing/tiny-random-gptj" def load_json(path): with open(path) as f: return json.load(f) def get_master_port(real_launcher=False): """ When using a single gpu launcher emulation (i.e. not deepspeed or python -m torch.distributed) the issue is that once the port is tied it can't be used anywhere else outside of this process, since torch.dist doesn't free the port until the process exits. Therefore for the sake of being able to run both emulated launcher and normal launcher tests we need 2 distinct ports. This function will give the right port in the right context. For real launcher it'll give the base port, for emulated launcher it'll give the base port + 1. In both cases a string is returned. 
Args: `real_launcher`: whether a real launcher is going to be used, or the emulated one """ master_port_base = os.environ.get("DS_TEST_PORT", DEFAULT_MASTER_PORT) if not real_launcher: master_port_base = str(int(master_port_base) + 1) return master_port_base def require_deepspeed_aio(test_case): """ Decorator marking a test that requires deepspeed aio (nvme) """ if not is_deepspeed_available(): return unittest.skip(reason="test requires deepspeed")(test_case) import deepspeed from deepspeed.ops.aio import AsyncIOBuilder if not deepspeed.ops.__compatible_ops__[AsyncIOBuilder.NAME]: return unittest.skip(reason="test requires deepspeed async-io")(test_case) else: return test_case if is_deepspeed_available(): from deepspeed.utils import logger as deepspeed_logger # noqa from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint from transformers.integrations.deepspeed import deepspeed_config, is_deepspeed_zero3_enabled # noqa def get_launcher(distributed=False): # 1. explicitly set --num_nodes=1 just in case these tests end up run on a multi-node setup # - it won't be able to handle that # 2. for now testing with just 2 gpus max (since some quality tests may give different # results with more gpus because we use very little data) num_gpus = min(2, backend_device_count(torch_device)) if distributed else 1 master_port = get_master_port(real_launcher=True) return f"deepspeed --num_nodes 1 --num_gpus {num_gpus} --master_port {master_port}".split() ZERO2 = "zero2" ZERO3 = "zero3" FP16 = "fp16" BF16 = "bf16" HF_OPTIM = "hf_optim" HF_SCHEDULER = "hf_scheduler" DS_OPTIM = "ds_optim" DS_SCHEDULER = "ds_scheduler" optims = [HF_OPTIM, DS_OPTIM] schedulers = [HF_SCHEDULER, DS_SCHEDULER] stages = [ZERO2, ZERO3] dtypes = [] if is_torch_bf16_available_on_device(torch_device): dtypes.append(BF16) if is_torch_fp16_available_on_device(torch_device): dtypes.append(FP16) def parameterized_custom_name_func(func, param_num, param): # customize the test name generator function as we want both params to appear in the sub-test # name, as by default it shows only the first param param_based_name = parameterized.to_safe_name("_".join(str(x) for x in param.args)) return f"{func.__name__}_{param_based_name}" # Cartesian product of ZeRO stages and dtypes to test params = list(itertools.product(stages, dtypes)) params_with_optims_and_schedulers = list(itertools.product(stages, dtypes, optims, schedulers)) @require_deepspeed class CoreIntegrationDeepSpeed(TestCasePlus, TrainerIntegrationCommon): """ Testing non-Trainer DeepSpeed integration """ def setUp(self): super().setUp() master_port = get_master_port(real_launcher=False) self.dist_env_1_gpu = { "MASTER_ADDR": "localhost", "MASTER_PORT": master_port, "RANK": "0", "LOCAL_RANK": "0", "WORLD_SIZE": "1", } def tearDown(self): super().tearDown() # reset the ds config global so that tests state doesn't leak unset_hf_deepspeed_config() def test_init_zero3(self): # test that zero.Init() works correctly (a condensed usage sketch appears right after this test file) ds_config = { "train_batch_size": 1, "zero_optimization": { "stage": 3, }, } dschf = HfDeepSpeedConfig(ds_config) self.assertTrue(dschf.is_zero3()) self.assertTrue(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: AutoModel.from_pretrained(T5_TINY) self.assertIn("Detected DeepSpeed ZeRO-3", cl.out) # now remove zero optimization del ds_config["zero_optimization"] dschf = HfDeepSpeedConfig(ds_config)
self.assertFalse(dschf.is_zero3()) self.assertFalse(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: AutoModel.from_pretrained(T5_TINY) self.assertNotIn("Detected DeepSpeed ZeRO-3", cl.out) @require_torch_fp16 @require_torch_accelerator def test_init_zero3_fp16(self): # test that zero.Init() works correctly under zero3/fp16 ds_config = { "train_batch_size": 1, "zero_optimization": { "stage": 3, }, "fp16": { "enabled": True, }, } dschf = HfDeepSpeedConfig(ds_config) self.assertTrue(dschf.is_zero3()) self.assertTrue(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: AutoModel.from_pretrained(T5_TINY) self.assertIn("Detected DeepSpeed ZeRO-3", cl.out) # now remove zero optimization del ds_config["zero_optimization"] dschf = HfDeepSpeedConfig(ds_config) self.assertFalse(dschf.is_zero3()) self.assertFalse(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: AutoModel.from_pretrained(T5_TINY) self.assertNotIn("Detected DeepSpeed ZeRO-3", cl.out) def test_init_zero3_missing_params(self): # test that zero.Init() for missing parameters works correctly under zero3 import deepspeed import torch from transformers.models.gpt2.modeling_gpt2 import GPT2PreTrainedModel class TinyGPT2WithUninitializedWeights(GPT2PreTrainedModel): def __init__(self, config): super().__init__(config) self.transformer = AutoModel.from_pretrained(GPT2_TINY, config=config) self.new_head = torch.nn.Linear(config.hidden_size, config.vocab_size, bias=True) def forward(self, *args, **kwargs): transformer_outputs = self.transformer(*args, **kwargs) hidden_states = transformer_outputs[0] return self.new_head(hidden_states).float() def _init_weights(self, module): super()._init_weights(module) if module is self.new_head: self.new_head.weight.data.fill_(-100.0) self.new_head.bias.data.fill_(+100.0) ds_config = { "train_batch_size": 1, "zero_optimization": { "stage": 3, }, } dschf = HfDeepSpeedConfig(ds_config) self.assertTrue(dschf.is_zero3()) self.assertTrue(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: model = TinyGPT2WithUninitializedWeights.from_pretrained(GPT2_TINY) self.assertIn("Detected DeepSpeed ZeRO-3", cl.out) self.assertRegex(cl.out, r"newly initialized.*new_head\.bias.*new_head\.weight") with deepspeed.zero.GatheredParameters([model.new_head.weight, model.new_head.bias]): self.assertTrue( torch.allclose(model.new_head.weight, torch.tensor(-100.0, device=model.new_head.weight.device)), ) self.assertTrue( torch.allclose(model.new_head.bias, torch.tensor(+100.0, device=model.new_head.bias.device)), ) # now remove zero optimization del ds_config["zero_optimization"] dschf = HfDeepSpeedConfig(ds_config) self.assertFalse(dschf.is_zero3()) self.assertFalse(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: model = TinyGPT2WithUninitializedWeights.from_pretrained(GPT2_TINY) 
self.assertNotIn("Detected DeepSpeed ZeRO-3", cl.out) self.assertRegex(cl.out, r"newly initialized.*new_head\.bias.*new_head\.weight") self.assertTrue( torch.allclose(model.new_head.weight, torch.tensor(-100.0, device=model.new_head.weight.device)), ) self.assertTrue( torch.allclose(model.new_head.bias, torch.tensor(+100.0, device=model.new_head.bias.device)), ) def test_arange_bf16(self): # Tests that configuring DeepSpeed with 16 bits does not cause float `torch.arange()` tensors to be cast down. # NOTE -- this assumes that the function calls have the following downcast-preventing pattern, i.e. # `torch.arange(...,dtype=torch.int64)` followed by a cast like `.to(torch.float32)`. 🚨 If this pattern is # NOT applied (e.g. `torch.arange(...,dtype=torch.float32)` is used), DeepSpeed can automatically cast it down # at init time. See https://github.com/huggingface/transformers/issues/28685 for more info. ds_config = { "train_batch_size": 1, "zero_optimization": { "stage": 3, }, "bf16": {"enabled": True}, } dschf = HfDeepSpeedConfig(ds_config) self.assertTrue(dschf.is_zero3()) self.assertTrue(is_deepspeed_zero3_enabled()) with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: model = AutoModel.from_pretrained(GPTJ_TINY) self.assertIn("Detected DeepSpeed ZeRO-3", cl.out) # The model weights are in BF16 as per deepspeed config self.assertTrue(str(model.h[0].attn.q_proj.weight.dtype) == "torch.bfloat16") good_deepspeed_sin_cos = model.h[0].attn.embed_positions # Monkeypatches the function that creates RoPE embeddings using the INCORRECT torch.arange() pattern, and # then recreates the model def bad_deepspeed_create_sinusoidal_positions(num_pos: int, dim: int) -> torch.Tensor: inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, dtype=torch.int64) / dim)) # Incorrect pattern here: torch.arange has dtype=torch.float32 as its argument, and it will automatically # converted to BF16 by DeepSpeed sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=inv_freq.dtype), inv_freq) return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1) good_deepspeed_create_sinusoidal_positions = transformers.models.gptj.modeling_gptj.create_sinusoidal_positions transformers.models.gptj.modeling_gptj.create_sinusoidal_positions = bad_deepspeed_create_sinusoidal_positions with LoggingLevel(logging.INFO): with mockenv_context(**self.dist_env_1_gpu): logger = logging.get_logger("transformers.modeling_utils") with CaptureLogger(logger) as cl: model = AutoModel.from_pretrained(GPTJ_TINY) self.assertIn("Detected DeepSpeed ZeRO-3", cl.out) self.assertTrue(str(model.h[0].attn.q_proj.weight.dtype) == "torch.bfloat16") bad_deepspeed_sin_cos = model.h[0].attn.embed_positions # Compares the two values: the two sets of values are different, and the correct one matches the torch # (i.e. outside DeepSpeed) version. 
good_torch_sin_cos = good_deepspeed_create_sinusoidal_positions( model.config.max_position_embeddings, model.config.rotary_dim ) self.assertFalse(torch.allclose(good_deepspeed_sin_cos, bad_deepspeed_sin_cos)) torch.testing.assert_close(good_torch_sin_cos, good_deepspeed_sin_cos.cpu()) # Finally, we can see that the incorrect pattern is okay on vanilla torch, demonstrating that this issue is # exclusive to DeepSpeed bad_torch_sin_cos = bad_deepspeed_create_sinusoidal_positions( model.config.max_position_embeddings, model.config.rotary_dim ) torch.testing.assert_close(bad_torch_sin_cos, good_torch_sin_cos) class TrainerIntegrationDeepSpeedWithCustomConfig(TestCasePlus): def setUp(self): super().setUp() args = TrainingArguments(".") self.n_epochs = args.num_train_epochs self.batch_size = args.train_batch_size master_port = get_master_port(real_launcher=False) self.dist_env_1_gpu = { "MASTER_ADDR": "localhost", "MASTER_PORT": master_port, "RANK": "0", "LOCAL_RANK": "0", "WORLD_SIZE": "1", } self.ds_config_file = { "zero2": f"{self.test_file_dir_str}/ds_config_zero2.json", "zero3": f"{self.test_file_dir_str}/ds_config_zero3.json", } # use self.get_config_dict(stage) to use these to ensure the original is not modified with open(self.ds_config_file[ZERO2], encoding="utf-8") as f: config_zero2 = json.load(f) with open(self.ds_config_file[ZERO3], encoding="utf-8") as f: config_zero3 = json.load(f) # The following setting slows things down, so don't enable it by default unless needed by a test. # It's in the file as a demo for users since we want everything to work out of the box even if slower. config_zero3["zero_optimization"]["stage3_gather_16bit_weights_on_model_save"] = False self.ds_config_dict = { "zero2": config_zero2, "zero3": config_zero3, } def tearDown(self): super().tearDown() # reset the ds config global so that tests state doesn't leak unset_hf_deepspeed_config() def get_config_dict(self, stage): # As some tests modify the dict, always make a copy return deepcopy(self.ds_config_dict[stage]) @require_deepspeed @require_torch_fp16 @require_torch_accelerator class TrainerIntegrationDeepSpeed(TrainerIntegrationDeepSpeedWithCustomConfig, TrainerIntegrationCommon): """ This class is for testing directly via get_regression_trainer It mixes in `TrainerIntegrationCommon` which already has a lot of helper validation methods which we can re-use here. Important: this class' setup can only work with a single gpu because it runs within the current pytest worker. For multi-gpu tests use TestDeepSpeedWithLauncher. Note: if any of the tests of this class get run there will be at least one gpu occupied by them until this pytest worker exits. This is because the gpu memory allocated by the cuda-kernels won't be released until this pytest worker exits. This may appear as some run-away tests if you watch `nvidia-smi` while other tests that fork new processes are run. So there will be one or two "stale" processes reported in `nvidia-smi`. This is not a bug. """ # --- These tests are enough to run on one of zero stages --- # def test_hf_ds_config_mismatch(self): ds_config = self.get_config_dict(ZERO2) # Purposefully configure these values to mismatch TrainingArguments values. 
# This currently doesn't cover all keys (but it could) per_device_train_batch_size = 2 ds_config["train_micro_batch_size_per_gpu"] = per_device_train_batch_size + 2 ds_config["train_batch_size"] = 1000 gradient_accumulation_steps = 2 ds_config["gradient_accumulation_steps"] = gradient_accumulation_steps + 2 max_grad_norm = 1.0 ds_config["gradient_clipping"] = max_grad_norm + 0.1 adam_beta1, adam_beta2 = 0.9, 0.99 ds_config["optimizer"]["params"]["betas"] = [adam_beta1 - 0.1, adam_beta2 - 0.1] fp16 = True ds_config["fp16"]["enabled"] = not fp16 keys = [ "per_device_train_batch_size", "train_batch_size", "gradient_accumulation_steps", "max_grad_norm", "betas", "fp16", ] with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer( local_rank=0, fp16=fp16, deepspeed=ds_config, per_device_train_batch_size=per_device_train_batch_size, gradient_accumulation_steps=gradient_accumulation_steps, max_grad_norm=max_grad_norm, adam_beta1=adam_beta1, adam_beta2=adam_beta2, output_dir=self.get_auto_remove_tmp_dir(), ) with self.assertRaises(Exception) as context: trainer.train() for key in keys: self.assertTrue( key in str(context.exception), f"{key} is not in the exception message:\n{context.exception}", ) # Test various combos # 1. DS scheduler + DS optimizer: this is already tested by most other tests # 2. HF scheduler + HF optimizer: # 3. DS scheduler + HF optimizer: # 4. HF scheduler + DS optimizer: def test_hf_scheduler_hf_optimizer(self): a = 0 with mockenv_context(**self.dist_env_1_gpu): ds_config_zero2_dict = self.get_config_dict(ZERO2) del ds_config_zero2_dict["optimizer"] # force default HF Trainer optimizer del ds_config_zero2_dict["scheduler"] # force default HF Trainer scheduler ds_config_zero2_dict["zero_optimization"]["offload_optimizer"]["device"] = "none" ds_config_zero2_dict["fp16"]["initial_scale_power"] = 1 # force optimizer on the first step trainer = get_regression_trainer( a=a, local_rank=0, fp16=True, deepspeed=ds_config_zero2_dict, output_dir=self.get_auto_remove_tmp_dir() ) trainer.train() new_a = trainer.model.a.item() self.assertNotEqual(new_a, a) def test_ds_scheduler_hf_optimizer(self): a = 0 with mockenv_context(**self.dist_env_1_gpu): ds_config_zero2_dict = self.get_config_dict(ZERO2) del ds_config_zero2_dict["optimizer"] # force default HF Trainer optimizer ds_config_zero2_dict["zero_optimization"]["offload_optimizer"]["device"] = "none" ds_config_zero2_dict["fp16"]["initial_scale_power"] = 1 # force optimizer on the first step trainer = get_regression_trainer( a=a, local_rank=0, fp16=True, deepspeed=ds_config_zero2_dict, output_dir=self.get_auto_remove_tmp_dir() ) trainer.train() new_a = trainer.model.a.item() self.assertNotEqual(new_a, a) def test_hf_scheduler_ds_optimizer(self): a = 0 with mockenv_context(**self.dist_env_1_gpu): ds_config_zero2_dict = self.get_config_dict(ZERO2) del ds_config_zero2_dict["scheduler"] # force default HF Trainer scheduler ds_config_zero2_dict["zero_optimization"]["offload_optimizer"]["device"] = "none" ds_config_zero2_dict["fp16"]["initial_scale_power"] = 1 # force optimizer on the first step trainer = get_regression_trainer( a=a, local_rank=0, fp16=True, deepspeed=ds_config_zero2_dict, output_dir=self.get_auto_remove_tmp_dir() ) trainer.train() new_a = trainer.model.a.item() self.assertNotEqual(new_a, a) @require_deepspeed_aio def test_stage3_nvme_offload(self): with mockenv_context(**self.dist_env_1_gpu): # this actually doesn't have to be on NVMe, any storage will do since this test only # runs a simple check that we 
can use some directory as if it were NVMe nvme_path = self.get_auto_remove_tmp_dir() nvme_config = {"device": "nvme", "nvme_path": nvme_path} ds_config_zero3_dict = self.get_config_dict(ZERO3) ds_config_zero3_dict["zero_optimization"]["offload_optimizer"] = nvme_config ds_config_zero3_dict["zero_optimization"]["offload_param"] = nvme_config ds_config_zero3_dict["zero_optimization"]["stage3_gather_16bit_weights_on_model_save"] = True trainer = get_regression_trainer( local_rank=0, fp16=True, deepspeed=ds_config_zero3_dict, output_dir=self.get_auto_remove_tmp_dir() ) with CaptureLogger(deepspeed_logger) as cl: trainer.train() self.assertIn("DeepSpeed info", cl.out, "expected DeepSpeed logger output but got none") @require_optuna def test_hyperparameter_search(self): with mockenv_context(**self.dist_env_1_gpu): ds_config_zero3_dict = self.get_config_dict(ZERO3) # hyperparameter_search requires model_init() to recreate the model for each trial def model_init(): config = RegressionModelConfig(a=0, b=0, double_output=False) model = RegressionPreTrainedModel(config) return model trainer = get_regression_trainer( local_rank=0, fp16=True, model_init=model_init, deepspeed=ds_config_zero3_dict, output_dir=self.get_auto_remove_tmp_dir(), ) n_trials = 3 with CaptureLogger(deepspeed_logger) as cl: with CaptureStd() as cs: trainer.hyperparameter_search(direction="maximize", n_trials=n_trials) self.assertIn("DeepSpeed info", cl.out, "expected DeepSpeed logger output but got none") self.assertIn(f"Trial {n_trials - 1} finished with value", cs.err, "expected hyperparameter_search output") self.assertIn("Best is trial", cs.err, "expected hyperparameter_search output") # --- These tests need to run on both zero stages --- # @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_hf_optimizer_with_offload(self, stage, dtype): # non-DS optimizers can be used with ZERO-offload (as long as they have both CPU and GPU implementation (except LAMB)) ds_config_dict = self.get_config_dict(stage) del ds_config_dict["optimizer"] # force default HF Trainer optimizer # force cpu offload ds_config_dict["zero_optimization"]["offload_optimizer"]["device"] = "cpu" ds_config_dict["zero_force_ds_cpu_optimizer"] = False # offload is not efficient w/o CPUAdam with mockenv_context(**self.dist_env_1_gpu): kwargs = {"local_rank": 0, "deepspeed": ds_config_dict, "output_dir": self.get_auto_remove_tmp_dir()} kwargs[dtype] = True trainer = get_regression_trainer(**kwargs) with CaptureLogger(deepspeed_logger) as cl: trainer.train() self.assertIn("DeepSpeed info", cl.out, "expected DeepSpeed logger output but got none") @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_fake_notebook_no_launcher(self, stage, dtype): # this setup emulates a notebook where a launcher needs to be emulated by hand # note that unittest resets sys.stdout each test, so `CaptureStd` will work here to capture # DeepSpeed log if this test happens to run first in this pytest worker. But it will fail if # it's run not as a first test as `sys.stdout` will no longer be the same. So we either have # to reset `deepspeed_logger.handlers[0].setStream(sys.stdout)` or directly capture from the deepspeed_logger. 
with mockenv_context(**self.dist_env_1_gpu): kwargs = { "local_rank": 0, "deepspeed": self.get_config_dict(stage), "output_dir": self.get_auto_remove_tmp_dir(), } kwargs[dtype] = True trainer = get_regression_trainer(**kwargs) with CaptureLogger(deepspeed_logger) as cl: trainer.train() self.assertIn("DeepSpeed info", cl.out, "expected DeepSpeed logger output but got none") @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_early_get_last_lr(self, stage, dtype): # with deepspeed's fp16 and dynamic loss scale enabled the optimizer/scheduler steps may # not run for the first few dozen steps while loss scale is too large, and thus during # that time `get_last_lr` will fail if called during that warm up stage, # # setting `logging_steps=1` forces an early `trainer._maybe_log_save_evaluate()` which calls # `self.lr_scheduler.get_last_lr()` and originally it'd fail on the very first step. with mockenv_context(**self.dist_env_1_gpu): a = b = 0.0 kwargs = { "a": a, "b": b, "local_rank": 0, "train_len": 8, "deepspeed": self.get_config_dict(stage), "per_device_train_batch_size": 8, "logging_steps": 1, "output_dir": self.get_auto_remove_tmp_dir(), } kwargs[dtype] = True trainer = get_regression_trainer(**kwargs) trainer.train() post_train_a = trainer.model.a.item() # XXX: for some reason the following check fails with zero3/fp16 and any/bf16 - not # broken but a different qualitative outcome - as if the optimizer did run # oddly getting 1.0 for both a and b from 0.0 - there is a bug somewhere # print(trainer.model.a.item()) # print(trainer.model.b.item()) # need to investigate at some point if (stage == ZERO3 and dtype == FP16) or (dtype == BF16): self.skipTest(reason="When using zero3/fp16 or any/bf16 the optimizer seems to run oddly") # it's enough that train didn't fail for this test, but we must check that # optimizer/scheduler didn't run (since if it did this test isn't testing the right thing) self.assertEqual(post_train_a, a) @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_gradient_accumulation(self, stage, dtype): # this test measures that we get identical weights and similar loss with: # 1. per_device_train_batch_size=16, gradient_accumulation_steps=1 # 2. per_device_train_batch_size=4, gradient_accumulation_steps=4 # since the 2nd should produce the effective batch of 1st, with the same results # # I can get an identical loss for a small train_len=32, plus the power of the initial # dynamic loss scale value set to: # "fp16.initial_scale_power": 1 # plus having the same WarmupLR's warmup_min_lr == warmup_max_lr in the config file # but for some reason going to train_len=64, the weights start to mismatch with this setup. 
# the culprit seems to be `initial_scale_power` - putting it back to its default 32 keeps the weights identical train_len = 64 a = b = 0.0 kwargs = { "a": a, "b": b, "local_rank": 0, "train_len": train_len, "deepspeed": self.get_config_dict(stage), "output_dir": self.get_auto_remove_tmp_dir(), } kwargs[dtype] = True with mockenv_context(**self.dist_env_1_gpu): no_grad_accum_trainer = get_regression_trainer( **kwargs, per_device_train_batch_size=16, gradient_accumulation_steps=1, ) no_grad_accum_result = no_grad_accum_trainer.train() no_grad_accum_loss = no_grad_accum_result.training_loss no_grad_accum_a = no_grad_accum_trainer.model.a.item() no_grad_accum_b = no_grad_accum_trainer.model.b.item() # make sure the optimizer kicked in - if it hasn't changed from the original value of a then make train_len bigger self.assertNotEqual(no_grad_accum_a, a) with mockenv_context(**self.dist_env_1_gpu): yes_grad_accum_trainer = get_regression_trainer( **kwargs, per_device_train_batch_size=4, gradient_accumulation_steps=4, ) yes_grad_accum_result = yes_grad_accum_trainer.train() yes_grad_accum_loss = yes_grad_accum_result.training_loss yes_grad_accum_a = yes_grad_accum_trainer.model.a.item() yes_grad_accum_b = yes_grad_accum_trainer.model.b.item() self.assertNotEqual(yes_grad_accum_a, a) # training with half the batch size but accumulation steps as 2 should give the same # weights, but sometimes get a slight difference still of 1e-6 if torch_device == "hpu": self.assertAlmostEqual(no_grad_accum_a, yes_grad_accum_a, delta=1e-4) self.assertAlmostEqual(no_grad_accum_b, yes_grad_accum_b, delta=1e-4) else: self.assertAlmostEqual(no_grad_accum_a, yes_grad_accum_a, places=5) self.assertAlmostEqual(no_grad_accum_b, yes_grad_accum_b, places=5) # Relative difference. See the note above how to get identical loss on a small bs self.assertTrue((no_grad_accum_loss - yes_grad_accum_loss) / (no_grad_accum_loss + 1e-15) <= 1e-3) # NOTE: Currently a disabled test. In the future we should re-enable it. # Issue resolves around Zero-3 w/ DPO/TRL + DeepSpeed # As well as Zero-3 inference # Related PR: https://github.com/huggingface/transformers/pull/32299 # def test_missed_zero3_init(self): # from transformers import Trainer # noqa # with mockenv_context(**self.dist_env_1_gpu): # model = AutoModel.from_pretrained(T5_TINY) # training_args = TrainingArguments( # output_dir="./test_missed_zero3_init", # deepspeed=self.get_config_dict(ZERO3), # ) # with self.assertRaises( # ValueError, msg="Model was not initialized with `Zero-3` despite being configured." 
# ): # _ = Trainer( # model=model, # args=training_args, # ) # # Now do it properly, triggered from our `TrainingArguments` earlier # model = AutoModel.from_pretrained(T5_TINY) # trainer = Trainer( # model=model, # args=training_args, # ) # assert trainer.is_deepspeed_enabled # assert model._transformers_zero3_init_used def check_saved_checkpoints_deepspeed(self, output_dir, freq, total, stage, dtype): # adapted from TrainerIntegrationCommon.check_saved_checkpoints file_list = [SAFE_WEIGHTS_NAME, "training_args.bin", "trainer_state.json", "config.json"] if stage == ZERO2: ds_file_list = ["mp_rank_00_model_states.pt"] elif stage == ZERO3: ds_file_list = ["zero_pp_rank_0_mp_rank_00_model_states.pt"] else: raise ValueError(f"unknown stage {stage}") if dtype == "bf16": ds_file_list.append("bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt") for step in range(freq, total, freq): checkpoint = os.path.join(output_dir, f"checkpoint-{step}") self.assertTrue(os.path.isdir(checkpoint), f"[{stage}] {checkpoint} dir is not found") # common files for filename in file_list: path = os.path.join(checkpoint, filename) self.assertTrue(os.path.isfile(path), f"[{stage}] {path} is not found") # ds files ds_path = os.path.join(checkpoint, f"global_step{step}") for filename in ds_file_list: # filename = os.path.join(path, filename) # print(filename) path = os.path.join(ds_path, filename) self.assertTrue(os.path.isfile(path), f"[{stage}] {path} is not found") @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_save_checkpoints(self, stage, dtype): # adapted from TrainerIntegrationTest.test_save_checkpoints freq = 5 output_dir = self.get_auto_remove_tmp_dir() ds_config_dict = self.get_config_dict(stage) if dtype == FP16: ds_config_dict["fp16"]["initial_scale_power"] = 1 # force optimizer on the first step # XXX: if stage == ZERO3: ds_config_dict["zero_optimization"]["stage3_gather_16bit_weights_on_model_save"] = True # save checkpoints with mockenv_context(**self.dist_env_1_gpu): kwargs = { "output_dir": output_dir, "save_steps": freq, "deepspeed": ds_config_dict, } kwargs[dtype] = True trainer = get_regression_trainer(**kwargs) trainer.train() total = int(self.n_epochs * 64 / self.batch_size) self.check_saved_checkpoints_deepspeed(output_dir, freq, total, stage, dtype) @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_can_resume_training_errors(self, stage, dtype): with mockenv_context(**self.dist_env_1_gpu): ds_config_dict = self.get_config_dict(stage) output_dir = self.get_auto_remove_tmp_dir() kwargs = {"output_dir": output_dir, "deepspeed": ds_config_dict} kwargs[dtype] = True trainer = get_regression_trainer(**kwargs) # 1. fail to find any checkpoint - due a fresh output_dir with self.assertRaises(Exception) as context: trainer.train(resume_from_checkpoint=True) self.assertTrue( "No valid checkpoint found in output directory" in str(context.exception), f"got exception: {context.exception}", ) # 2. 
fail to find a bogus checkpoint with self.assertRaises(Exception) as context: checkpoint = os.path.join(output_dir, "checkpoint-5") trainer.train(resume_from_checkpoint=f"{checkpoint}-bogus") @parameterized.expand(params_with_optims_and_schedulers, name_func=parameterized_custom_name_func) def test_can_resume_training_normal(self, stage, dtype, optim, scheduler): # adapted from TrainerIntegrationTest.test_can_resume_training # test normal resume for each stage separately, error-handling is tested in a different test # ToDo: Currently, hf_optim + hf_scheduler resumes with the correct states and # also has same losses for few steps but then slowly diverges. Need to figure it out. if optim == HF_OPTIM and scheduler == HF_SCHEDULER: self.skipTest(reason="hf_optim + hf_scheduler resumes with the correct states but slowly diverges") output_dir = self.get_auto_remove_tmp_dir("./xxx", after=False) ds_config_dict = self.get_config_dict(stage) if dtype == FP16: ds_config_dict["fp16"]["initial_scale_power"] = 1 # force optimizer on the first step # XXX: if stage == ZERO3: ds_config_dict["zero_optimization"]["stage3_gather_16bit_weights_on_model_save"] = True if optim == HF_OPTIM: del ds_config_dict["optimizer"] if scheduler == HF_SCHEDULER: del ds_config_dict["scheduler"] kwargs = { "output_dir": output_dir, "train_len": 128, "save_steps": 5, "learning_rate": 0.1, "deepspeed": ds_config_dict, } kwargs[dtype] = True with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer(**kwargs) trainer.train() (a, b) = trainer.model.a.item(), trainer.model.b.item() state = dataclasses.asdict(trainer.state) checkpoint = os.path.join(output_dir, "checkpoint-5") # Reinitialize trainer trainer = get_regression_trainer(**kwargs) trainer.train(resume_from_checkpoint=checkpoint) (a1, b1) = trainer.model.a.item(), trainer.model.b.item() state1 = dataclasses.asdict(trainer.state) self.assertEqual(a, a1) self.assertEqual(b, b1) self.check_trainer_state_are_the_same(state, state1) # Now check with a later checkpoint that it also works when we span over one epoch checkpoint = os.path.join(output_dir, "checkpoint-15") # Reinitialize trainer and load model trainer = get_regression_trainer(**kwargs) trainer.train(resume_from_checkpoint=checkpoint) (a1, b1) = trainer.model.a.item(), trainer.model.b.item() state1 = dataclasses.asdict(trainer.state) self.assertEqual(a, a1) self.assertEqual(b, b1) self.check_trainer_state_are_the_same(state, state1) # Finally, should be able to resume with the same trainer/same deepspeed engine instance # XXX: but currently this not possible due DS bug: https://github.com/deepspeedai/DeepSpeed/issues/1612 # trainer.train(resume_from_checkpoint=checkpoint) # a workaround needs to be used that re-creates the deepspeed engine @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_load_state_dict_from_zero_checkpoint(self, stage, dtype): # test that we can load fp32 weights directly from the zero checkpoint into the current model output_dir = self.get_auto_remove_tmp_dir() # "./xxx", after=False, before=False) ds_config_dict = self.get_config_dict(stage) kwargs = { "output_dir": output_dir, "train_len": 4, "per_device_train_batch_size": 4, "num_train_epochs": 1, "save_strategy": "steps", "save_steps": 1, "learning_rate": 0.1, "deepspeed": ds_config_dict, } kwargs[dtype] = True with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer(**kwargs) trainer.train() (a, b) = trainer.model.a.item(), trainer.model.b.item() state = 
dataclasses.asdict(trainer.state) checkpoint_dir = get_last_checkpoint(output_dir) model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir) (a1, b1) = model.a.item(), model.b.item() state1 = dataclasses.asdict(trainer.state) self.assertEqual(a, a1) self.assertEqual(b, b1) self.check_trainer_state_are_the_same(state, state1) def test_ds_config_object(self): # test that we can switch from zero2 to zero3 in the same process for example # test is_zero, etc. output_dir = self.get_auto_remove_tmp_dir() kwargs = {"output_dir": output_dir, "train_len": 8, "fp16": True} ds_config_zero3_dict = self.get_config_dict(ZERO3) ds_config_zero2_dict = self.get_config_dict(ZERO2) with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer(deepspeed=ds_config_zero3_dict, **kwargs) self.assertTrue(is_deepspeed_zero3_enabled()) # test we can repeat that and with train this time trainer = get_regression_trainer(deepspeed=ds_config_zero3_dict, **kwargs) trainer.train() self.assertTrue(is_deepspeed_zero3_enabled()) # test zero3 is disabled trainer = get_regression_trainer(deepspeed=ds_config_zero2_dict, **kwargs) self.assertFalse(is_deepspeed_zero3_enabled()) # check config obj config = deepspeed_config() self.assertTrue(bool(config), "Deepspeed config should be accessible") # with accelerate integration below line is additionally required for this test to pass trainer.accelerator.state._reset_state() del trainer # now weakref should gc the global and we shouldn't get anything here config = deepspeed_config() self.assertFalse(is_deepspeed_zero3_enabled()) self.assertFalse(bool(config), "Deepspeed config should not be accessible") @parameterized.expand(params, name_func=parameterized_custom_name_func) def test_load_best_model(self, stage, dtype): # Test that forced deepspeed reinit doesn't break the model. 
the forced re-init after # loading the best model in Trainer is there to workaround this bug in Deepspeed # https://github.com/deepspeedai/DeepSpeed/issues/1612 # # The test is derived from a repro script submitted in this Issue: # https://github.com/huggingface/transformers/issues/17114 # # One additional feature of this test is that we use a non-AdamW optimizer to test that # deepspeed doesn't fallback to AdamW, which would prevent the optimizer states from loading # correctly from transformers import T5ForConditionalGeneration, T5Tokenizer, Trainer # noqa output_dir = self.get_auto_remove_tmp_dir() # "./xxx", after=False, before=False) ds_config_dict = self.get_config_dict(stage) del ds_config_dict["optimizer"] # will use HF Trainer optimizer del ds_config_dict["scheduler"] # will use HF Trainer scheduler ds_config_dict["zero_force_ds_cpu_optimizer"] = False # offload is not efficient w/o CPUAdam # must use this setting to get the reload path exercised ds_config_dict["zero_optimization"]["stage3_gather_16bit_weights_on_model_save"] = True with mockenv_context(**self.dist_env_1_gpu): args_dict = { "per_device_train_batch_size": 1, "per_device_eval_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-4, "num_train_epochs": 1, "do_train": True, "do_eval": True, "optim": "adafactor", "eval_strategy": "steps", "eval_steps": 1, "save_strategy": "steps", "save_steps": 1, "load_best_model_at_end": True, "max_steps": 1, "deepspeed": ds_config_dict, "report_to": "none", } training_args = TrainingArguments(output_dir, **args_dict) tokenizer = T5Tokenizer.from_pretrained(T5_TINY) model = T5ForConditionalGeneration.from_pretrained(T5_TINY) def _add_eos_to_examples(example): example["input_text"] = f"question: {example['question']} context: {example['context']}" example["target_text"] = example["answers"]["text"][0] if len(example["answers"]["text"]) > 0 else "" return example def _convert_to_features(example_batch): input_encodings = tokenizer.batch_encode_plus( example_batch["input_text"], padding="max_length", max_length=512, truncation=True ) target_encodings = tokenizer.batch_encode_plus( example_batch["target_text"], padding="max_length", max_length=16, truncation=True ) encodings = { "input_ids": input_encodings["input_ids"], "attention_mask": input_encodings["attention_mask"], "labels": target_encodings["input_ids"], } return encodings def get_dataset(): data_file = str(self.tests_dir / "fixtures/tests_samples/SQUAD/sample.json") data_files = {"train": data_file, "validation": data_file} raw_datasets = datasets.load_dataset("json", data_files=data_files, field="data") train_dataset = raw_datasets["train"].map(_add_eos_to_examples).map(_convert_to_features, batched=True) valid_dataset = deepcopy(train_dataset) return train_dataset, valid_dataset train_dataset, eval_dataset = get_dataset() trainer = Trainer( model=model, processing_class=tokenizer, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, ) trainer.train() # crash 1 was here trainer.evaluate() # crash 2 was here @slow @run_first @require_deepspeed @require_torch_accelerator class TestDeepSpeedWithLauncher(TestCasePlus): """This class is for testing via an external script - can do multiple gpus""" # Tests to devise # # # 1. 
predict_with_generate on multigpu - need to figure out how to give input sequences so that # the 2 gpus will generate prediction sequences that aren't of the same length - this is because # we had to code a special feature to sync the gpus when the predicted sequences aren't of the # same length. In general this will be tested as a side-effect through a variety of other tests - # it'll simply hang trying to synchronize with other gpus if this problem is encountered. So as # long as we have a few full tests running on zero3 + predict_with_generate this should be # mostly covered. # # but there are 5 variations on beam search in `generate` - with identical code branched with `if # synced_gpus` # # 2. most tests should probably be run on both: zero2 and zero3 configs # @parameterized.expand(params, name_func=parameterized_custom_name_func) @require_torch_multi_accelerator @run_first def test_basic_distributed(self, stage, dtype): self.run_and_check(stage=stage, dtype=dtype, distributed=True) @require_torch_fp16 @run_first def test_do_eval_no_train(self): # testing only zero3 since zero2 makes no sense with inference self.run_and_check( stage=ZERO3, dtype=FP16, eval_steps=1, distributed=False, do_train=False, do_eval=True, ) @parameterized.expand(params, name_func=parameterized_custom_name_func) @run_first def test_fp32_non_distributed(self, stage, dtype): # real model needs too much GPU memory under stage2+fp32, so using tiny random model here - # therefore no quality checks, just basic completion checks are done self.run_and_check( stage=stage, dtype=dtype, model_name=T5_TINY, distributed=False, do_train=True, do_eval=True, quality_checks=False, fp32=True, ) @parameterized.expand(params, name_func=parameterized_custom_name_func) @require_torch_multi_accelerator @run_first def test_fp32_distributed(self, stage, dtype): # real model needs too much GPU memory under stage2+fp32, so using tiny random model here - # therefore no quality checks, just basic completion checks are done self.run_and_check( stage=stage, dtype=dtype, model_name=T5_TINY, distributed=True, do_train=True, do_eval=True, quality_checks=False, fp32=True, ) @parameterized.expand(params, name_func=parameterized_custom_name_func) @run_first def test_resume_train_not_from_ds_checkpoint(self, stage, dtype): # do normal training and then resume not from the deepspeed checkpoint but explicitly from # the saved model dir do_train = True do_eval = False kwargs = { "stage": stage, "dtype": dtype, "eval_steps": 1, "distributed": True, "do_train": do_train, "do_eval": do_eval, } # 1. normal training output_dir = self.run_and_check(**kwargs) # 2. now resume explicitly from the saved weights, by passing --model_name_or_path output_dir # - i.e. 
the same path the model was saved to in step 1 output_dir = self.run_trainer(**kwargs, model_name=output_dir) self.do_checks(output_dir, do_train=do_train, do_eval=do_eval) @parameterized.expand(["bf16", "fp16", "fp32"]) @require_torch_multi_accelerator @run_first def test_inference(self, dtype): if dtype == "bf16" and not is_torch_bf16_available_on_device(torch_device): self.skipTest(reason="test requires bfloat16 hardware support") if dtype == "fp16" and not is_torch_fp16_available_on_device(torch_device): self.skipTest(reason="test requires fp16 hardware support") # this is just inference, so no optimizer should be loaded # it only works for z3 (makes no sense with z1-z2) fp32 = dtype == "fp32" self.run_and_check( stage=ZERO3, dtype=dtype, model_name=T5_TINY, distributed=True, do_train=False, do_eval=True, quality_checks=False, fp32=fp32, ) def do_checks(self, output_dir, do_train=True, do_eval=True, quality_checks=True): if do_train: train_metrics = load_json(os.path.join(output_dir, "train_results.json")) self.assertIn("train_samples_per_second", train_metrics) if quality_checks: self.assertGreater(train_metrics["train_samples_per_second"], 0.5) if do_eval: eval_metrics = load_json(os.path.join(output_dir, "eval_results.json")) self.assertIn("eval_bleu", eval_metrics) if quality_checks: self.assertGreater(eval_metrics["eval_bleu"], 1) # XXX: need to do better validation beyond just that the run was successful def run_and_check( self, stage, dtype, model_name: str = T5_SMALL, eval_steps: int = 10, distributed: bool = True, do_train: bool = True, do_eval: bool = True, quality_checks: bool = True, fp32: bool = False, extra_args_str: Optional[str] = None, remove_args_str: Optional[str] = None, ): # we are doing quality testing so using a small real model output_dir = self.run_trainer( stage=stage, dtype=dtype, model_name=model_name, eval_steps=eval_steps, num_train_epochs=1, do_train=do_train, do_eval=do_eval, distributed=distributed, fp32=fp32, extra_args_str=extra_args_str, remove_args_str=remove_args_str, ) self.do_checks(output_dir, do_train=do_train, do_eval=do_eval, quality_checks=quality_checks) return output_dir def run_trainer( self, stage: str, dtype: str, model_name: str, eval_steps: int = 10, num_train_epochs: int = 1, do_train: bool = False, do_eval: bool = True, distributed: bool = True, fp32: bool = False, extra_args_str: Optional[str] = None, remove_args_str: Optional[str] = None, ): max_len = 32 data_dir = self.test_file_dir / "../fixtures/tests_samples/wmt_en_ro" output_dir = self.get_auto_remove_tmp_dir() args = f""" --model_name_or_path {model_name} --train_file {data_dir}/train.json --validation_file {data_dir}/val.json --output_dir {output_dir} --overwrite_output_dir --max_source_length {max_len} --max_target_length {max_len} --val_max_target_length {max_len} --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps {eval_steps} --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none """.split() args.extend(["--source_prefix", '"translate English to Romanian: "']) if not fp32: args.extend([f"--{dtype}"]) actions = 0 if do_train: actions += 1 args.extend( f""" --do_train --num_train_epochs {str(num_train_epochs)} --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3 """.split() ) if do_eval: actions += 1 args.extend(["--do_eval", "--max_eval_samples", "16", "--per_device_eval_batch_size", "2"]) assert actions > 0, "need at least do_train or do_eval for the test to run" if extra_args_str is not 
None: args.extend(extra_args_str.split()) # currently only works for bool args if remove_args_str is not None: remove_args = remove_args_str.split() args = [x for x in args if x not in remove_args] ds_args = f"--deepspeed {self.test_file_dir_str}/ds_config_{stage}.json".split() script = [f"{self.examples_dir_str}/pytorch/translation/run_translation.py"] launcher = get_launcher(distributed) cmd = launcher + script + args + ds_args # keep for quick debug # print(" ".join([f"\nPYTHONPATH={self.src_dir_str}"] +cmd)); die execute_subprocess_async(cmd, env=self.get_env()) return output_dir @parameterized.expand(params, name_func=parameterized_custom_name_func) @run_first def test_clm(self, stage, dtype): # this test exercises model.resize_token_embeddings() which requires param gathering outside # of forward - it's not used by `run_translation.py`, but it is in `run_clm.py` data_dir = self.tests_dir / "fixtures" output_dir = self.get_auto_remove_tmp_dir() args = f""" --model_name_or_path {GPT2_TINY} --train_file {data_dir}/sample_text.txt --validation_file {data_dir}/sample_text.txt --output_dir {output_dir} --overwrite_output_dir --do_train --do_eval --max_train_samples 16 --max_eval_samples 16 --per_device_train_batch_size 2 --per_device_eval_batch_size 2 --num_train_epochs 1 --warmup_steps 8 --block_size 64 --report_to none """.split() args.extend([f"--{dtype}"]) ds_args = f"--deepspeed {self.test_file_dir_str}/ds_config_{stage}.json".split() script = [f"{self.examples_dir_str}/pytorch/language-modeling/run_clm.py"] launcher = get_launcher(distributed=True) cmd = launcher + script + args + ds_args # keep for quick debug # print(" ".join([f"\nPYTHONPATH={self.src_dir_str}"] +cmd)); die execute_subprocess_async(cmd, env=self.get_env()) @require_torch_fp16 @run_first def test_clm_from_config_zero3_fp16(self): # this test exercises AutoModel.from_config(config) - to ensure zero.Init is called data_dir = self.tests_dir / "fixtures" output_dir = self.get_auto_remove_tmp_dir() args = f""" --model_type gpt2 --tokenizer_name {GPT2_TINY} --train_file {data_dir}/sample_text.txt --validation_file {data_dir}/sample_text.txt --output_dir {output_dir} --overwrite_output_dir --do_train --max_train_samples 4 --per_device_train_batch_size 2 --num_train_epochs 1 --warmup_steps 8 --block_size 8 --fp16 --report_to none """.split() ds_args = f"--deepspeed {self.test_file_dir_str}/ds_config_zero3.json".split() script = [f"{self.examples_dir_str}/pytorch/language-modeling/run_clm.py"] launcher = get_launcher(distributed=True) cmd = launcher + script + args + ds_args # keep for quick debug # print(" ".join([f"\nPYTHONPATH={self.src_dir_str}"] +cmd)); die with CaptureStderr() as cs: execute_subprocess_async(cmd, env=self.get_env()) self.assertIn("Detected DeepSpeed ZeRO-3", cs.err)
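The non-Trainer tests in the file above all rely on the same ordering: an `HfDeepSpeedConfig` built from a ZeRO-3 config dict must exist (and stay alive) before `from_pretrained` is called, which is what makes `zero.Init()` kick in while the weights are loaded. A condensed sketch of that usage, assuming deepspeed is installed and a distributed environment is set up the way the tests do with `mockenv_context`:

from transformers import AutoModel
from transformers.integrations.deepspeed import (
    HfDeepSpeedConfig,
    is_deepspeed_zero3_enabled,
    unset_hf_deepspeed_config,
)

ds_config = {"train_batch_size": 1, "zero_optimization": {"stage": 3}}

# must be constructed before from_pretrained so ZeRO-3 is detected at load time
dschf = HfDeepSpeedConfig(ds_config)
assert dschf.is_zero3() and is_deepspeed_zero3_enabled()

# logs "Detected DeepSpeed ZeRO-3" and loads the weights through zero.Init()
model = AutoModel.from_pretrained("sshleifer/tiny-gpt2")

# drop the global config afterwards, exactly like the tests' tearDown, so later code
# does not accidentally run with ZeRO-3 semantics
unset_hf_deepspeed_config()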
transformers/tests/deepspeed/test_deepspeed.py/0
{ "file_path": "transformers/tests/deepspeed/test_deepspeed.py", "repo_id": "transformers", "token_count": 27406 }
568
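As a follow-up to the `test_arange_bf16` note in the file above: the downcast-safe recipe is to build the position range as `int64` and cast to float afterwards, instead of asking `torch.arange` for a float dtype directly, because DeepSpeed's 16-bit init can silently downcast float buffers created at construction time. A standalone sketch of the two variants (outside DeepSpeed both produce identical values; the sizes are arbitrary):

import torch

num_pos, dim = 4, 8

# safe: integer arange first, float math afterwards; 16-bit init hooks leave int tensors alone
inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, dtype=torch.int64) / dim))
pos_safe = torch.arange(num_pos, dtype=torch.int64).to(inv_freq.dtype)

# fragile: a float arange created at init time can be downcast to bf16/fp16 under DeepSpeed
# (see https://github.com/huggingface/transformers/issues/28685)
pos_fragile = torch.arange(num_pos, dtype=torch.float32)

sinusoid = torch.einsum("i,j->ij", pos_safe, inv_freq)
embeddings = torch.cat((torch.sin(sinusoid), torch.cos(sinusoid)), dim=1)  # (num_pos, dim)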
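Likewise, the `test_hf_scheduler_hf_optimizer` / `test_ds_scheduler_hf_optimizer` / `test_hf_scheduler_ds_optimizer` trio above selects who owns the optimizer and scheduler purely by which sections are left in the DeepSpeed config dict: deleting `"optimizer"` or `"scheduler"` hands that piece back to the HF Trainer defaults. A small helper sketching that pruning pattern (the helper itself is illustrative, not a transformers API):

from copy import deepcopy

def select_combo(ds_config, hf_optimizer=False, hf_scheduler=False):
    """Return a DeepSpeed config where the HF Trainer supplies the optimizer and/or scheduler.

    Dropping a section is how the tests above force the HF Trainer defaults; keeping it
    leaves that component under DeepSpeed's control.
    """
    config = deepcopy(ds_config)
    if hf_optimizer:
        config.pop("optimizer", None)
    if hf_scheduler:
        config.pop("scheduler", None)
    return config

# e.g. combo 4 from the comments above (HF scheduler + DS optimizer):
# ds_config = select_combo(load_json("ds_config_zero2.json"), hf_scheduler=True)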
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import os import subprocess import unittest from copy import deepcopy from functools import partial from parameterized import parameterized import tests.trainer.test_trainer from tests.trainer.test_trainer import TrainerIntegrationCommon # noqa from transformers import is_torch_available from transformers.testing_utils import ( TestCasePlus, backend_device_count, execute_subprocess_async, mockenv_context, require_accelerate, require_fsdp, require_torch_accelerator, require_torch_multi_accelerator, run_first, slow, torch_device, ) from transformers.trainer_callback import TrainerState from transformers.trainer_utils import FSDPOption, set_seed from transformers.utils import ( is_accelerate_available, is_torch_bf16_available_on_device, is_torch_fp16_available_on_device, ) if is_torch_available(): from transformers.trainer import FSDP_MODEL_NAME # default torch.distributed port DEFAULT_MASTER_PORT = "10999" dtypes = [] if is_torch_bf16_available_on_device(torch_device): dtypes += ["bf16"] if is_torch_fp16_available_on_device(torch_device): dtypes += ["fp16"] sharding_strategies = ["full_shard", "shard_grad_op"] state_dict_types = ["FULL_STATE_DICT", "SHARDED_STATE_DICT"] params = list(itertools.product(sharding_strategies, dtypes)) set_seed(42) def get_master_port(real_launcher=False): """ When using a single gpu launcher emulation (i.e. not deepspeed or python -m torch.distributed) the issue is that once the port is tied it can't be used anywhere else outside of this process, since torch.dist doesn't free the port until the process exits. Therefore for the sake of being able to run both emulated launcher and normal launcher tests we need 2 distinct ports. This function will give the right port in the right context. For real launcher it'll give the base port, for emulated launcher it'll give the base port + 1. In both cases a string is returned. 
Args: `real_launcher`: whether a real launcher is going to be used, or the emulated one """ master_port_base = os.environ.get("DS_TEST_PORT", DEFAULT_MASTER_PORT) if not real_launcher: master_port_base = str(int(master_port_base) + 1) return master_port_base if is_torch_available(): from tests.trainer.test_trainer import ( # noqa RegressionModelConfig, RegressionPreTrainedModel, ) # hack to restore original logging level pre #21700 get_regression_trainer = partial(tests.trainer.test_trainer.get_regression_trainer, log_level="info") require_fsdp_version = require_fsdp if is_accelerate_available(): from accelerate.utils.constants import ( FSDP_PYTORCH_VERSION, FSDP_SHARDING_STRATEGY, ) require_fsdp_version = partial(require_fsdp, min_version=FSDP_PYTORCH_VERSION) FSDP2_ACCELERATE_VERSION = "1.6.0" require_accelerate_fsdp2 = partial(require_accelerate, min_version=FSDP2_ACCELERATE_VERSION) require_fsdp_v2_version = require_fsdp if is_accelerate_available(min_version=FSDP2_ACCELERATE_VERSION): from accelerate.utils.constants import FSDP2_PYTORCH_VERSION require_fsdp_v2_version = partial(require_fsdp, min_version=FSDP2_PYTORCH_VERSION) def get_launcher(distributed=False, use_accelerate=False): # 1. explicitly set --num_nodes=1 just in case these tests end up run on a multi-node setup # - it won't be able to handle that # 2. for now testing with just 2 gpus max (since some quality tests may give different # results with mode gpus because we use very little data) num_gpus = min(2, backend_device_count(torch_device)) if distributed else 1 master_port = get_master_port(real_launcher=True) if use_accelerate: return f"""accelerate launch --num_processes {num_gpus} --main_process_port {master_port} --use_fsdp --fsdp_auto_wrap_policy TRANSFORMER_BASED_WRAP --fsdp_state_dict_type SHARDED_STATE_DICT --fsdp_transformer_layer_cls_to_wrap BertLayer""".split() return f"torchrun --nnodes 1 --nproc-per-node {num_gpus} --master-port {master_port}".split() def _parameterized_custom_name_func(func, param_num, param): # customize the test name generator function as we want both params to appear in the sub-test # name, as by default it shows only the first param param_based_name = parameterized.to_safe_name("_".join(str(x) for x in param.args)) return f"{func.__name__}_{param_based_name}" @require_accelerate @require_torch_accelerator @require_fsdp_version class TrainerIntegrationFSDP(TestCasePlus, TrainerIntegrationCommon): def setUp(self): super().setUp() master_port = get_master_port(real_launcher=False) self.dist_env_1_gpu = { "MASTER_ADDR": "localhost", "MASTER_PORT": master_port, "RANK": "0", "LOCAL_RANK": "0", "WORLD_SIZE": "1", } self.accelerate_fsdp_config = { "fsdp_activation_checkpointing": False, "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP", "fsdp_backward_prefetch": "BACKWARD_PRE", "fsdp_cpu_ram_efficient_loading": True, "fsdp_forward_prefetch": False, "fsdp_offload_params": False, "fsdp_reshard_after_forward": "FULL_SHARD", "fsdp_state_dict_type": "FULL_STATE_DICT", "fsdp_sync_module_states": True, "fsdp_transformer_layer_cls_to_wrap": "LlamaDecoderLayer", "fsdp_use_orig_params": True, "fsdp_version": 1, } self.fsdp_config = { "backward_prefetch": "BACKWARD_PRE", "forward_prefetch": "false", "limit_all_gathers": "false", "use_orig_params": "true", "sync_module_states": "true", "cpu_ram_efficient_loading": "true", "activation_checkpointing": "false", "min_num_params": 1, } def tearDown(self): super().tearDown() @parameterized.expand(params, name_func=_parameterized_custom_name_func) def 
test_accelerate_fsdp_config(self, sharding_strategy, dtype): output_dir = self.get_auto_remove_tmp_dir() kwargs = { "output_dir": output_dir, "train_len": 128, "save_steps": 5, "learning_rate": 0.1, "fsdp": f"{sharding_strategy} offload auto_wrap", "fsdp_config": self.accelerate_fsdp_config, } kwargs[dtype] = True with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer(**kwargs) self.assertEqual(trainer.args.fsdp[0], sharding_strategy) self.assertEqual(trainer.args.fsdp[1], FSDPOption.OFFLOAD) self.assertEqual(trainer.args.fsdp[2], FSDPOption.AUTO_WRAP) for k, v in trainer.args.fsdp_config.items(): self.assertTrue(k in self.accelerate_fsdp_config) self.assertEqual(v, self.accelerate_fsdp_config[k]) self.assertEqual(os.environ.get("ACCELERATE_USE_FSDP", "false"), "true") @parameterized.expand(params, name_func=_parameterized_custom_name_func) def test_fsdp_config(self, sharding_strategy, dtype): output_dir = self.get_auto_remove_tmp_dir() kwargs = { "output_dir": output_dir, "train_len": 128, "save_steps": 5, "learning_rate": 0.1, "fsdp": f"{sharding_strategy} offload auto_wrap", "fsdp_config": self.fsdp_config, } kwargs[dtype] = True with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer(**kwargs) self.assertEqual(trainer.args.fsdp[0], sharding_strategy) self.assertEqual(trainer.args.fsdp[1], FSDPOption.OFFLOAD) self.assertEqual(trainer.args.fsdp[2], FSDPOption.AUTO_WRAP) for k, v in trainer.args.fsdp_config.items(): self.assertEqual(v, self.fsdp_config[k]) self.assertEqual(os.environ.get("ACCELERATE_USE_FSDP", "false"), "true") @parameterized.expand(params, name_func=_parameterized_custom_name_func) def test_fsdp_config_transformers_auto_wrap(self, sharding_strategy, dtype): output_dir = self.get_auto_remove_tmp_dir() fsdp_config = deepcopy(self.fsdp_config) del fsdp_config["min_num_params"] fsdp_config["transformer_layer_cls_to_wrap"] = "BertLayer" kwargs = { "output_dir": output_dir, "train_len": 128, "save_steps": 5, "learning_rate": 0.1, "fsdp": f"{sharding_strategy} offload auto_wrap", "fsdp_config": fsdp_config, } kwargs[dtype] = True prefix = "FSDP_" with mockenv_context(**self.dist_env_1_gpu): trainer = get_regression_trainer(**kwargs) self.assertEqual(trainer.args.fsdp[0], sharding_strategy) self.assertEqual(trainer.args.fsdp[1], FSDPOption.OFFLOAD) self.assertEqual(trainer.args.fsdp[2], FSDPOption.AUTO_WRAP) fsdp_sharding_strategy = str(FSDP_SHARDING_STRATEGY.index(sharding_strategy.upper()) + 1) self.assertEqual(os.environ[f"{prefix}SHARDING_STRATEGY"], fsdp_sharding_strategy) self.assertEqual(os.environ[f"{prefix}OFFLOAD_PARAMS"], "true") self.assertEqual(os.environ[f"{prefix}AUTO_WRAP_POLICY"], "TRANSFORMER_BASED_WRAP") self.assertEqual( os.environ[f"{prefix}TRANSFORMER_CLS_TO_WRAP"], ",".join(fsdp_config["transformer_layer_cls_to_wrap"]) ) self.assertEqual(os.environ[f"{prefix}BACKWARD_PREFETCH"], fsdp_config["backward_prefetch"]) self.assertEqual(os.environ[f"{prefix}FORWARD_PREFETCH"], fsdp_config["forward_prefetch"]) self.assertEqual(os.environ[f"{prefix}USE_ORIG_PARAMS"], fsdp_config["use_orig_params"]) self.assertEqual(os.environ[f"{prefix}SYNC_MODULE_STATES"], fsdp_config["sync_module_states"]) self.assertEqual( os.environ[f"{prefix}CPU_RAM_EFFICIENT_LOADING"], fsdp_config["cpu_ram_efficient_loading"] ) self.assertEqual(os.environ.get("ACCELERATE_USE_FSDP", "false"), "true") @parameterized.expand(params, name_func=_parameterized_custom_name_func) @require_torch_multi_accelerator @run_first @slow def test_basic_run(self, 
sharding_strategy, dtype): launcher = get_launcher(distributed=True, use_accelerate=False) output_dir = self.get_auto_remove_tmp_dir() args = self.get_base_args(output_dir, 1, 50).split() + [f"--{dtype}"] fsdp_args = ["--fsdp", f"{sharding_strategy} auto_wrap", "--fsdp_transformer_layer_cls_to_wrap", "BertLayer"] script = [f"{self.examples_dir_str}/pytorch/text-classification/run_glue.py"] cmd = launcher + script + args + fsdp_args execute_subprocess_async(cmd, env=self.get_env()) @parameterized.expand(params, name_func=_parameterized_custom_name_func) @require_torch_multi_accelerator @run_first @slow def test_basic_run_with_gradient_accumulation(self, sharding_strategy, dtype): launcher = get_launcher(distributed=True, use_accelerate=False) output_dir = self.get_auto_remove_tmp_dir() args = self.get_base_args(output_dir, 1, 50).split() + [f"--{dtype}", "--gradient_accumulation_steps", "2"] fsdp_args = ["--fsdp", f"{sharding_strategy} auto_wrap", "--fsdp_transformer_layer_cls_to_wrap", "BertLayer"] script = [f"{self.examples_dir_str}/pytorch/text-classification/run_glue.py"] cmd = launcher + script + args + fsdp_args execute_subprocess_async(cmd, env=self.get_env()) @parameterized.expand(dtypes) @require_torch_multi_accelerator @run_first @slow def test_basic_run_with_cpu_offload(self, dtype): launcher = get_launcher(distributed=True, use_accelerate=False) output_dir = self.get_auto_remove_tmp_dir() args = self.get_base_args(output_dir, 1, 50).split() + [f"--{dtype}", "--max_steps", "10"] fsdp_args = ["--fsdp", "full_shard auto_wrap offload", "--fsdp_transformer_layer_cls_to_wrap", "BertLayer"] script = [f"{self.examples_dir_str}/pytorch/text-classification/run_glue.py"] cmd = launcher + script + args + fsdp_args execute_subprocess_async(cmd, env=self.get_env()) @parameterized.expand(state_dict_types, name_func=_parameterized_custom_name_func) @require_torch_multi_accelerator @run_first @slow def test_training_and_can_resume_normally(self, state_dict_type): output_dir = self.get_auto_remove_tmp_dir("./xxx", after=False) sharding_strategy = "full_shard" use_accelerate = state_dict_type == "SHARDED_STATE_DICT" launcher = get_launcher(True, use_accelerate=use_accelerate) args = self.get_base_args(output_dir, 2, 25).split() script = [f"{self.examples_dir_str}/pytorch/text-classification/run_glue.py"] logs = self.run_cmd_and_get_logs(use_accelerate, sharding_strategy, launcher, script, args, output_dir) # resume from ckpt checkpoint = os.path.join(output_dir, "checkpoint-115") resume_args = args + f"--resume_from_checkpoint {checkpoint}".split() is_fsdp_ckpt = os.path.isdir(checkpoint) and ( # this checks the FSDP state dict when `SHARDED_STATE_DICT` is used any( FSDP_MODEL_NAME in folder_name for folder_name in os.listdir(checkpoint) if os.path.isdir(os.path.join(checkpoint, folder_name)) ) # this checks the FSDP state dict when `FULL_STATE_DICT` is used or os.path.isfile(os.path.join(checkpoint, f"{FSDP_MODEL_NAME}.bin")) ) self.assertTrue(is_fsdp_ckpt) logs_resume = self.run_cmd_and_get_logs( use_accelerate, sharding_strategy, launcher, script, resume_args, output_dir ) for log, log1 in zip(logs, logs_resume): if "learning_rate" in log: self.assertAlmostEqual(log["learning_rate"], log1["learning_rate"], delta=1e-5) @require_torch_multi_accelerator @run_first @slow def test_fsdp_cpu_offloading(self): # TODO: This file is missing and should be added or the test should be removed if not os.path.exists("utils/testing_scripts/fsdp_cpu_offloading.py"): raise unittest.SkipTest("FSDP CPU offloading 
script not found!") try: subprocess.run( "accelerate launch utils/testing_scripts/fsdp_cpu_offloading.py --config utils/testing_scripts/dummy_fsdp_config.yml", shell=True, check=True, ) except: # noqa raise AssertionError("CPU offloading failed with FSDP!") @require_torch_multi_accelerator @run_first @slow @require_fsdp_v2_version @require_accelerate_fsdp2 def test_accelerate_fsdp2_integration(self): output_dir = self.get_auto_remove_tmp_dir("./xxx", after=False) sharding_strategy = "full_shard" use_accelerate = True num_gpus = min(2, backend_device_count(torch_device)) master_port = get_master_port(real_launcher=True) launcher = f"""accelerate launch --num_processes {num_gpus} --main_process_port {master_port} --use_fsdp --fsdp_version 2 --fsdp_auto_wrap_policy TRANSFORMER_BASED_WRAP --fsdp_state_dict_type SHARDED_STATE_DICT --fsdp_transformer_layer_cls_to_wrap BertLayer""".split() args = self.get_base_args(output_dir, 2, 25).split() script = [f"{self.examples_dir_str}/pytorch/text-classification/run_glue.py"] logs = self.run_cmd_and_get_logs(use_accelerate, sharding_strategy, launcher, script, args, output_dir) # resume from ckpt checkpoint = os.path.join(output_dir, "checkpoint-115") resume_args = args + f"--resume_from_checkpoint {checkpoint}".split() is_fsdp_ckpt = os.path.isdir(checkpoint) and ( # this checks the FSDP state dict when `SHARDED_STATE_DICT` is used any( FSDP_MODEL_NAME in folder_name for folder_name in os.listdir(checkpoint) if os.path.isdir(os.path.join(checkpoint, folder_name)) ) # this checks the FSDP state dict when `FULL_STATE_DICT` is used or os.path.isfile(os.path.join(checkpoint, f"{FSDP_MODEL_NAME}.bin")) ) self.assertTrue(is_fsdp_ckpt) logs_resume = self.run_cmd_and_get_logs( use_accelerate, sharding_strategy, launcher, script, resume_args, output_dir ) for log, log1 in zip(logs, logs_resume): if "learning_rate" in log: self.assertAlmostEqual(log["learning_rate"], log1["learning_rate"], delta=1e-5) @require_torch_multi_accelerator @run_first @slow @require_fsdp @require_fsdp_v2_version @require_accelerate_fsdp2 def test_fsdp2_cpu_offloading(self): # TODO: This file is missing and should be added or the test should be removed if not os.path.exists("utils/testing_scripts/fsdp_cpu_offloading.py"): raise unittest.SkipTest("FSDP 2 CPU offloading script not found!") try: subprocess.run( "accelerate launch --fsdp_version 2 utils/testing_scripts/fsdp_cpu_offloading.py --config utils/testing_scripts/dummy_fsdp_config.yml", shell=True, check=True, ) except: # noqa raise AssertionError("CPU offloading failed with FSDP!") def run_cmd_and_get_logs(self, use_accelerate, sharding_strategy, launcher, script, args, output_dir): if not use_accelerate: fsdp_args = [ "--fsdp", f"{sharding_strategy} auto_wrap", "--fsdp_transformer_layer_cls_to_wrap", "BertLayer", ] cmd = launcher + script + args + fsdp_args else: fsdp_config = f""" --fsdp_sharding_strategy {FSDP_SHARDING_STRATEGY.index(sharding_strategy.upper()) + 1} """.split() cmd = launcher + fsdp_config + script + args # keep for quick debug # print(" ".join([f"\nPYTHONPATH={self.src_dir_str}"] +cmd)); die execute_subprocess_async(cmd, env=self.get_env()) logs = TrainerState.load_from_json(os.path.join(output_dir, "trainer_state.json")).log_history return logs def get_base_args(self, output_dir, num_epochs, logging_steps): return f""" --model_name_or_path google-bert/bert-base-cased --task_name mrpc --output_dir {output_dir} --overwrite_output_dir --do_train --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 
5e-5 --num_train_epochs {num_epochs} --lr_scheduler_type cosine --logging_steps {logging_steps} --save_strategy epoch --do_eval --eval_strategy epoch --report_to none """
transformers/tests/fsdp/test_fsdp.py/0
{ "file_path": "transformers/tests/fsdp/test_fsdp.py", "repo_id": "transformers", "token_count": 9052 }
569
# Copyright 2022 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import itertools import os import random import tempfile import unittest import numpy as np from transformers import ASTFeatureExtractor from transformers.testing_utils import check_json_file_has_correct_format, require_torch, require_torchaudio from transformers.utils.import_utils import is_torch_available from ...test_sequence_feature_extraction_common import SequenceFeatureExtractionTestMixin global_rng = random.Random() if is_torch_available(): import torch # Copied from tests.models.whisper.test_feature_extraction_whisper.floats_list def floats_list(shape, scale=1.0, rng=None, name=None): """Creates a random float32 tensor""" if rng is None: rng = global_rng values = [] for batch_idx in range(shape[0]): values.append([]) for _ in range(shape[1]): values[-1].append(rng.random() * scale) return values class ASTFeatureExtractionTester: def __init__( self, parent, batch_size=7, min_seq_length=400, max_seq_length=2000, feature_size=1, padding_value=0.0, sampling_rate=16000, return_attention_mask=True, do_normalize=True, ): self.parent = parent self.batch_size = batch_size self.min_seq_length = min_seq_length self.max_seq_length = max_seq_length self.seq_length_diff = (self.max_seq_length - self.min_seq_length) // (self.batch_size - 1) self.feature_size = feature_size self.padding_value = padding_value self.sampling_rate = sampling_rate self.return_attention_mask = return_attention_mask self.do_normalize = do_normalize def prepare_feat_extract_dict(self): return { "feature_size": self.feature_size, "padding_value": self.padding_value, "sampling_rate": self.sampling_rate, "return_attention_mask": self.return_attention_mask, "do_normalize": self.do_normalize, } def prepare_inputs_for_common(self, equal_length=False, numpify=False): def _flatten(list_of_lists): return list(itertools.chain(*list_of_lists)) if equal_length: speech_inputs = floats_list((self.batch_size, self.max_seq_length)) else: # make sure that inputs increase in size speech_inputs = [ _flatten(floats_list((x, self.feature_size))) for x in range(self.min_seq_length, self.max_seq_length, self.seq_length_diff) ] if numpify: speech_inputs = [np.asarray(x) for x in speech_inputs] return speech_inputs @require_torch @require_torchaudio class ASTFeatureExtractionTest(SequenceFeatureExtractionTestMixin, unittest.TestCase): feature_extraction_class = ASTFeatureExtractor def setUp(self): self.feat_extract_tester = ASTFeatureExtractionTester(self) def test_call(self): # Tests that all call wrap to encode_plus and batch_encode_plus feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) # create three inputs of length 800, 1000, and 1200 speech_inputs = [floats_list((1, x))[0] for x in range(800, 1400, 200)] np_speech_inputs = [np.asarray(speech_input) for speech_input in speech_inputs] # Test not batched input encoded_sequences_1 = feat_extract(speech_inputs[0], return_tensors="np").input_values encoded_sequences_2 = 
feat_extract(np_speech_inputs[0], return_tensors="np").input_values self.assertTrue(np.allclose(encoded_sequences_1, encoded_sequences_2, atol=1e-3)) # Test batched encoded_sequences_1 = feat_extract(speech_inputs, padding=True, return_tensors="np").input_values encoded_sequences_2 = feat_extract(np_speech_inputs, padding=True, return_tensors="np").input_values for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2): self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3)) # Test 2-D numpy arrays are batched. speech_inputs = [floats_list((1, x))[0] for x in (800, 800, 800)] np_speech_inputs = np.asarray(speech_inputs) encoded_sequences_1 = feat_extract(speech_inputs, return_tensors="np").input_values encoded_sequences_2 = feat_extract(np_speech_inputs, return_tensors="np").input_values for enc_seq_1, enc_seq_2 in zip(encoded_sequences_1, encoded_sequences_2): self.assertTrue(np.allclose(enc_seq_1, enc_seq_2, atol=1e-3)) @require_torch def test_double_precision_pad(self): import torch feature_extractor = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) np_speech_inputs = np.random.rand(100).astype(np.float64) py_speech_inputs = np_speech_inputs.tolist() for inputs in [py_speech_inputs, np_speech_inputs]: np_processed = feature_extractor.pad([{"input_values": inputs}], return_tensors="np") self.assertTrue(np_processed.input_values.dtype == np.float32) pt_processed = feature_extractor.pad([{"input_values": inputs}], return_tensors="pt") self.assertTrue(pt_processed.input_values.dtype == torch.float32) def _load_datasamples(self, num_samples): from datasets import load_dataset ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # automatic decoding with librispeech speech_samples = ds.sort("id")[:num_samples]["audio"] return [x["array"] for x in speech_samples] @require_torch def test_integration(self): # fmt: off EXPECTED_INPUT_VALUES = torch.tensor( [-0.9894, -1.2776, -0.9066, -1.2776, -0.9349, -1.2609, -1.0386, -1.2776, -1.1561, -1.2776, -1.2052, -1.2723, -1.2190, -1.2132, -1.2776, -1.1133, -1.1953, -1.1343, -1.1584, -1.2203, -1.1770, -1.2474, -1.2381, -1.1936, -0.9270, -0.8317, -0.8049, -0.7706, -0.7565, -0.7869] ) # fmt: on input_speech = self._load_datasamples(1) feature_extractor = ASTFeatureExtractor() input_values = feature_extractor(input_speech, return_tensors="pt").input_values self.assertEqual(input_values.shape, (1, 1024, 128)) torch.testing.assert_close(input_values[0, 0, :30], EXPECTED_INPUT_VALUES, rtol=1e-4, atol=1e-4) def test_feat_extract_from_and_save_pretrained(self): feat_extract_first = self.feature_extraction_class(**self.feat_extract_dict) with tempfile.TemporaryDirectory() as tmpdirname: saved_file = feat_extract_first.save_pretrained(tmpdirname)[0] check_json_file_has_correct_format(saved_file) feat_extract_second = self.feature_extraction_class.from_pretrained(tmpdirname) dict_first = feat_extract_first.to_dict() dict_second = feat_extract_second.to_dict() self.assertDictEqual(dict_first, dict_second) def test_feat_extract_to_json_file(self): feat_extract_first = self.feature_extraction_class(**self.feat_extract_dict) with tempfile.TemporaryDirectory() as tmpdirname: json_file_path = os.path.join(tmpdirname, "feat_extract.json") feat_extract_first.to_json_file(json_file_path) feat_extract_second = self.feature_extraction_class.from_json_file(json_file_path) dict_first = feat_extract_first.to_dict() dict_second = feat_extract_second.to_dict() 
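# the extractor reloaded from the JSON file should expose exactly the same configuration dict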
self.assertEqual(dict_first, dict_second) # exact same tests as before, except that we simulate that torchaudio is not available @require_torch @unittest.mock.patch( "transformers.models.audio_spectrogram_transformer.feature_extraction_audio_spectrogram_transformer.is_speech_available", lambda: False, ) class ASTFeatureExtractionWithoutTorchaudioTest(ASTFeatureExtractionTest): def test_using_audio_utils(self): # Tests that it uses audio_utils instead of torchaudio feat_extract = self.feature_extraction_class(**self.feat_extract_tester.prepare_feat_extract_dict()) self.assertTrue(hasattr(feat_extract, "window")) self.assertTrue(hasattr(feat_extract, "mel_filters")) from transformers.models.audio_spectrogram_transformer.feature_extraction_audio_spectrogram_transformer import ( is_speech_available, ) self.assertFalse(is_speech_available())
transformers/tests/models/audio_spectrogram_transformer/test_feature_extraction_audio_spectrogram_transformer.py/0
{ "file_path": "transformers/tests/models/audio_spectrogram_transformer/test_feature_extraction_audio_spectrogram_transformer.py", "repo_id": "transformers", "token_count": 3852 }
570
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Bamba model.""" import inspect import tempfile import unittest import pytest from pytest import mark from transformers import ( AutoTokenizer, BambaConfig, DataCollatorWithFlattening, is_torch_available, ) from transformers.testing_utils import ( DeviceProperties, Expectations, get_device_properties, require_deterministic_for_xpu, require_flash_attn, require_torch, require_torch_accelerator, require_torch_gpu, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, _config_zero_init, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( BambaForCausalLM, BambaModel, ) from transformers.models.bamba.modeling_bamba import ( HybridMambaAttentionDynamicCache, ) class BambaModelTester: config_class = BambaConfig if is_torch_available(): model_class = BambaModel for_causal_lm_class = BambaForCausalLM def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=4, num_attention_heads=4, num_key_value_heads=2, intermediate_size=64, hidden_act="silu", attention_dropout=0.0, attn_layer_indices=None, attn_rotary_emb=8, max_position_embeddings=512, type_vocab_size=16, initializer_range=0.02, num_labels=3, pad_token_id=0, mamba_n_groups=1, mamba_n_heads=16, mamba_d_state=16, mamba_d_conv=4, mamba_expand=2, mamba_chunk_size=16, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.attention_dropout = attention_dropout self.attn_layer_indices = attn_layer_indices self.attn_rotary_emb = attn_rotary_emb self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.initializer_range = initializer_range self.num_labels = num_labels self.pad_token_id = pad_token_id self.scope = scope self.mamba_n_groups = mamba_n_groups self.mamba_n_heads = mamba_n_heads self.mamba_d_state = mamba_d_state self.mamba_d_conv = mamba_d_conv self.mamba_expand = mamba_expand self.mamba_chunk_size = mamba_chunk_size def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = torch.tril(torch.ones_like(input_ids).to(torch_device)) token_labels = None if self.use_labels: token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) 
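# ensure num_hidden_layers and attn_layer_indices are consistent before the config is built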
self._update_layer_configs() config = self.get_config() return config, input_ids, input_mask, token_labels def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, input_mask, token_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": input_mask} return config, inputs_dict def _update_layer_configs(self): """Configures hidden layers and attn layer indices if they are not set.""" # Fix for SDPA tests, force at least 4 layers if self.num_hidden_layers < 4: self.num_hidden_layers = 4 if self.attn_layer_indices is None: d = [x for x in range(2, self.num_hidden_layers) if self.num_hidden_layers % x == 0] if len(d) == 0: raise ValueError("num_hidden_layers is prime, cannot automatically set attn_layer_indices.") d = d[-1] # get the largest divisor self.attn_layer_indices = [x + 1 for x in range(0, self.num_hidden_layers, d)] def get_config(self, **kwargs): return self.config_class( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, num_key_value_heads=self.num_key_value_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, attention_dropout=self.attention_dropout, attn_layer_indices=self.attn_layer_indices, attn_rotary_emb=self.attn_rotary_emb, max_position_embeddings=self.max_position_embeddings, initializer_range=self.initializer_range, pad_token_id=self.pad_token_id, mamba_n_groups=self.mamba_n_groups, mamba_n_heads=self.mamba_n_heads, mamba_d_state=self.mamba_d_state, mamba_d_conv=self.mamba_d_conv, mamba_expand=self.mamba_expand, mamba_chunk_size=self.mamba_chunk_size, **kwargs, ) def create_and_check_model( self, config, input_ids, input_mask, token_labels, ): model = self.model_class(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_for_causal_lm( self, config, input_ids, input_mask, token_labels, ): model = self.for_causal_lm_class(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, labels=token_labels) result = model(input_ids, attention_mask=input_mask) result = model(input_ids, labels=token_labels) result = model(input_ids) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_decoder_model_past_large_inputs( self, config, input_ids, input_mask, token_labels, ): # config.is_decoder = True # config.add_cross_attention = True model = self.for_causal_lm_class(config=config) model.to(torch_device) model.eval() # first forward pass # Attention: Jamba needs the cache to be initialized to return a cache! 
past_key_values = HybridMambaAttentionDynamicCache( config, input_ids.shape[0], model.dtype, device=model.device ) outputs = model( input_ids, attention_mask=input_mask, past_key_values=past_key_values, use_cache=True, ) past_key_values = outputs.past_key_values # create hypothetical multiple next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 3), config.vocab_size) next_mask = ids_tensor((self.batch_size, 3), vocab_size=2) # append to next input_ids and next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_attention_mask = torch.cat([input_mask, next_mask], dim=-1) output_from_no_past = model( next_input_ids, attention_mask=next_attention_mask, output_hidden_states=True, )["hidden_states"][0] output_from_past = model( next_tokens, attention_mask=next_attention_mask, past_key_values=past_key_values, output_hidden_states=True, cache_position=torch.arange( input_ids.shape[1], input_ids.shape[1] + next_tokens.shape[1], device=model.device ), )["hidden_states"][0] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach() output_from_past_slice = output_from_past[:, :, random_slice_idx].detach() self.parent.assertTrue(output_from_past_slice.shape[1] == next_tokens.shape[1]) # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) @require_torch class BambaModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): model_tester_class = BambaModelTester all_model_classes = (BambaModel, BambaForCausalLM) if is_torch_available() else () pipeline_model_mapping = ( { "feature-extraction": BambaModel, "text-generation": BambaForCausalLM, } if is_torch_available() else {} ) test_headmasking = False test_pruning = False fx_compatible = False # Need to use `0.8` instead of `0.9` for `test_cpu_offload` # This is because we are hitting edge cases with the causal_mask buffer model_split_percents = [0.5, 0.7, 0.8] def _check_past_key_values_for_generate(self, batch_size, decoder_past_key_values, cache_length, config): self.assertIsInstance(decoder_past_key_values, HybridMambaAttentionDynamicCache) # (batch, head, seq_length, head_features) expected_shape = ( batch_size, config.num_key_value_heads if hasattr(config, "num_key_value_heads") else config.num_attention_heads, cache_length, config.hidden_size // config.num_attention_heads, ) self.assertListEqual( [key_tensor.shape for key_tensor in decoder_past_key_values.key_cache], [expected_shape] * len(decoder_past_key_values.key_cache), ) self.assertListEqual( [value_cache.shape for value_cache in decoder_past_key_values.value_cache], [expected_shape] * len(decoder_past_key_values.value_cache), ) def setUp(self): self.model_tester = self.model_tester_class(self) self.config_tester = ConfigTester(self, config_class=self.model_tester.config_class, hidden_size=64) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_casual_lm(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_causal_lm(*config_and_inputs) def test_decoder_model_past_with_large_inputs(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() 
self.model_tester.create_and_check_decoder_model_past_large_inputs(*config_and_inputs) def test_initialization(self): r""" Overriding the test_initialization test as the A_log and D params of the Bamba mixer are initialized differently """ config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): if param.requires_grad: if "A_log" in name: A = torch.arange(1, config.mamba_n_heads + 1, dtype=torch.float32) torch.testing.assert_close(param.data, torch.log(A), rtol=1e-5, atol=1e-5) elif "D" in name: D = torch.ones(config.mamba_n_heads, dtype=torch.float32) torch.testing.assert_close(param.data, D, rtol=1e-5, atol=1e-5) else: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) def test_mismatched_shapes_have_properly_initialized_weights(self): r""" Overriding the test_mismatched_shapes_have_properly_initialized_weights test because A_log and D params of the Bamba mixer are initialized differently and we tested that in test_initialization """ self.skipTest(reason="Cumbersome and redundant for Bamba") def test_attention_outputs(self): r""" Overriding the test_attention_outputs test as the Bamba model outputs attention only for its attention layers """ config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = getattr(self.model_tester, "seq_length", None) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", seq_len) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) expected_num_attentions = self.model_tester.num_hidden_layers - len(self.model_tester.attn_layer_indices) for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class._from_config(config, attn_implementation="eager") config = model.config model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), expected_num_attentions) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), expected_num_attentions) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, encoder_seq_length, encoder_key_length], ) out_len = len(outputs) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) added_hidden_states = 1 self.assertEqual(out_len + added_hidden_states, len(outputs)) self_attentions = outputs.attentions self.assertEqual(len(self_attentions), expected_num_attentions) self.assertListEqual( list(self_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, encoder_seq_length, encoder_key_length], ) def test_batching_equivalence(self): # need to 
disable the tril input mask orig = self.model_tester.use_input_mask self.model_tester.use_input_mask = False super().test_batching_equivalence() self.model_tester.use_input_mask = orig # essentially the same test in test_utils, just adjustment for rtol for this model @pytest.mark.generate def test_left_padding_compatibility(self): # NOTE: left-padding results in small numerical differences. This is expected. # See https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535 # First, filter out models that don't support left padding # - The model must have generative capabilities if len(self.all_generative_model_classes) == 0: self.skipTest(reason="No generative architecture available for this model.") # - The model must support padding if not self.has_attentions: self.skipTest(reason="This model doesn't support padding.") # - The model must be a decoder-only architecture (encoder-based architectures use right-padding) decoder_only_classes = [] for model_class in self.all_generative_model_classes: config, _ = self.prepare_config_and_inputs_for_generate() if config.is_encoder_decoder: continue else: decoder_only_classes.append(model_class) if len(decoder_only_classes) == 0: self.skipTest(reason="No decoder-only architecture available for this model.") # - Decoder-only architectures derived from encoder-decoder models could support it in theory, but we haven't # added support for it yet. We skip these models for now. has_encoder_attributes = any( attr_name for attr_name in config.to_dict() if attr_name.startswith("encoder") and attr_name != "encoder_no_repeat_ngram_size" ) if has_encoder_attributes: self.skipTest( reason="The decoder-only derived from encoder-decoder models are not expected to support left-padding." ) # Then, test left-padding def _prepare_model_kwargs(input_ids, attention_mask, signature): model_kwargs = {"input_ids": input_ids, "attention_mask": attention_mask} if "position_ids" in signature: position_ids = torch.cumsum(attention_mask, dim=-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) model_kwargs["position_ids"] = position_ids if "cache_position" in signature: cache_position = torch.arange(input_ids.shape[-1], device=torch_device) model_kwargs["cache_position"] = cache_position return model_kwargs for model_class in decoder_only_classes: config, inputs_dict = self.prepare_config_and_inputs_for_generate() input_ids = inputs_dict["input_ids"] # - for left padding we absolutely need to use an all ones # attention mask, so we do not use the one in inputs_dict attention_mask = torch.ones_like(input_ids) model = model_class(config).to(torch_device).eval() signature = inspect.signature(model.forward).parameters.keys() # no cache as some models require special cache classes to be init outside forward model.generation_config.use_cache = False # Without padding model_kwargs = _prepare_model_kwargs(input_ids, attention_mask, signature) next_logits_wo_padding = model(**model_kwargs).logits[:, -1, :] # With left-padding (length 32) # can hardcode pad_token to be 0 as we'll do attn masking anyway pad_token_id = ( config.get_text_config().pad_token_id if config.get_text_config().pad_token_id is not None else 0 ) pad_size = (input_ids.shape[0], 32) padding = torch.ones(pad_size, dtype=input_ids.dtype, device=torch_device) * pad_token_id padded_input_ids = torch.cat((padding, input_ids), dim=1) padded_attention_mask = torch.cat((torch.zeros_like(padding), attention_mask), dim=1) model_kwargs = _prepare_model_kwargs(padded_input_ids, padded_attention_mask, 
signature) next_logits_with_padding = model(**model_kwargs).logits[:, -1, :] # They should result in very similar logits torch.testing.assert_close(next_logits_wo_padding, next_logits_with_padding, rtol=1e-5, atol=1e-5) @unittest.skip( "Bamba requires additionally specifying position_ids, seq_idx, and FlashAttentionKwargs for padding-free training." ) def test_flash_attention_2_padding_matches_padding_free_with_position_ids(self): pass @unittest.skip( "Bamba requires additionally specifying position_ids, seq_idx, and FlashAttentionKwargs for padding-free training." ) def test_flash_attention_2_padding_matches_padding_free_with_position_ids_and_fa_kwargs(self): pass @require_flash_attn @require_torch_gpu @mark.flash_attn_test @slow def test_flash_attention_2_padding_matches_padding_free_with_position_ids_seq_idx_and_fa_kwargs(self): if not self.has_attentions: self.skipTest(reason="Model architecture does not support attentions") max_new_tokens = 30 for model_class in self.all_generative_model_classes: if not model_class._supports_flash_attn: self.skipTest(f"{model_class.__name__} does not support Flash Attention 2") config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() if 0 not in inputs_dict.get("attention_mask", []) or "attention_mask" not in inputs_dict: self.skipTest("Model dummy inputs should contain padding in their attention mask") dummy_input = inputs_dict[model_class.main_input_name] if dummy_input.dtype in [torch.float32, torch.bfloat16]: dummy_input = dummy_input.to(torch.float16) # make sure that all models have enough positions for generation if hasattr(config, "max_position_embeddings"): config.max_position_embeddings = max_new_tokens + dummy_input.shape[1] + 1 model = model_class(config) if "position_ids" not in inspect.signature(model.forward).parameters: self.skipTest("Model does not support position_ids") with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) # ensure left padding, to adapt for some models if 0 in inputs_dict["attention_mask"][:, -1]: inputs_dict["attention_mask"] = inputs_dict["attention_mask"].flip(1) dummy_attention_mask = inputs_dict["attention_mask"] inputs_dict["input_ids"][~dummy_attention_mask.bool()] = config.get_text_config().pad_token_id # Ensure inputs_dict also has labels in it, as their presence/absence can induce # dtype conversions. This also lets us compare losses. labels = inputs_dict["input_ids"].clone() # Mask padding tokens labels[~dummy_attention_mask.bool()] = -100 # Also need to mask the first non-trivial token to match the padding-free batch. 
first_nonneg_idx = (labels >= 0).int().argmax(dim=1) labels[torch.arange(labels.size(0), device=labels.device), first_nonneg_idx] = -100 inputs_dict["labels"] = labels model = ( model_class.from_pretrained( tmpdirname, dtype=torch.float16, attn_implementation="flash_attention_2", ) .to(torch_device) .eval() ) # flatten features = [ {"input_ids": i[a.bool()].tolist()} for i, a in zip(inputs_dict["input_ids"], inputs_dict["attention_mask"]) ] # add position_ids + fa_kwargs + seq_idx data_collator = DataCollatorWithFlattening( return_tensors="pt", return_seq_idx=True, return_flash_attn_kwargs=True ) batch = data_collator(features) batch_accelerator = {k: t.to(torch_device) if torch.is_tensor(t) else t for k, t in batch.items()} res_padded = model(**inputs_dict) res_padfree = model(**batch_accelerator) logits_padded = res_padded.logits[inputs_dict["attention_mask"].bool()] logits_padfree = res_padfree.logits[0] torch.testing.assert_close(logits_padded.argmax(-1), logits_padfree.argmax(-1), rtol=0, atol=0) # acceptable numerical instability tol = torch.finfo(torch.float16).eps torch.testing.assert_close(logits_padded, logits_padfree, rtol=tol, atol=tol) loss_padded = res_padded.loss loss_padfree = res_padfree.loss torch.testing.assert_close(loss_padded, loss_padfree) @slow @require_torch @require_torch_accelerator class BambaModelIntegrationTest(unittest.TestCase): model = None tokenizer = None # This variable is used to determine which CUDA device are we using for our runners (A10 or T4) # Depending on the hardware we get different logits / generations device_properties: DeviceProperties = (None, None, None) @classmethod def setUpClass(cls): model_id = "ibm-fms/Bamba-9B" cls.model = BambaForCausalLM.from_pretrained(model_id, dtype=torch.bfloat16) cls.tokenizer = AutoTokenizer.from_pretrained(model_id) # feels a bit forced to have to do this for the generation test cls.tokenizer.pad_token_id = cls.model.config.pad_token_id cls.tokenizer.padding_side = "left" cls.device_properties = get_device_properties() def test_simple_generate(self): # fmt: off expectations = Expectations( { ("cuda", 8): "<|begin_of_text|>Hey how are you doing on this lovely evening? I hope you are all having a good time.", ("rocm", 9): "<|begin_of_text|>Hey how are you doing on this lovely evening? I hope you are doing well. I am here", ("xpu", 3): "<|begin_of_text|>Hey how are you doing on this lovely evening? I hope you are all doing well. I am", } ) # fmt: on self.model.to(torch_device) input_ids = self.tokenizer("Hey how are you doing on this lovely evening?", return_tensors="pt")[ "input_ids" ].to(torch_device) out = self.model.generate(input_ids, do_sample=False, max_new_tokens=10) output_sentence = self.tokenizer.decode(out[0, :]) expected = expectations.get_expectation() self.assertEqual(output_sentence, expected) # TODO: there are significant differences in the logits across major cuda versions, which shouldn't exist if self.device_properties[0] == "cuda" and self.device_properties[1] == 8: with torch.no_grad(): logits = self.model(input_ids=input_ids, logits_to_keep=40).logits EXPECTED_LOGITS_NO_GRAD = torch.tensor( [ 149., 142., 146., 142., 143., 144., 142., 145., 142., 146., 144., 146., 147., 147., 148., 145., 147., 145., 145., 145., 145., 144., 144., 144., 144., 145., 147., 146., 144., 144., 148., 147., 148., 147., 147., 147., 146., 146., 148., 148. 
], dtype=torch.bfloat16) # fmt: skip torch.testing.assert_close(logits[0, -1, :40].cpu(), EXPECTED_LOGITS_NO_GRAD, rtol=1e-3, atol=1) @require_deterministic_for_xpu def test_simple_batched_generate_with_padding(self): # Key 9 for MI300, Key 8 for A100/A10, and Key 7 for T4. # # Note: Key 9 is currently set for MI300, but may need potential future adjustments for H100s, # considering differences in hardware processing and potential deviations in generated text. # fmt: off EXPECTED_TEXTS = Expectations( { ("cuda", 7): [], ("cuda", 8): [ "<|begin_of_text|>Hey how are you doing on this lovely evening? I hope you are doing well. I am here", "!!!<|begin_of_text|>I am late! I need to get to work! I have to get to the", ], ("rocm", 9): [ "<|begin_of_text|>Hey how are you doing on this lovely evening? I hope you are doing well. I am here", "!!!<|begin_of_text|>I am late! I need to be at the airport in 20 minutes! I", ], ("xpu", 3): [ "<|begin_of_text|>Hey how are you doing on this lovely evening? I hope you are all doing well. I am", "!!!<|begin_of_text|>I am late! I need to get to work! I have to get to the", ], } ) # fmt: on EXPECTED_TEXT = EXPECTED_TEXTS.get_expectation() self.model.to(torch_device) inputs = self.tokenizer( ["Hey how are you doing on this lovely evening?", "I am late! I need to"], padding=True, return_tensors="pt", ).to(torch_device) out = self.model.generate(**inputs, do_sample=False, max_new_tokens=10) output_sentences = self.tokenizer.batch_decode(out) self.assertEqual(output_sentences[0], EXPECTED_TEXT[0]) self.assertEqual(output_sentences[1], EXPECTED_TEXT[1]) # TODO: there are significant differences in the logits across major cuda versions, which shouldn't exist if self.device_properties[0] == "cuda" and self.device_properties[1] == 8: with torch.no_grad(): logits = self.model(input_ids=inputs["input_ids"]).logits EXPECTED_LOGITS_NO_GRAD_0 = torch.tensor( [ 149., 142., 146., 142., 143., 144., 142., 145., 142., 146., 144., 146., 147., 147., 148., 145., 147., 145., 145., 145., 145., 144., 144., 144., 144., 145., 147., 146., 144., 144., 148., 147., 148., 147., 147., 147., 146., 146., 148., 148. ], dtype=torch.bfloat16) # fmt: skip EXPECTED_LOGITS_NO_GRAD_1 = torch.tensor( [ 182., 178., 177., 174., 176., 176., 178., 178., 177., 179., 176., 183., 180., 182., 179., 174., 178., 176., 176., 175., 175., 175., 174., 173., 174., 182., 180., 176., 177., 177., 180., 176., 178., 177., 177., 175., 176., 177., 175., 177. ], dtype=torch.bfloat16) # fmt: skip torch.testing.assert_close(logits[0, -1, :40].cpu(), EXPECTED_LOGITS_NO_GRAD_0, rtol=1e-3, atol=1) torch.testing.assert_close(logits[1, -1, :40].cpu(), EXPECTED_LOGITS_NO_GRAD_1, rtol=1e-3, atol=1)
transformers/tests/models/bamba/test_modeling_bamba.py/0
{ "file_path": "transformers/tests/models/bamba/test_modeling_bamba.py", "repo_id": "transformers", "token_count": 14855 }
571
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import unittest from transformers import BertTokenizerFast from transformers.models.bert.tokenization_bert import ( VOCAB_FILES_NAMES, BasicTokenizer, BertTokenizer, WordpieceTokenizer, _is_control, _is_punctuation, _is_whitespace, ) from transformers.testing_utils import require_tokenizers, slow from ...test_tokenization_common import TokenizerTesterMixin, filter_non_english @require_tokenizers class BertTokenizationTest(TokenizerTesterMixin, unittest.TestCase): from_pretrained_id = "google-bert/bert-base-uncased" tokenizer_class = BertTokenizer rust_tokenizer_class = BertTokenizerFast test_rust_tokenizer = True space_between_special_tokens = True from_pretrained_filter = filter_non_english @classmethod def setUpClass(cls): super().setUpClass() vocab_tokens = [ "[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]", "want", "##want", "##ed", "wa", "un", "runn", "##ing", ",", "low", "lowest", ] cls.vocab_file = os.path.join(cls.tmpdirname, VOCAB_FILES_NAMES["vocab_file"]) with open(cls.vocab_file, "w", encoding="utf-8") as vocab_writer: vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) def get_input_output_texts(self, tokenizer): input_text = "UNwant\u00e9d,running" output_text = "unwanted, running" return input_text, output_text def test_full_tokenizer(self): tokenizer = self.tokenizer_class(self.vocab_file) tokens = tokenizer.tokenize("UNwant\u00e9d,running") self.assertListEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"]) self.assertListEqual(tokenizer.convert_tokens_to_ids(tokens), [9, 6, 7, 12, 10, 11]) def test_rust_and_python_full_tokenizers(self): if not self.test_rust_tokenizer: self.skipTest(reason="test_rust_tokenizer is set to False") tokenizer = self.get_tokenizer() rust_tokenizer = self.get_rust_tokenizer() sequence = "UNwant\u00e9d,running" tokens = tokenizer.tokenize(sequence) rust_tokens = rust_tokenizer.tokenize(sequence) self.assertListEqual(tokens, rust_tokens) ids = tokenizer.encode(sequence, add_special_tokens=False) rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=False) self.assertListEqual(ids, rust_ids) rust_tokenizer = self.get_rust_tokenizer() ids = tokenizer.encode(sequence) rust_ids = rust_tokenizer.encode(sequence) self.assertListEqual(ids, rust_ids) # With lower casing tokenizer = self.get_tokenizer(do_lower_case=True) rust_tokenizer = self.get_rust_tokenizer(do_lower_case=True) sequence = "UNwant\u00e9d,running" tokens = tokenizer.tokenize(sequence) rust_tokens = rust_tokenizer.tokenize(sequence) self.assertListEqual(tokens, rust_tokens) ids = tokenizer.encode(sequence, add_special_tokens=False) rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=False) self.assertListEqual(ids, rust_ids) rust_tokenizer = self.get_rust_tokenizer() ids = tokenizer.encode(sequence) rust_ids = rust_tokenizer.encode(sequence) self.assertListEqual(ids, rust_ids) def test_chinese(self): tokenizer = BasicTokenizer() 
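# the basic tokenizer splits each CJK character into its own token while leaving Latin runs intact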
self.assertListEqual(tokenizer.tokenize("ah\u535a\u63a8zz"), ["ah", "\u535a", "\u63a8", "zz"]) def test_basic_tokenizer_lower(self): tokenizer = BasicTokenizer(do_lower_case=True) self.assertListEqual( tokenizer.tokenize(" \tHeLLo!how \n Are yoU? "), ["hello", "!", "how", "are", "you", "?"] ) self.assertListEqual(tokenizer.tokenize("H\u00e9llo"), ["hello"]) def test_basic_tokenizer_lower_strip_accents_false(self): tokenizer = BasicTokenizer(do_lower_case=True, strip_accents=False) self.assertListEqual( tokenizer.tokenize(" \tHäLLo!how \n Are yoU? "), ["hällo", "!", "how", "are", "you", "?"] ) self.assertListEqual(tokenizer.tokenize("H\u00e9llo"), ["h\u00e9llo"]) def test_basic_tokenizer_lower_strip_accents_true(self): tokenizer = BasicTokenizer(do_lower_case=True, strip_accents=True) self.assertListEqual( tokenizer.tokenize(" \tHäLLo!how \n Are yoU? "), ["hallo", "!", "how", "are", "you", "?"] ) self.assertListEqual(tokenizer.tokenize("H\u00e9llo"), ["hello"]) def test_basic_tokenizer_lower_strip_accents_default(self): tokenizer = BasicTokenizer(do_lower_case=True) self.assertListEqual( tokenizer.tokenize(" \tHäLLo!how \n Are yoU? "), ["hallo", "!", "how", "are", "you", "?"] ) self.assertListEqual(tokenizer.tokenize("H\u00e9llo"), ["hello"]) def test_basic_tokenizer_no_lower(self): tokenizer = BasicTokenizer(do_lower_case=False) self.assertListEqual( tokenizer.tokenize(" \tHeLLo!how \n Are yoU? "), ["HeLLo", "!", "how", "Are", "yoU", "?"] ) def test_basic_tokenizer_no_lower_strip_accents_false(self): tokenizer = BasicTokenizer(do_lower_case=False, strip_accents=False) self.assertListEqual( tokenizer.tokenize(" \tHäLLo!how \n Are yoU? "), ["HäLLo", "!", "how", "Are", "yoU", "?"] ) def test_basic_tokenizer_no_lower_strip_accents_true(self): tokenizer = BasicTokenizer(do_lower_case=False, strip_accents=True) self.assertListEqual( tokenizer.tokenize(" \tHäLLo!how \n Are yoU? "), ["HaLLo", "!", "how", "Are", "yoU", "?"] ) def test_basic_tokenizer_respects_never_split_tokens(self): tokenizer = BasicTokenizer(do_lower_case=False, never_split=["[UNK]"]) self.assertListEqual( tokenizer.tokenize(" \tHeLLo!how \n Are yoU? [UNK]"), ["HeLLo", "!", "how", "Are", "yoU", "?", "[UNK]"] ) def test_basic_tokenizer_splits_on_punctuation(self): tokenizer = BasicTokenizer() text = "a\n'll !!to?'d of, can't." 
expected = ["a", "'", "ll", "!", "!", "to", "?", "'", "d", "of", ",", "can", "'", "t", "."] self.assertListEqual(tokenizer.tokenize(text), expected) def test_wordpiece_tokenizer(self): vocab_tokens = ["[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", "##ing"] vocab = {} for i, token in enumerate(vocab_tokens): vocab[token] = i tokenizer = WordpieceTokenizer(vocab=vocab, unk_token="[UNK]") self.assertListEqual(tokenizer.tokenize(""), []) self.assertListEqual(tokenizer.tokenize("unwanted running"), ["un", "##want", "##ed", "runn", "##ing"]) self.assertListEqual(tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"]) def test_is_whitespace(self): self.assertTrue(_is_whitespace(" ")) self.assertTrue(_is_whitespace("\t")) self.assertTrue(_is_whitespace("\r")) self.assertTrue(_is_whitespace("\n")) self.assertTrue(_is_whitespace("\u00a0")) self.assertFalse(_is_whitespace("A")) self.assertFalse(_is_whitespace("-")) def test_is_control(self): self.assertTrue(_is_control("\u0005")) self.assertFalse(_is_control("A")) self.assertFalse(_is_control(" ")) self.assertFalse(_is_control("\t")) self.assertFalse(_is_control("\r")) def test_is_punctuation(self): self.assertTrue(_is_punctuation("-")) self.assertTrue(_is_punctuation("$")) self.assertTrue(_is_punctuation("`")) self.assertTrue(_is_punctuation(".")) self.assertFalse(_is_punctuation("A")) self.assertFalse(_is_punctuation(" ")) def test_clean_text(self): tokenizer = self.get_tokenizer() rust_tokenizer = self.get_rust_tokenizer() # Example taken from the issue https://github.com/huggingface/tokenizers/issues/340 self.assertListEqual([tokenizer.tokenize(t) for t in ["Test", "\xad", "test"]], [["[UNK]"], [], ["[UNK]"]]) self.assertListEqual( [rust_tokenizer.tokenize(t) for t in ["Test", "\xad", "test"]], [["[UNK]"], [], ["[UNK]"]] ) @slow def test_sequence_builders(self): tokenizer = self.tokenizer_class.from_pretrained("google-bert/bert-base-uncased") text = tokenizer.encode("sequence builders", add_special_tokens=False) text_2 = tokenizer.encode("multi-sequence build", add_special_tokens=False) encoded_sentence = tokenizer.build_inputs_with_special_tokens(text) encoded_pair = tokenizer.build_inputs_with_special_tokens(text, text_2) assert encoded_sentence == [101] + text + [102] assert encoded_pair == [101] + text + [102] + text_2 + [102] def test_offsets_with_special_characters(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"): tokenizer_r = self.get_rust_tokenizer(pretrained_name, **kwargs) sentence = f"A, naïve {tokenizer_r.mask_token} AllenNLP sentence." 
tokens = tokenizer_r.encode_plus( sentence, return_attention_mask=False, return_token_type_ids=False, return_offsets_mapping=True, add_special_tokens=True, ) do_lower_case = tokenizer_r.do_lower_case if hasattr(tokenizer_r, "do_lower_case") else False expected_results = ( [ ((0, 0), tokenizer_r.cls_token), ((0, 1), "A"), ((1, 2), ","), ((3, 5), "na"), ((5, 6), "##ï"), ((6, 8), "##ve"), ((9, 15), tokenizer_r.mask_token), ((16, 21), "Allen"), ((21, 23), "##NL"), ((23, 24), "##P"), ((25, 33), "sentence"), ((33, 34), "."), ((0, 0), tokenizer_r.sep_token), ] if not do_lower_case else [ ((0, 0), tokenizer_r.cls_token), ((0, 1), "a"), ((1, 2), ","), ((3, 8), "naive"), ((9, 15), tokenizer_r.mask_token), ((16, 21), "allen"), ((21, 23), "##nl"), ((23, 24), "##p"), ((25, 33), "sentence"), ((33, 34), "."), ((0, 0), tokenizer_r.sep_token), ] ) self.assertEqual( [e[1] for e in expected_results], tokenizer_r.convert_ids_to_tokens(tokens["input_ids"]) ) self.assertEqual([e[0] for e in expected_results], tokens["offset_mapping"]) def test_change_tokenize_chinese_chars(self): list_of_commun_chinese_char = ["的", "人", "有"] text_with_chinese_char = "".join(list_of_commun_chinese_char) for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"): kwargs["tokenize_chinese_chars"] = True tokenizer_p = self.get_tokenizer(pretrained_name, **kwargs) tokenizer_r = self.get_rust_tokenizer(pretrained_name, **kwargs) ids_without_spe_char_p = tokenizer_p.encode(text_with_chinese_char, add_special_tokens=False) ids_without_spe_char_r = tokenizer_r.encode(text_with_chinese_char, add_special_tokens=False) tokens_without_spe_char_r = tokenizer_r.convert_ids_to_tokens(ids_without_spe_char_r) tokens_without_spe_char_p = tokenizer_p.convert_ids_to_tokens(ids_without_spe_char_p) # it is expected that each Chinese character is not preceded by "##" self.assertListEqual(tokens_without_spe_char_p, list_of_commun_chinese_char) self.assertListEqual(tokens_without_spe_char_r, list_of_commun_chinese_char) kwargs["tokenize_chinese_chars"] = False tokenizer_r = self.get_rust_tokenizer(pretrained_name, **kwargs) tokenizer_p = self.get_tokenizer(pretrained_name, **kwargs) ids_without_spe_char_r = tokenizer_r.encode(text_with_chinese_char, add_special_tokens=False) ids_without_spe_char_p = tokenizer_p.encode(text_with_chinese_char, add_special_tokens=False) tokens_without_spe_char_r = tokenizer_r.convert_ids_to_tokens(ids_without_spe_char_r) tokens_without_spe_char_p = tokenizer_p.convert_ids_to_tokens(ids_without_spe_char_p) # it is expected that only the first Chinese character is not preceded by "##". expected_tokens = [ f"##{token}" if idx != 0 else token for idx, token in enumerate(list_of_commun_chinese_char) ] self.assertListEqual(tokens_without_spe_char_p, expected_tokens) self.assertListEqual(tokens_without_spe_char_r, expected_tokens)
transformers/tests/models/bert/test_tokenization_bert.py/0
{ "file_path": "transformers/tests/models/bert/test_tokenization_bert.py", "repo_id": "transformers", "token_count": 6906 }
572
# Copyright 2018 HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import tempfile import unittest from tempfile import TemporaryDirectory from transformers import AddedToken, CamembertTokenizer, CamembertTokenizerFast from transformers.testing_utils import get_tests_dir, require_sentencepiece, require_tokenizers, slow from transformers.utils import is_torch_available from ...test_tokenization_common import TokenizerTesterMixin SAMPLE_VOCAB = get_tests_dir("fixtures/test_sentencepiece.model") SAMPLE_BPE_VOCAB = get_tests_dir("fixtures/test_sentencepiece_bpe.model") FRAMEWORK = "pt" if is_torch_available() else "tf" @require_sentencepiece @require_tokenizers class CamembertTokenizationTest(TokenizerTesterMixin, unittest.TestCase): from_pretrained_id = "almanach/camembert-base" tokenizer_class = CamembertTokenizer rust_tokenizer_class = CamembertTokenizerFast test_rust_tokenizer = True test_sentencepiece = True @classmethod def setUpClass(cls): super().setUpClass() # We have a SentencePiece fixture for testing tokenizer = CamembertTokenizer(SAMPLE_VOCAB) tokenizer.save_pretrained(cls.tmpdirname) @unittest.skip( "Token maps are not equal because someone set the probability of ('<unk>NOTUSED', -100), so it's never encoded for fast" ) def test_special_tokens_map_equal(self): return def test_convert_token_and_id(self): """Test ``_convert_token_to_id`` and ``_convert_id_to_token``.""" token = "<pad>" token_id = 1 # 1 is the offset id, but in the spm vocab it's 3 self.assertEqual(self.get_tokenizer().convert_tokens_to_ids(token), token_id) self.assertEqual(self.get_tokenizer().convert_ids_to_tokens(token_id), token) def test_get_vocab(self): vocab_keys = list(self.get_tokenizer().get_vocab().keys()) self.assertEqual(vocab_keys[0], "<s>NOTUSED") self.assertEqual(vocab_keys[1], "<pad>") self.assertEqual(vocab_keys[-1], "<mask>") self.assertEqual(len(vocab_keys), 1_005) def test_vocab_size(self): self.assertEqual(self.get_tokenizer().vocab_size, 1_000) def test_rust_and_python_bpe_tokenizers(self): tokenizer = CamembertTokenizer(SAMPLE_BPE_VOCAB) with TemporaryDirectory() as tmpdirname: tokenizer.save_pretrained(tmpdirname) rust_tokenizer = CamembertTokenizerFast.from_pretrained(tmpdirname) sequence = "I was born in 92000, and this is falsé." ids = tokenizer.encode(sequence) rust_ids = rust_tokenizer.encode(sequence) self.assertListEqual(ids, rust_ids) ids = tokenizer.encode(sequence, add_special_tokens=False) rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=False) self.assertListEqual(ids, rust_ids) # <unk> tokens are not the same for `rust` than for `slow`. 
# Because spm gives back raw token instead of `unk` in EncodeAsPieces # tokens = tokenizer.tokenize(sequence) tokens = tokenizer.convert_ids_to_tokens(ids) rust_tokens = rust_tokenizer.tokenize(sequence) self.assertListEqual(tokens, rust_tokens) def test_rust_and_python_full_tokenizers(self): if not self.test_rust_tokenizer: self.skipTest(reason="test_rust_tokenizer is set to False") tokenizer = self.get_tokenizer() rust_tokenizer = self.get_rust_tokenizer() sequence = "I was born in 92000, and this is falsé." tokens = tokenizer.tokenize(sequence) rust_tokens = rust_tokenizer.tokenize(sequence) self.assertListEqual(tokens, rust_tokens) ids = tokenizer.encode(sequence, add_special_tokens=False) rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=False) self.assertListEqual(ids, rust_ids) rust_tokenizer = self.get_rust_tokenizer() ids = tokenizer.encode(sequence) rust_ids = rust_tokenizer.encode(sequence) self.assertListEqual(ids, rust_ids) @slow def test_tokenizer_integration(self): expected_encoding = {'input_ids': [[5, 54, 7196, 297, 30, 23, 776, 18, 11, 3215, 3705, 8252, 22, 3164, 1181, 2116, 29, 16, 813, 25, 791, 3314, 20, 3446, 38, 27575, 120, 6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [5, 468, 17, 11, 9088, 20, 1517, 8, 22804, 18818, 10, 38, 629, 607, 607, 142, 19, 7196, 867, 56, 10326, 24, 2267, 20, 416, 5072, 15612, 233, 734, 7, 2399, 27, 16, 3015, 1649, 7, 24, 20, 4338, 2399, 27, 13, 3400, 14, 13, 6189, 8, 930, 9, 6]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} # fmt: skip # camembert is a french model. So we also use french texts. 
sequences = [ "Le transformeur est un modèle d'apprentissage profond introduit en 2017, " "utilisé principalement dans le domaine du traitement automatique des langues (TAL).", "À l'instar des réseaux de neurones récurrents (RNN), les transformeurs sont conçus " "pour gérer des données séquentielles, telles que le langage naturel, pour des tâches " "telles que la traduction et la synthèse de texte.", ] self.tokenizer_integration_test_util( expected_encoding=expected_encoding, model_name="almanach/camembert-base", revision="3a0641d9a1aeb7e848a74299e7e4c4bca216b4cf", sequences=sequences, ) # Overwritten because we have to use from slow (online pretrained is wrong, the tokenizer.json has a whole) def test_added_tokens_serialization(self): self.maxDiff = None # Utility to test the added vocab def _test_added_vocab_and_eos(expected, tokenizer_class, expected_eos, temp_dir): tokenizer = tokenizer_class.from_pretrained(temp_dir) self.assertTrue(str(expected_eos) not in tokenizer.additional_special_tokens) self.assertIn(new_eos, tokenizer.added_tokens_decoder.values()) self.assertEqual(tokenizer.added_tokens_decoder[tokenizer.eos_token_id], new_eos) self.assertTrue(all(item in tokenizer.added_tokens_decoder.items() for item in expected.items())) return tokenizer new_eos = AddedToken("[NEW_EOS]", rstrip=False, lstrip=True, normalized=False, special=True) for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"): # Load a slow tokenizer from the hub, init with the new token for fast to also include it tokenizer = self.get_tokenizer(pretrained_name, eos_token=new_eos) EXPECTED_ADDED_TOKENS_DECODER = tokenizer.added_tokens_decoder with self.subTest("Hub -> Slow: Test loading a slow tokenizer from the hub)"): self.assertEqual(tokenizer._special_tokens_map["eos_token"], new_eos) self.assertIn(new_eos, list(tokenizer.added_tokens_decoder.values())) with tempfile.TemporaryDirectory() as tmp_dir_2: tokenizer.save_pretrained(tmp_dir_2) with self.subTest( "Hub -> Slow -> Slow: Test saving this slow tokenizer and reloading it in the fast class" ): _test_added_vocab_and_eos( EXPECTED_ADDED_TOKENS_DECODER, self.tokenizer_class, new_eos, tmp_dir_2 ) if self.rust_tokenizer_class is not None: with self.subTest( "Hub -> Slow -> Fast: Test saving this slow tokenizer and reloading it in the fast class" ): tokenizer_fast = _test_added_vocab_and_eos( EXPECTED_ADDED_TOKENS_DECODER, self.rust_tokenizer_class, new_eos, tmp_dir_2 ) with tempfile.TemporaryDirectory() as tmp_dir_3: tokenizer_fast.save_pretrained(tmp_dir_3) with self.subTest( "Hub -> Slow -> Fast -> Fast: Test saving this fast tokenizer and reloading it in the fast class" ): _test_added_vocab_and_eos( EXPECTED_ADDED_TOKENS_DECODER, self.rust_tokenizer_class, new_eos, tmp_dir_3 ) with self.subTest( "Hub -> Slow -> Fast -> Slow: Test saving this slow tokenizer and reloading it in the slow class" ): _test_added_vocab_and_eos( EXPECTED_ADDED_TOKENS_DECODER, self.rust_tokenizer_class, new_eos, tmp_dir_3 ) with self.subTest("Hub -> Fast: Test loading a fast tokenizer from the hub)"): if self.rust_tokenizer_class is not None: tokenizer_fast = self.get_rust_tokenizer(pretrained_name, eos_token=new_eos, from_slow=True) self.assertEqual(tokenizer_fast._special_tokens_map["eos_token"], new_eos) self.assertIn(new_eos, list(tokenizer_fast.added_tokens_decoder.values())) # We can't test the following because for BC we kept the default rstrip lstrip in slow not fast. 
Will comment once normalization is alright with self.subTest("Hub -> Fast == Hub -> Slow: make sure slow and fast tokenizer match"): self.assertTrue( all( item in tokenizer.added_tokens_decoder.items() for item in EXPECTED_ADDED_TOKENS_DECODER.items() ) ) EXPECTED_ADDED_TOKENS_DECODER = tokenizer_fast.added_tokens_decoder with tempfile.TemporaryDirectory() as tmp_dir_4: tokenizer_fast.save_pretrained(tmp_dir_4) with self.subTest("Hub -> Fast -> Fast: saving Fast1 locally and loading"): _test_added_vocab_and_eos( EXPECTED_ADDED_TOKENS_DECODER, self.rust_tokenizer_class, new_eos, tmp_dir_4 ) with self.subTest("Hub -> Fast -> Slow: saving Fast1 locally and loading"): _test_added_vocab_and_eos( EXPECTED_ADDED_TOKENS_DECODER, self.tokenizer_class, new_eos, tmp_dir_4 )
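The serialization test above reduces to one idea: override the EOS token with an `AddedToken` and check that it survives a save/reload round trip. Below is a minimal, self-contained sketch of that idea; it is not part of the original test file and the checkpoint name is only illustrative.

import tempfile

from transformers import AddedToken, AutoTokenizer

# Hedged sketch: mirrors the round-trip logic of test_added_tokens_serialization above.
new_eos = AddedToken("[NEW_EOS]", rstrip=False, lstrip=True, normalized=False, special=True)
tokenizer = AutoTokenizer.from_pretrained("almanach/camembert-base", eos_token=new_eos)

with tempfile.TemporaryDirectory() as tmp_dir:
    tokenizer.save_pretrained(tmp_dir)
    reloaded = AutoTokenizer.from_pretrained(tmp_dir)

# The overridden EOS token should survive the save/reload round trip.
assert reloaded.eos_token == "[NEW_EOS]"
assert new_eos in reloaded.added_tokens_decoder.values()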
transformers/tests/models/camembert/test_tokenization_camembert.py/0
{ "file_path": "transformers/tests/models/camembert/test_tokenization_camembert.py", "repo_id": "transformers", "token_count": 5741 }
573
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import CodeGenConfig, is_torch_available from transformers.file_utils import cached_property from transformers.testing_utils import backend_manual_seed, require_torch, slow, torch_device from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor, random_attention_mask from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import AutoTokenizer, CodeGenForCausalLM, CodeGenModel class CodeGenModelTester: def __init__( self, parent, batch_size=14, seq_length=7, is_training=True, use_token_type_ids=True, use_input_mask=True, use_labels=True, use_mc_token_ids=True, vocab_size=256, hidden_size=32, rotary_dim=4, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.0, attention_probs_dropout_prob=0.0, max_position_embeddings=512, type_vocab_size=16, type_sequence_label_size=2, initializer_range=0.02, num_labels=3, num_choices=4, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_token_type_ids = use_token_type_ids self.use_input_mask = use_input_mask self.use_labels = use_labels self.use_mc_token_ids = use_mc_token_ids self.vocab_size = vocab_size self.hidden_size = hidden_size self.rotary_dim = rotary_dim self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.num_labels = num_labels self.num_choices = num_choices self.scope = None self.bos_token_id = vocab_size - 1 self.eos_token_id = vocab_size - 1 self.pad_token_id = vocab_size - 1 def get_large_model_config(self): return CodeGenConfig.from_pretrained("Salesforce/codegen-2B-mono") def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = random_attention_mask([self.batch_size, self.seq_length]) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) mc_token_ids = None if self.use_mc_token_ids: mc_token_ids = ids_tensor([self.batch_size, self.num_choices], self.seq_length) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = 
ids_tensor([self.batch_size], self.num_choices) config = self.get_config() head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2) return ( config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, token_labels, choice_labels, ) def get_config(self): return CodeGenConfig( vocab_size=self.vocab_size, n_embd=self.hidden_size, n_layer=self.num_hidden_layers, n_head=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, n_positions=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, initializer_range=self.initializer_range, use_cache=True, bos_token_id=self.bos_token_id, eos_token_id=self.eos_token_id, pad_token_id=self.pad_token_id, rotary_dim=self.rotary_dim, ) def create_and_check_codegen_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): model = CodeGenModel(config=config) model.to(torch_device) model.eval() result = model(input_ids, token_type_ids=token_type_ids, head_mask=head_mask) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) self.parent.assertEqual(len(result.past_key_values), config.n_layer) def create_and_check_codegen_model_past(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): model = CodeGenModel(config=config) model.to(torch_device) model.eval() # first forward pass outputs = model(input_ids, token_type_ids=token_type_ids, use_cache=True) outputs_use_cache_conf = model(input_ids, token_type_ids=token_type_ids) outputs_no_past = model(input_ids, token_type_ids=token_type_ids, use_cache=False) self.parent.assertTrue(len(outputs) == len(outputs_use_cache_conf)) self.parent.assertTrue(len(outputs) == len(outputs_no_past) + 1) output, past = outputs.to_tuple() # create hypothetical next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size) next_token_types = ids_tensor([self.batch_size, 1], self.type_vocab_size) # append to next input_ids and token_type_ids next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_token_type_ids = torch.cat([token_type_ids, next_token_types], dim=-1) output_from_no_past = model(next_input_ids, token_type_ids=next_token_type_ids)["last_hidden_state"] output_from_past = model(next_tokens, token_type_ids=next_token_types, past_key_values=past)[ "last_hidden_state" ] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -1, random_slice_idx].detach() output_from_past_slice = output_from_past[:, 0, random_slice_idx].detach() # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) def create_and_check_codegen_model_attention_mask_past( self, config, input_ids, input_mask, head_mask, token_type_ids, *args ): model = CodeGenModel(config=config) model.to(torch_device) model.eval() # create attention mask attn_mask = torch.ones(input_ids.shape, dtype=torch.long, device=torch_device) half_seq_length = self.seq_length // 2 attn_mask[:, half_seq_length:] = 0 # first forward pass output, past = model(input_ids, attention_mask=attn_mask).to_tuple() # create hypothetical next token and extent to next_input_ids next_tokens = 
ids_tensor((self.batch_size, 1), config.vocab_size) # change a random masked slice from input_ids random_seq_idx_to_change = ids_tensor((1,), half_seq_length).item() + 1 random_other_next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size).squeeze(-1) input_ids[:, -random_seq_idx_to_change] = random_other_next_tokens # append to next input_ids and attn_mask next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) attn_mask = torch.cat( [attn_mask, torch.ones((attn_mask.shape[0], 1), dtype=torch.long, device=torch_device)], dim=1, ) # get two different outputs output_from_no_past = model(next_input_ids, attention_mask=attn_mask)["last_hidden_state"] output_from_past = model(next_tokens, past_key_values=past, attention_mask=attn_mask)["last_hidden_state"] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -1, random_slice_idx].detach() output_from_past_slice = output_from_past[:, 0, random_slice_idx].detach() # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) def create_and_check_codegen_model_past_large_inputs( self, config, input_ids, input_mask, head_mask, token_type_ids, *args ): model = CodeGenModel(config=config) model.to(torch_device) model.eval() # first forward pass outputs = model(input_ids, token_type_ids=token_type_ids, attention_mask=input_mask, use_cache=True) output, past = outputs.to_tuple() # create hypothetical next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 3), config.vocab_size) next_token_types = ids_tensor([self.batch_size, 3], self.type_vocab_size) next_mask = ids_tensor((self.batch_size, 3), vocab_size=2) # append to next input_ids and token_type_ids next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_token_type_ids = torch.cat([token_type_ids, next_token_types], dim=-1) next_attention_mask = torch.cat([input_mask, next_mask], dim=-1) output_from_no_past = model( next_input_ids, token_type_ids=next_token_type_ids, attention_mask=next_attention_mask )["last_hidden_state"] output_from_past = model( next_tokens, token_type_ids=next_token_types, attention_mask=next_attention_mask, past_key_values=past )["last_hidden_state"] self.parent.assertTrue(output_from_past.shape[1] == next_tokens.shape[1]) # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach() output_from_past_slice = output_from_past[:, :, random_slice_idx].detach() # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) def create_and_check_lm_head_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): model = CodeGenForCausalLM(config) model.to(torch_device) model.eval() result = model(input_ids, token_type_ids=token_type_ids, labels=input_ids) self.parent.assertEqual(result.loss.shape, ()) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_forward_and_backwards( self, config, input_ids, input_mask, head_mask, token_type_ids, *args, gradient_checkpointing=False ): model = CodeGenForCausalLM(config) if gradient_checkpointing: model.gradient_checkpointing_enable() model.to(torch_device) result = model(input_ids, token_type_ids=token_type_ids, labels=input_ids) 
self.parent.assertEqual(result.loss.shape, ()) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) result.loss.backward() def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, token_labels, choice_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "head_mask": head_mask} return config, inputs_dict @require_torch class CodeGenModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (CodeGenModel, CodeGenForCausalLM) if is_torch_available() else () pipeline_model_mapping = ( {"feature-extraction": CodeGenModel, "text-generation": CodeGenForCausalLM} if is_torch_available() else {} ) fx_compatible = False test_pruning = False test_missing_keys = False test_model_parallel = False test_head_masking = False # special case for DoubleHeads model def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels) return inputs_dict def setUp(self): self.model_tester = CodeGenModelTester(self) self.config_tester = ConfigTester(self, config_class=CodeGenConfig, n_embd=37) def test_config(self): self.config_tester.run_common_tests() def test_codegen_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_codegen_model(*config_and_inputs) def test_codegen_model_past(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_codegen_model_past(*config_and_inputs) def test_codegen_model_att_mask_past(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_codegen_model_attention_mask_past(*config_and_inputs) def test_codegen_model_past_large_inputs(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_codegen_model_past_large_inputs(*config_and_inputs) def test_codegen_lm_head_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_lm_head_model(*config_and_inputs) def test_codegen_gradient_checkpointing(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_forward_and_backwards(*config_and_inputs, gradient_checkpointing=True) @slow def test_batch_generation(self): tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono") model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono") model.to(torch_device) tokenizer.padding_side = "left" # Define PAD Token = EOS Token = 50256 tokenizer.pad_token = tokenizer.eos_token model.config.pad_token_id = model.config.eos_token_id # use different length sentences to test batching sentences = ["def hellow_world():", "def greet(name):"] inputs = tokenizer(sentences, return_tensors="pt", padding=True) input_ids = inputs["input_ids"].to(torch_device) token_type_ids = torch.cat( [ input_ids.new_full((input_ids.shape[0], input_ids.shape[1] - 1), 0), input_ids.new_full((input_ids.shape[0], 1), 500), ], dim=-1, ) outputs = model.generate( input_ids=input_ids, attention_mask=inputs["attention_mask"].to(torch_device), ) outputs_tt = model.generate( input_ids=input_ids, attention_mask=inputs["attention_mask"].to(torch_device), 
token_type_ids=token_type_ids, ) inputs_non_padded = tokenizer(sentences[0], return_tensors="pt").input_ids.to(torch_device) output_non_padded = model.generate(input_ids=inputs_non_padded) num_paddings = inputs_non_padded.shape[-1] - inputs["attention_mask"][-1].long().sum().item() inputs_padded = tokenizer(sentences[1], return_tensors="pt").input_ids.to(torch_device) output_padded = model.generate(input_ids=inputs_padded, max_length=model.config.max_length - num_paddings) batch_out_sentence = tokenizer.batch_decode(outputs, skip_special_tokens=True) batch_out_sentence_tt = tokenizer.batch_decode(outputs_tt, skip_special_tokens=True) non_padded_sentence = tokenizer.decode(output_non_padded[0], skip_special_tokens=True) padded_sentence = tokenizer.decode(output_padded[0], skip_special_tokens=True) expected_output_sentence = [ 'def hellow_world():\n print("Hello World")\n\nhellow_world()', 'def greet(name):\n print(f"Hello {name}")\n\ng', ] self.assertListEqual(expected_output_sentence, batch_out_sentence) self.assertTrue(batch_out_sentence_tt != batch_out_sentence) # token_type_ids should change output self.assertListEqual(expected_output_sentence, [non_padded_sentence, padded_sentence]) @slow def test_model_from_pretrained(self): model_name = "Salesforce/codegen-350M-nl" model = CodeGenModel.from_pretrained(model_name) self.assertIsNotNone(model) @require_torch class CodeGenModelLanguageGenerationTest(unittest.TestCase): @cached_property def cached_tokenizer(self): return AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono") @cached_property def cached_model(self): return CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono") @slow def test_lm_generate_codegen(self): tokenizer = self.cached_tokenizer for checkpointing in [True, False]: model = self.cached_model if checkpointing: model.gradient_checkpointing_enable() else: model.gradient_checkpointing_disable() model.to(torch_device) inputs = tokenizer("def hello_world():", return_tensors="pt").to(torch_device) expected_output = 'def hello_world():\n print("Hello World")\n\nhello_world()\n\n' output_ids = model.generate(**inputs, do_sample=False) output_str = tokenizer.batch_decode(output_ids)[0] self.assertEqual(output_str, expected_output) @slow def test_codegen_sample(self): tokenizer = self.cached_tokenizer model = self.cached_model model.to(torch_device) torch.manual_seed(0) backend_manual_seed(torch_device, 0) tokenized = tokenizer("def hello_world():", return_tensors="pt", return_token_type_ids=True) input_ids = tokenized.input_ids.to(torch_device) output_ids = model.generate(input_ids, do_sample=True) output_str = tokenizer.decode(output_ids[0], skip_special_tokens=True) token_type_ids = tokenized.token_type_ids.to(torch_device) output_seq = model.generate(input_ids=input_ids, do_sample=True, num_return_sequences=5) output_seq_tt = model.generate( input_ids=input_ids, token_type_ids=token_type_ids, do_sample=True, num_return_sequences=5 ) output_seq_strs = tokenizer.batch_decode(output_seq, skip_special_tokens=True) output_seq_tt_strs = tokenizer.batch_decode(output_seq_tt, skip_special_tokens=True) if torch_device == "cuda": EXPECTED_OUTPUT_STR = 'def hello_world():\n print("Hello World")\n return True\n\nresult =' else: EXPECTED_OUTPUT_STR = "def hello_world():\r\n print('Hello, World.')\r\n\r\n\r" self.assertEqual(output_str, EXPECTED_OUTPUT_STR) self.assertTrue( all(output_seq_strs[idx] != output_seq_tt_strs[idx] for idx in range(len(output_seq_tt_strs))) ) # token_type_ids should change output
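For context, the batched-generation check above hinges on left padding plus an explicit attention mask, so that a padded prompt produces the same continuation as the same prompt run unbatched. The following is an illustrative sketch of that setup, not part of the original test file, and it assumes the `Salesforce/codegen-350M-mono` checkpoint is reachable.

import torch

from transformers import AutoTokenizer, CodeGenForCausalLM

# Hedged sketch of left-padded batched generation, mirroring test_batch_generation above.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
tokenizer.padding_side = "left"  # pad on the left so generation continues from the real prompt tokens
tokenizer.pad_token = tokenizer.eos_token

model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
model.config.pad_token_id = model.config.eos_token_id

prompts = ["def hello_world():", "def add(a, b):"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model.generate(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        max_new_tokens=16,
        do_sample=False,
    )
print(tokenizer.batch_decode(out, skip_special_tokens=True))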
transformers/tests/models/codegen/test_modeling_codegen.py/0
{ "file_path": "transformers/tests/models/codegen/test_modeling_codegen.py", "repo_id": "transformers", "token_count": 9206 }
574
# Copyright 2022 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Data2VecAudio model.""" import math import unittest import numpy as np from datasets import load_dataset from tests.test_modeling_common import floats_tensor, ids_tensor, random_attention_mask from transformers import Data2VecAudioConfig, is_torch_available from transformers.testing_utils import require_torch, require_torchcodec, slow, torch_device from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, _config_zero_init from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( Data2VecAudioForAudioFrameClassification, Data2VecAudioForCTC, Data2VecAudioForSequenceClassification, Data2VecAudioForXVector, Data2VecAudioModel, Wav2Vec2Processor, ) from transformers.models.data2vec.modeling_data2vec_audio import _compute_mask_indices class Data2VecAudioModelTester: def __init__( self, parent, batch_size=13, seq_length=1024, # speech is longer is_training=False, hidden_size=16, feat_extract_dropout=0.0, feat_extract_activation="gelu", conv_dim=(32, 32, 32), conv_stride=(4, 4, 4), conv_kernel=(8, 8, 8), conv_bias=False, num_conv_pos_embeddings=16, num_conv_pos_embedding_groups=2, num_hidden_layers=2, num_attention_heads=2, hidden_dropout_prob=0.1, intermediate_size=20, layer_norm_eps=1e-5, hidden_act="gelu", initializer_range=0.02, mask_time_prob=0.5, mask_time_length=2, vocab_size=32, num_adapter_layers=1, adapter_stride=2, tdnn_dim=(32, 32), tdnn_kernel=(5, 3), tdnn_dilation=(1, 2), xvector_output_dim=32, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.hidden_size = hidden_size self.feat_extract_dropout = feat_extract_dropout self.feat_extract_activation = feat_extract_activation self.conv_dim = conv_dim self.conv_stride = conv_stride self.conv_kernel = conv_kernel self.conv_bias = conv_bias self.num_conv_pos_embeddings = num_conv_pos_embeddings self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.hidden_dropout_prob = hidden_dropout_prob self.intermediate_size = intermediate_size self.layer_norm_eps = layer_norm_eps self.hidden_act = hidden_act self.initializer_range = initializer_range self.vocab_size = vocab_size self.num_adapter_layers = num_adapter_layers self.adapter_stride = adapter_stride self.mask_time_prob = mask_time_prob self.mask_time_length = mask_time_length self.scope = scope self.tdnn_dim = tdnn_dim self.tdnn_kernel = tdnn_kernel self.tdnn_dilation = tdnn_dilation self.xvector_output_dim = xvector_output_dim output_seq_length = self.seq_length for kernel, stride in zip(self.conv_kernel, self.conv_stride): output_seq_length = (output_seq_length - (kernel - 1)) / stride self.output_seq_length = int(math.ceil(output_seq_length)) self.encoder_seq_length = 
self.output_seq_length self.adapter_output_seq_length = (self.output_seq_length - 1) // adapter_stride + 1 def prepare_config_and_inputs(self): input_values = floats_tensor([self.batch_size, self.seq_length], scale=1.0) attention_mask = random_attention_mask([self.batch_size, self.seq_length]) config = self.get_config() return config, input_values, attention_mask def get_config(self): return Data2VecAudioConfig( hidden_size=self.hidden_size, feat_extract_dropout=self.feat_extract_dropout, feat_extract_activation=self.feat_extract_activation, conv_dim=self.conv_dim, conv_stride=self.conv_stride, conv_kernel=self.conv_kernel, conv_bias=self.conv_bias, mask_time_prob=self.mask_time_prob, mask_time_length=self.mask_time_length, num_conv_pos_embeddings=self.num_conv_pos_embeddings, num_conv_pos_embedding_groups=self.num_conv_pos_embedding_groups, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, hidden_dropout_prob=self.hidden_dropout_prob, intermediate_size=self.intermediate_size, layer_norm_eps=self.layer_norm_eps, hidden_act=self.hidden_act, initializer_range=self.initializer_range, vocab_size=self.vocab_size, num_adapter_layers=self.num_adapter_layers, adapter_stride=self.adapter_stride, tdnn_dim=self.tdnn_dim, tdnn_kernel=self.tdnn_kernel, tdnn_dilation=self.tdnn_dilation, xvector_output_dim=self.xvector_output_dim, ) def create_and_check_model(self, config, input_values, attention_mask): model = Data2VecAudioModel(config=config) model.to(torch_device) model.eval() result = model(input_values, attention_mask=attention_mask) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size, self.output_seq_length, self.hidden_size) ) def create_and_check_model_with_adapter(self, config, input_values, attention_mask): config.add_adapter = True model = Data2VecAudioModel(config=config) model.to(torch_device) model.eval() result = model(input_values, attention_mask=attention_mask) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size, self.adapter_output_seq_length, self.hidden_size) ) def create_and_check_model_with_adapter_proj_dim(self, config, input_values, attention_mask): config.add_adapter = True config.output_hidden_size = 8 model = Data2VecAudioModel(config=config) model.to(torch_device) model.eval() result = model(input_values, attention_mask=attention_mask) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size, self.adapter_output_seq_length, config.output_hidden_size), ) def check_ctc_loss(self, config, input_values, *args): model = Data2VecAudioForCTC(config=config) model.to(torch_device) # make sure that dropout is disabled model.eval() input_values = input_values[:3] attention_mask = torch.ones(input_values.shape, device=torch_device, dtype=torch.long) input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] max_length_labels = model._get_feat_extract_output_lengths(torch.tensor(input_lengths)) labels = ids_tensor((input_values.shape[0], min(max_length_labels) - 1), model.config.vocab_size) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 attention_mask[i, input_lengths[i] :] = 0 model.config.ctc_loss_reduction = "sum" sum_loss = model(input_values, attention_mask=attention_mask, labels=labels).loss.item() model.config.ctc_loss_reduction = "mean" mean_loss = model(input_values, attention_mask=attention_mask, labels=labels).loss.item() self.parent.assertTrue(isinstance(sum_loss, float)) self.parent.assertTrue(isinstance(mean_loss, float)) def 
check_seq_classifier_loss(self, config, input_values, *args): model = Data2VecAudioForSequenceClassification(config=config) model.to(torch_device) # make sure that dropout is disabled model.eval() input_values = input_values[:3] attention_mask = torch.ones(input_values.shape, device=torch_device, dtype=torch.long) input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] labels = ids_tensor((input_values.shape[0], 1), len(model.config.id2label)) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 attention_mask[i, input_lengths[i] :] = 0 masked_loss = model(input_values, attention_mask=attention_mask, labels=labels).loss.item() unmasked_loss = model(input_values, labels=labels).loss.item() self.parent.assertTrue(isinstance(masked_loss, float)) self.parent.assertTrue(isinstance(unmasked_loss, float)) self.parent.assertTrue(masked_loss != unmasked_loss) def check_ctc_training(self, config, input_values, *args): config.ctc_zero_infinity = True model = Data2VecAudioForCTC(config=config) model.to(torch_device) model.train() # freeze feature encoder model.freeze_feature_encoder() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] max_length_labels = model._get_feat_extract_output_lengths(torch.tensor(input_lengths)) labels = ids_tensor((input_values.shape[0], max(max_length_labels) - 2), model.config.vocab_size) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 if max_length_labels[i] < labels.shape[-1]: # it's important that we make sure that target lengths are at least # one shorter than logit lengths to prevent -inf labels[i, max_length_labels[i] - 1 :] = -100 loss = model(input_values, labels=labels).loss self.parent.assertFalse(torch.isinf(loss).item()) loss.backward() def check_seq_classifier_training(self, config, input_values, *args): config.ctc_zero_infinity = True model = Data2VecAudioForSequenceClassification(config=config) model.to(torch_device) model.train() # freeze everything but the classification head model.freeze_base_model() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] labels = ids_tensor((input_values.shape[0], 1), len(model.config.id2label)) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 loss = model(input_values, labels=labels).loss self.parent.assertFalse(torch.isinf(loss).item()) loss.backward() def check_xvector_training(self, config, input_values, *args): config.ctc_zero_infinity = True model = Data2VecAudioForXVector(config=config) model.to(torch_device) model.train() # freeze everything but the classification head model.freeze_base_model() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] labels = ids_tensor((input_values.shape[0], 1), len(model.config.id2label)) # pad input for i in range(len(input_lengths)): input_values[i, input_lengths[i] :] = 0.0 loss = model(input_values, labels=labels).loss self.parent.assertFalse(torch.isinf(loss).item()) loss.backward() def check_labels_out_of_vocab(self, config, input_values, *args): model = Data2VecAudioForCTC(config) model.to(torch_device) model.train() input_values = input_values[:3] input_lengths = [input_values.shape[-1] // i for i in [4, 2, 1]] max_length_labels = model._get_feat_extract_output_lengths(torch.tensor(input_lengths)) labels = ids_tensor((input_values.shape[0], max(max_length_labels) - 2), model.config.vocab_size + 100) with 
self.parent.assertRaises(ValueError): model(input_values, labels=labels) def prepare_config_and_inputs_for_common(self): config, input_values, attention_mask = self.prepare_config_and_inputs() inputs_dict = {"input_values": input_values, "attention_mask": attention_mask} return config, inputs_dict @require_torch class Data2VecAudioModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( ( Data2VecAudioForCTC, Data2VecAudioModel, Data2VecAudioForSequenceClassification, Data2VecAudioForAudioFrameClassification, Data2VecAudioForXVector, ) if is_torch_available() else () ) pipeline_model_mapping = ( { "audio-classification": Data2VecAudioForSequenceClassification, "automatic-speech-recognition": Data2VecAudioForCTC, "feature-extraction": Data2VecAudioModel, } if is_torch_available() else {} ) test_pruning = False test_headmasking = False def setUp(self): self.model_tester = Data2VecAudioModelTester(self) self.config_tester = ConfigTester(self, config_class=Data2VecAudioConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_model_with_adapter(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_with_adapter(*config_and_inputs) def test_model_with_adapter_proj_dim(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_with_adapter_proj_dim(*config_and_inputs) def test_ctc_loss_inference(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_ctc_loss(*config_and_inputs) def test_seq_classifier_loss_inference(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_seq_classifier_loss(*config_and_inputs) def test_ctc_train(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_ctc_training(*config_and_inputs) def test_seq_classifier_train(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_seq_classifier_training(*config_and_inputs) def test_xvector_train(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_xvector_training(*config_and_inputs) def test_labels_out_of_vocab(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.check_labels_out_of_vocab(*config_and_inputs) @unittest.skip(reason="Data2VecAudio has no inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="`input_ids` is renamed to `input_values`") def test_forward_signature(self): pass @unittest.skip(reason="Data2VecAudio has no tokens embeddings") def test_resize_tokens_embeddings(self): pass @unittest.skip(reason="Data2VecAudio has no inputs_embeds") def test_model_get_set_embeddings(self): pass def test_retain_grad_hidden_states_attentions(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = True # force eager attention to support output attentions config._attn_implementation = "eager" # no need to test all models as different heads yield the same functionality model_class = self.all_model_classes[0] model = model_class(config) model.to(torch_device) # set layer drop to 0 model.config.layerdrop = 0.0 input_values = 
inputs_dict["input_values"] input_lengths = torch.tensor( [input_values.shape[1] for _ in range(input_values.shape[0])], dtype=torch.long, device=torch_device ) output_lengths = model._get_feat_extract_output_lengths(input_lengths) labels = ids_tensor((input_values.shape[0], output_lengths[0] - 2), self.model_tester.vocab_size) inputs_dict["attention_mask"] = torch.ones_like(inputs_dict["attention_mask"]) inputs_dict["labels"] = labels outputs = model(**inputs_dict) output = outputs[0] # Encoder-/Decoder-only models hidden_states = outputs.hidden_states[0] attentions = outputs.attentions[0] hidden_states.retain_grad() attentions.retain_grad() output.flatten()[0].backward(retain_graph=True) self.assertIsNotNone(hidden_states.grad) self.assertIsNotNone(attentions.grad) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): uniform_init_parms = [ "conv.weight", "masked_spec_embed", "codevectors", "quantizer.weight_proj.weight", "project_hid.weight", "project_hid.bias", "project_q.weight", "project_q.bias", "feature_projection.projection.weight", "feature_projection.projection.bias", "objective.weight", ] if param.requires_grad: if any(x in name for x in uniform_init_parms): self.assertTrue( -1.0 <= ((param.data.mean() * 1e9).round() / 1e9).item() <= 1.0, msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) else: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) # overwrite from test_modeling_common def _mock_init_weights(self, module): if hasattr(module, "weight") and module.weight is not None: module.weight.data.fill_(3) if hasattr(module, "weight_g") and module.weight_g is not None: module.weight_g.data.fill_(3) if hasattr(module, "weight_v") and module.weight_v is not None: module.weight_v.data.fill_(3) if hasattr(module, "bias") and module.bias is not None: module.bias.data.fill_(3) if hasattr(module, "codevectors") and module.codevectors is not None: module.codevectors.data.fill_(3) if hasattr(module, "masked_spec_embed") and module.masked_spec_embed is not None: module.masked_spec_embed.data.fill_(3) def test_mask_feature_prob_ctc(self): model = Data2VecAudioForCTC.from_pretrained( "hf-internal-testing/tiny-random-data2vec-seq-class", mask_feature_prob=0.2, mask_feature_length=2 ) model.to(torch_device).train() processor = Wav2Vec2Processor.from_pretrained( "hf-internal-testing/tiny-random-wav2vec2", return_attention_mask=True ) batch_duration_in_seconds = [1, 3, 2, 6] input_features = [np.random.random(16_000 * s) for s in batch_duration_in_seconds] batch = processor( input_features, padding=True, sampling_rate=processor.feature_extractor.sampling_rate, return_tensors="pt" ) logits = model( input_values=batch["input_values"].to(torch_device), attention_mask=batch["attention_mask"].to(torch_device), ).logits self.assertEqual(logits.shape, (4, 1498, 32)) def test_mask_time_prob_ctc(self): model = Data2VecAudioForCTC.from_pretrained( "facebook/data2vec-audio-base-960h", mask_time_prob=0.2, mask_time_length=2 ) model.to(torch_device).train() processor = Wav2Vec2Processor.from_pretrained( "hf-internal-testing/tiny-random-wav2vec2", return_attention_mask=True ) batch_duration_in_seconds = [1, 3, 2, 6] input_features = 
[np.random.random(16_000 * s) for s in batch_duration_in_seconds] batch = processor( input_features, padding=True, sampling_rate=processor.feature_extractor.sampling_rate, return_tensors="pt" ) logits = model( input_values=batch["input_values"].to(torch_device), attention_mask=batch["attention_mask"].to(torch_device), ).logits self.assertEqual(logits.shape, (4, 299, 32)) @unittest.skip(reason="Feed forward chunking is not implemented") def test_feed_forward_chunking(self): pass @slow def test_model_from_pretrained(self): model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base") self.assertIsNotNone(model) @require_torch class Data2VecAudioUtilsTest(unittest.TestCase): def test_compute_mask_indices(self): batch_size = 4 sequence_length = 60 mask_prob = 0.5 mask_length = 1 mask = _compute_mask_indices((batch_size, sequence_length), mask_prob, mask_length) mask = torch.from_numpy(mask).to(torch_device) self.assertListEqual(mask.sum(axis=-1).tolist(), [mask_prob * sequence_length for _ in range(batch_size)]) def test_compute_mask_indices_low_prob(self): # with these settings num_masked_spans=0.5, which means probabilistic rounding # ensures that in 5 out of 10 method calls, num_masked_spans=0, and in # the other 5 out of 10, cases num_masked_spans=1 n_trials = 100 batch_size = 4 sequence_length = 100 mask_prob = 0.05 mask_length = 10 count_dimensions_masked = 0 count_dimensions_not_masked = 0 for _ in range(n_trials): mask = _compute_mask_indices((batch_size, sequence_length), mask_prob, mask_length) mask = torch.from_numpy(mask).to(torch_device) num_masks = torch.sum(mask).item() if num_masks > 0: count_dimensions_masked += 1 else: count_dimensions_not_masked += 1 # as we test for at least 10 masked dimension and at least # 10 non-masked dimension, this test could fail with probability: # P(100 coin flips, at most 9 heads) = 1.66e-18 self.assertGreater(count_dimensions_masked, int(n_trials * 0.1)) self.assertGreater(count_dimensions_not_masked, int(n_trials * 0.1)) def test_compute_mask_indices_overlap(self): batch_size = 4 sequence_length = 80 mask_prob = 0.5 mask_length = 4 mask = _compute_mask_indices((batch_size, sequence_length), mask_prob, mask_length) mask = torch.from_numpy(mask).to(torch_device) # because of overlap mask don't have to add up exactly to `mask_prob * sequence_length`, but have to be smaller or equal for batch_sum in mask.sum(axis=-1): self.assertTrue(int(batch_sum) <= mask_prob * sequence_length) def test_compute_mask_indices_attn_mask_overlap(self): batch_size = 4 sequence_length = 80 mask_prob = 0.5 mask_length = 4 attention_mask = torch.ones((batch_size, sequence_length), dtype=torch.long, device=torch_device) attention_mask[:2, sequence_length // 2 :] = 0 mask = _compute_mask_indices( (batch_size, sequence_length), mask_prob, mask_length, attention_mask=attention_mask ) mask = torch.from_numpy(mask).to(torch_device) for batch_sum in mask.sum(axis=-1): self.assertTrue(int(batch_sum) <= mask_prob * sequence_length) self.assertTrue(mask[:2, sequence_length // 2 :].sum() == 0) def test_compute_mask_indices_short_audio(self): batch_size = 4 sequence_length = 100 mask_prob = 0.05 mask_length = 10 attention_mask = torch.ones((batch_size, sequence_length), dtype=torch.long, device=torch_device) # force one example to be heavily padded attention_mask[0, 5:] = 0 mask = _compute_mask_indices( (batch_size, sequence_length), mask_prob, mask_length, attention_mask=attention_mask, min_masks=2 ) # make sure that non-padded examples cannot be padded 
self.assertFalse(mask[0][attention_mask[0].to(torch.bool).cpu()].any()) @require_torch @require_torchcodec @slow class Data2VecAudioModelIntegrationTest(unittest.TestCase): def _load_datasamples(self, num_samples): ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # automatic decoding with librispeech speech_samples = ds.sort("id").filter( lambda x: x["id"] in [f"1272-141231-000{i}" for i in range(num_samples)] )[:num_samples]["audio"] return [x["array"] for x in speech_samples] def _load_superb(self, task, num_samples): ds = load_dataset("anton-l/superb_dummy", task, split="test") return ds[:num_samples] def test_inference_ctc_normal(self): model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h") model.to(torch_device) processor = Wav2Vec2Processor.from_pretrained("hf-internal-testing/tiny-random-wav2vec2", do_lower_case=True) input_speech = self._load_datasamples(1) input_values = processor(input_speech, return_tensors="pt").input_values.to(torch_device) with torch.no_grad(): logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_trans = processor.batch_decode(predicted_ids) EXPECTED_TRANSCRIPTIONS = ["a man said to the universe sir i exist"] self.assertListEqual(predicted_trans, EXPECTED_TRANSCRIPTIONS) def test_inference_ctc_batched(self): model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h").to(torch_device) processor = Wav2Vec2Processor.from_pretrained("hf-internal-testing/tiny-random-wav2vec2", do_lower_case=True) input_speech = self._load_datasamples(4) inputs = processor(input_speech, return_tensors="pt", padding=True) input_values = inputs.input_values.to(torch_device) with torch.no_grad(): logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_trans = processor.batch_decode(predicted_ids) EXPECTED_TRANSCRIPTIONS = [ "a man said to the universe sir i exist", "sweat covered brion's body trickling into the tight loin cloth that was the only garment he wore", "the cut on his chest still dripping blood the ache of his overstrained eyes even the soaring arena around" " him with thousands of spectators were trivialities not worth thinking about", "his instant of panic was followed by a small sharp blow high on his chest", ] self.assertListEqual(predicted_trans, EXPECTED_TRANSCRIPTIONS)
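The `_compute_mask_indices` assertions earlier in this file rest on simple counting: roughly `mask_prob * sequence_length / mask_length` spans are drawn, each covering `mask_length` frames, so the per-row total is exact when spans cannot overlap and only an upper bound when they can. A small worked-arithmetic sketch of that counting (not part of the original test file):

# Hedged sketch of the counting behind the mask-index assertions; values match the tests above.
sequence_length, mask_prob, mask_length = 60, 0.5, 1
num_spans = int(mask_prob * sequence_length / mask_length)        # 30 spans of length 1
assert num_spans * mask_length == mask_prob * sequence_length     # exactly 30 masked frames per row

sequence_length, mask_prob, mask_length = 80, 0.5, 4
num_spans = int(mask_prob * sequence_length / mask_length)        # 10 spans of length 4
assert num_spans * mask_length <= mask_prob * sequence_length     # overlapping spans can only lower the total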
transformers/tests/models/data2vec/test_modeling_data2vec_audio.py/0
{ "file_path": "transformers/tests/models/data2vec/test_modeling_data2vec_audio.py", "repo_id": "transformers", "token_count": 12708 }
575
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch DeepseekV3 model.""" import unittest import pytest from packaging import version from parameterized import parameterized from transformers import AutoTokenizer, DeepseekV3Config, is_torch_available, set_seed from transformers.testing_utils import ( cleanup, require_read_token, require_torch, require_torch_accelerator, require_torch_gpu, require_torch_large_accelerator, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( DeepseekV3ForCausalLM, DeepseekV3ForSequenceClassification, DeepseekV3Model, ) from transformers.models.deepseek_v3.modeling_deepseek_v3 import ( DeepseekV3RotaryEmbedding, ) class DeepseekV3ModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=False, use_labels=True, vocab_size=99, hidden_size=32, intermediate_size=37, moe_intermediate_size=12, num_hidden_layers=5, num_attention_heads=4, num_key_value_heads=4, n_shared_experts=1, n_routed_experts=8, routed_scaling_factor=2.5, kv_lora_rank=16, q_lora_rank=32, qk_rope_head_dim=16, v_head_dim=32, qk_nope_head_dim=32, n_group=2, topk_group=1, num_experts_per_tok=8, first_k_dense_replace=2, norm_topk_prob=True, aux_loss_alpha=0.001, hidden_act="silu", max_position_embeddings=512, initializer_range=0.02, attention_probs_dropout_prob=0.1, type_vocab_size=16, type_sequence_label_size=2, num_labels=3, num_choices=4, pad_token_id=0, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_token_type_ids = use_token_type_ids self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.intermediate_size = intermediate_size self.moe_intermediate_size = moe_intermediate_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.num_key_value_heads = num_key_value_heads self.n_shared_experts = n_shared_experts self.n_routed_experts = n_routed_experts self.routed_scaling_factor = routed_scaling_factor self.kv_lora_rank = kv_lora_rank self.q_lora_rank = q_lora_rank self.qk_rope_head_dim = qk_rope_head_dim self.v_head_dim = v_head_dim self.qk_nope_head_dim = qk_nope_head_dim self.n_group = n_group self.topk_group = topk_group self.num_experts_per_tok = num_experts_per_tok self.first_k_dense_replace = first_k_dense_replace self.norm_topk_prob = norm_topk_prob self.aux_loss_alpha = aux_loss_alpha self.hidden_act = hidden_act self.max_position_embeddings = max_position_embeddings self.initializer_range = initializer_range self.attention_probs_dropout_prob = attention_probs_dropout_prob 
self.type_vocab_size = type_vocab_size self.type_sequence_label_size = type_sequence_label_size self.num_labels = num_labels self.num_choices = num_choices self.pad_token_id = pad_token_id self.scope = scope def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = torch.tril(torch.ones_like(input_ids).to(torch_device)) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = self.get_config() return config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels def get_config(self): return DeepseekV3Config( vocab_size=self.vocab_size, hidden_size=self.hidden_size, intermediate_size=self.intermediate_size, moe_intermediate_size=self.moe_intermediate_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, num_key_value_heads=self.num_key_value_heads, n_shared_experts=self.n_shared_experts, n_routed_experts=self.n_routed_experts, routed_scaling_factor=self.routed_scaling_factor, kv_lora_rank=self.kv_lora_rank, q_lora_rank=self.q_lora_rank, qk_rope_head_dim=self.qk_rope_head_dim, v_head_dim=self.v_head_dim, qk_nope_head_dim=self.qk_nope_head_dim, n_group=self.n_group, topk_group=self.topk_group, num_experts_per_tok=self.num_experts_per_tok, first_k_dense_replace=self.first_k_dense_replace, norm_topk_prob=self.norm_topk_prob, aux_loss_alpha=self.aux_loss_alpha, hidden_act=self.hidden_act, max_position_embeddings=self.max_position_embeddings, initializer_range=self.initializer_range, use_cache=True, pad_token_id=self.pad_token_id, attention_dropout=self.attention_probs_dropout_prob, ) def create_and_check_model( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = DeepseekV3Model(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": input_mask} return config, inputs_dict @require_torch class DeepseekV3ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( ( DeepseekV3Model, DeepseekV3ForCausalLM, DeepseekV3ForSequenceClassification, ) if is_torch_available() else () ) all_generative_model_classes = (DeepseekV3ForCausalLM,) if is_torch_available() else () pipeline_model_mapping = ( { "feature-extraction": DeepseekV3Model, "text-classification": DeepseekV3ForSequenceClassification, "text-generation": DeepseekV3ForCausalLM, "zero-shot": DeepseekV3ForSequenceClassification, } if is_torch_available() else {} ) test_headmasking = False test_pruning = False fx_compatible = False # Need to use `0.8` instead of `0.9` for `test_cpu_offload` # This is because we are hitting edge 
cases with the causal_mask buffer model_split_percents = [0.5, 0.7, 0.8] # used in `test_torch_compile_for_training` _torch_compile_train_cls = DeepseekV3ForCausalLM if is_torch_available() else None def setUp(self): self.model_tester = DeepseekV3ModelTester(self) self.config_tester = ConfigTester(self, config_class=DeepseekV3Config, hidden_size=37) @unittest.skip("Failing because of unique cache (HybridCache)") def test_model_outputs_equivalence(self, **kwargs): pass @parameterized.expand([("random",), ("same",)]) @unittest.skip("DeepseekV3 has HybridCache which is not compatible with assisted decoding") def test_assisted_decoding_matches_greedy_search(self, assistant_type): pass @unittest.skip("DeepseekV3 has HybridCache which is not compatible with assisted decoding") def test_prompt_lookup_decoding_matches_greedy_search(self, assistant_type): pass @unittest.skip("DeepseekV3 has HybridCache which is not compatible with assisted decoding") def test_assisted_decoding_sample(self): pass @unittest.skip("DeepseekV3 has HybridCache which is not compatible with dola decoding") def test_dola_decoding_sample(self): pass @unittest.skip("DeepseekV3 has HybridCache and doesn't support continue from past kv") def test_generate_continue_from_past_key_values(self): pass @unittest.skip("DeepseekV3 has HybridCache and doesn't support low_memory generation") def test_beam_search_low_memory(self): pass @unittest.skip("DeepseekV3 has HybridCache and doesn't support contrastive generation") def test_contrastive_generate(self): pass @unittest.skip("DeepseekV3 has HybridCache and doesn't support contrastive generation") def test_contrastive_generate_dict_outputs_use_cache(self): pass @unittest.skip("DeepseekV3 has HybridCache and doesn't support contrastive generation") def test_contrastive_generate_low_memory(self): pass @unittest.skip( "DeepseekV3 has HybridCache and doesn't support StaticCache. Though it could, it shouldn't support." 
) def test_generate_continue_from_inputs_embeds(self): pass @unittest.skip("Deepseek-V3 uses MLA so it is not compatible with the standard cache format") def test_beam_search_generate_dict_outputs_use_cache(self): pass @unittest.skip("Deepseek-V3 uses MLA so it is not compatible with the standard cache format") def test_greedy_generate_dict_outputs_use_cache(self): pass @unittest.skip(reason="SDPA can't dispatch on flash due to unsupported head dims") def test_sdpa_can_dispatch_on_flash(self): pass def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_model_various_embeddings(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() for type in ["absolute", "relative_key", "relative_key_query"]: config_and_inputs[0].position_embedding_type = type self.model_tester.create_and_check_model(*config_and_inputs) @parameterized.expand([("yarn",)]) def test_model_rope_scaling_from_config(self, scaling_type): config, _ = self.model_tester.prepare_config_and_inputs_for_common() short_input = ids_tensor([1, 10], config.vocab_size) long_input = ids_tensor([1, int(config.max_position_embeddings * 1.5)], config.vocab_size) set_seed(42) # Fixed seed at init time so the two models get the same random weights original_model = DeepseekV3Model(config) original_model.to(torch_device) original_model.eval() original_short_output = original_model(short_input).last_hidden_state original_long_output = original_model(long_input).last_hidden_state set_seed(42) # Fixed seed at init time so the two models get the same random weights config.rope_scaling = {"type": scaling_type, "factor": 10.0} scaled_model = DeepseekV3Model(config) scaled_model.to(torch_device) scaled_model.eval() scaled_short_output = scaled_model(short_input).last_hidden_state scaled_long_output = scaled_model(long_input).last_hidden_state # Dynamic scaling does not change the RoPE embeddings until it receives an input longer than the original # maximum sequence length, so the outputs for the short input should match. 
if scaling_type == "dynamic": torch.testing.assert_close(original_short_output, scaled_short_output, rtol=1e-5, atol=1e-5) else: self.assertFalse(torch.allclose(original_short_output, scaled_short_output, atol=1e-5)) # The output should be different for long inputs self.assertFalse(torch.allclose(original_long_output, scaled_long_output, atol=1e-5)) def test_model_rope_scaling(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() scaling_factor = 10 short_input_length = 10 long_input_length = int(config.max_position_embeddings * 1.5) # Inputs x = torch.randn(1, dtype=torch.float32, device=torch_device) # used exlusively to get the dtype and the device position_ids_short = torch.arange(short_input_length, dtype=torch.long, device=torch_device) position_ids_short = position_ids_short.unsqueeze(0) position_ids_long = torch.arange(long_input_length, dtype=torch.long, device=torch_device) position_ids_long = position_ids_long.unsqueeze(0) # Sanity check original RoPE original_rope = DeepseekV3RotaryEmbedding(config=config).to(torch_device) original_cos_short, original_sin_short = original_rope(x, position_ids_short) original_cos_long, original_sin_long = original_rope(x, position_ids_long) torch.testing.assert_close(original_cos_short, original_cos_long[:, :short_input_length, :]) torch.testing.assert_close(original_sin_short, original_sin_long[:, :short_input_length, :]) # Sanity check linear RoPE scaling # New position "x" should match original position with index "x/scaling_factor" config.rope_scaling = {"type": "linear", "factor": scaling_factor} linear_scaling_rope = DeepseekV3RotaryEmbedding(config=config).to(torch_device) linear_cos_short, linear_sin_short = linear_scaling_rope(x, position_ids_short) linear_cos_long, linear_sin_long = linear_scaling_rope(x, position_ids_long) torch.testing.assert_close(linear_cos_short, linear_cos_long[:, :short_input_length, :]) torch.testing.assert_close(linear_sin_short, linear_sin_long[:, :short_input_length, :]) for new_position in range(0, long_input_length, scaling_factor): original_position = int(new_position // scaling_factor) torch.testing.assert_close(linear_cos_long[:, new_position, :], original_cos_long[:, original_position, :]) torch.testing.assert_close(linear_sin_long[:, new_position, :], original_sin_long[:, original_position, :]) # Sanity check Dynamic NTK RoPE scaling # Scaling should only be observed after a long input is fed. 
We can observe that the frequencies increase # with scaling_factor (or that `inv_freq` decreases) config.rope_scaling = {"type": "dynamic", "factor": scaling_factor} ntk_scaling_rope = DeepseekV3RotaryEmbedding(config=config).to(torch_device) ntk_cos_short, ntk_sin_short = ntk_scaling_rope(x, position_ids_short) ntk_cos_long, ntk_sin_long = ntk_scaling_rope(x, position_ids_long) torch.testing.assert_close(ntk_cos_short, original_cos_short) torch.testing.assert_close(ntk_sin_short, original_sin_short) with self.assertRaises(AssertionError): torch.testing.assert_close(ntk_cos_long, original_cos_long) with self.assertRaises(AssertionError): torch.testing.assert_close(ntk_sin_long, original_sin_long) self.assertTrue((ntk_scaling_rope.inv_freq <= original_rope.inv_freq).all()) # Sanity check Yarn RoPE scaling # Scaling should be over the entire input config.rope_scaling = {"type": "yarn", "factor": scaling_factor} yarn_scaling_rope = DeepseekV3RotaryEmbedding(config=config).to(torch_device) yarn_cos_short, yarn_sin_short = yarn_scaling_rope(x, position_ids_short) yarn_cos_long, yarn_sin_long = yarn_scaling_rope(x, position_ids_long) torch.testing.assert_close(yarn_cos_short, yarn_cos_long[:, :short_input_length, :]) torch.testing.assert_close(yarn_sin_short, yarn_sin_long[:, :short_input_length, :]) with self.assertRaises(AssertionError): torch.testing.assert_close(yarn_cos_short, original_cos_short) with self.assertRaises(AssertionError): torch.testing.assert_close(yarn_sin_short, original_sin_short) with self.assertRaises(AssertionError): torch.testing.assert_close(yarn_cos_long, original_cos_long) with self.assertRaises(AssertionError): torch.testing.assert_close(yarn_sin_long, original_sin_long) def test_past_key_values_format(self): """ Overwriting to pass the expected cache shapes (Deepseek-V3 uses MLA so the cache shapes are non-standard) """ config, inputs = self.model_tester.prepare_config_and_inputs_for_common() batch_size, seq_length = inputs["input_ids"].shape # difference: last dim k_embed_dim = config.qk_nope_head_dim + config.qk_rope_head_dim v_embed_dim = config.v_head_dim self_attention_keys_shape = (batch_size, config.num_key_value_heads, seq_length, k_embed_dim) self_attention_values_shape = (batch_size, config.num_key_value_heads, seq_length, v_embed_dim) # build the full cache shapes num_hidden_layers = config.num_hidden_layers all_cache_shapes = [[self_attention_keys_shape, self_attention_values_shape] for _ in range(num_hidden_layers)] super().test_past_key_values_format(custom_all_cache_shapes=all_cache_shapes) @require_torch_large_accelerator @slow def test_eager_matches_sdpa_generate(self): """ Overwriting the common test as the test is flaky on tiny models """ max_new_tokens = 30 tokenizer = AutoTokenizer.from_pretrained("bzantium/tiny-deepseek-v3") model_sdpa = DeepseekV3ForCausalLM.from_pretrained( "bzantium/tiny-deepseek-v3", dtype=torch.float16, ).to(torch_device) self.assertTrue(model_sdpa.config._attn_implementation == "sdpa") model_eager = DeepseekV3ForCausalLM.from_pretrained( "bzantium/tiny-deepseek-v3", dtype=torch.float16, attn_implementation="eager", ).to(torch_device) self.assertTrue(model_eager.config._attn_implementation == "eager") texts = [ "hi here's a longer context, getting longer and", "Hello this is a very long sentence my friend, very long for real", "Today I am in Paris and", ] for padding_side in ["left", "right"]: tokenizer.padding_side = padding_side tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(texts, return_tensors="pt", 
padding=True).to(torch_device) res_eager = model_eager.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False) res_sdpa = model_sdpa.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False) with self.subTest(f"{padding_side}"): torch.testing.assert_close( res_eager, res_sdpa, msg=f"\n{tokenizer.batch_decode(res_eager)} \nvs\n{tokenizer.batch_decode(res_sdpa)}", ) @require_torch_gpu def test_flex_attention_with_grads(self): """ Overwriting as the namings/functionality on the attention part are different; for now it's more of a unique model. Original issue is also due to dimensionalities, here specifically due to dims not being a multiple of 2. """ for model_class in self.all_model_classes: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config._attn_implementation = "flex_attention" # Disable dropout config.attention_dropout = 0.0 # Deepseek 3 specific - manipulate nope and adjust calculated total head dim config.qk_nope_head_dim = 16 config.qk_head_dim = config.qk_nope_head_dim + config.qk_rope_head_dim model = model_class(config).to(device=torch_device) self.assertTrue(model.config._attn_implementation == "flex_attention") # Elaborate workaround for encoder-decoder models as some do not specify their main input dummy_inputs = {model.main_input_name: inputs_dict[model.main_input_name].to(torch_device)} if config.is_encoder_decoder: dummy_inputs["decoder_input_ids"] = inputs_dict["decoder_input_ids"].to(torch_device) dummy_inputs["decoder_attention_mask"] = inputs_dict["decoder_attention_mask"].to(torch_device) # If this does not raise an error, the test passes (see https://github.com/huggingface/transformers/pull/35605) _ = model(**dummy_inputs) def test_deepseek_v3_sequence_classification_model(self): config, input_dict = self.model_tester.prepare_config_and_inputs_for_common() config.num_labels = 3 input_ids = input_dict["input_ids"] attention_mask = input_ids.ne(1).to(torch_device) sequence_labels = ids_tensor([self.model_tester.batch_size], self.model_tester.num_labels) model = DeepseekV3ForSequenceClassification(config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=attention_mask, labels=sequence_labels) self.assertEqual(result.logits.shape, (self.model_tester.batch_size, self.model_tester.num_labels)) @require_torch_accelerator class DeepseekV3IntegrationTest(unittest.TestCase): def tearDown(self): # See LlamaIntegrationTest.tearDown(). Can be removed once LlamaIntegrationTest.tearDown() is removed. cleanup(torch_device, gc_collect=False) @slow @require_torch_accelerator @pytest.mark.torch_compile_test @require_read_token def test_compile_static_cache(self): # `torch==2.2` will throw an error on this test (as in other compilation tests), but torch==2.1.2 and torch>2.2 # work as intended. See https://github.com/pytorch/pytorch/issues/121943 if version.parse(torch.__version__) < version.parse("2.3.0"): self.skipTest(reason="This test requires torch >= 2.3 to run.") NUM_TOKENS_TO_GENERATE = 40 # https://github.com/huggingface/transformers/pull/38562#issuecomment-2939209171 # The reason why the output is gibberish is because the testing model bzantium/tiny-deepseek-v3 is not trained # one. Since original DeepSeek-V3 model is too big to debug and test, there was no testing with the original one. 
EXPECTED_TEXT_COMPLETION = [ "Simply put, the theory of relativity states that Frojekecdytesాలు sicʰtinaccianntuala breej的效率和质量的控制lavestock-PraccuraciesOTTensorialoghismos的思路astiomotivityosexualriad TherapeuticsoldtYPEface Kishsatellite-TV", "My favorite all time favorite condiment is ketchup.ieden沟渠係室温 Fryrok般地Segmentation Cycle/physicalwarenkrautempsాలు蹈梗 Mesomac一等asan lethality suspended Causewaydreamswith Fossilsdorfాలు蹈 ChristiansenHOMEbrew", ] prompts = [ "Simply put, the theory of relativity states that ", "My favorite all time favorite condiment is ketchup.", ] tokenizer = AutoTokenizer.from_pretrained("bzantium/tiny-deepseek-v3", pad_token="</s>", padding_side="right") model = DeepseekV3ForCausalLM.from_pretrained( "bzantium/tiny-deepseek-v3", device_map=torch_device, dtype=torch.float16 ) inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device) # Dynamic Cache generated_ids = model.generate(**inputs, max_new_tokens=NUM_TOKENS_TO_GENERATE, do_sample=False) dynamic_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, dynamic_text) # Static Cache generated_ids = model.generate( **inputs, max_new_tokens=NUM_TOKENS_TO_GENERATE, do_sample=False, cache_implementation="static" ) static_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, static_text) # Static Cache + compile model._cache = None # clear cache object, initialized when we pass `cache_implementation="static"` model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True) generated_ids = model.generate( **inputs, max_new_tokens=NUM_TOKENS_TO_GENERATE, do_sample=False, cache_implementation="static" ) static_compiled_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, static_compiled_text)
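# --- Illustrative sketch (not part of the dumped test file above; names below are hypothetical) ---
# `test_past_key_values_format` passes custom cache shapes because DeepSeek-V3 uses Multi-head Latent
# Attention (MLA): cached keys concatenate the "nope" and "rope" head dimensions, while cached values
# use `v_head_dim`. The helper below merely restates that shape arithmetic in plain Python, assuming a
# config object with the same attribute names used in the test above.
def mla_cache_shapes(config, batch_size, seq_length):
    """Return the per-layer (key_shape, value_shape) pairs an MLA-style cache is expected to hold."""
    k_dim = config.qk_nope_head_dim + config.qk_rope_head_dim  # keys: content ("nope") + positional ("rope") parts
    v_dim = config.v_head_dim  # values keep their own head dimension
    key_shape = (batch_size, config.num_key_value_heads, seq_length, k_dim)
    value_shape = (batch_size, config.num_key_value_heads, seq_length, v_dim)
    return [[key_shape, value_shape] for _ in range(config.num_hidden_layers)]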
transformers/tests/models/deepseek_v3/test_modeling_deepseek_v3.py/0
{ "file_path": "transformers/tests/models/deepseek_v3/test_modeling_deepseek_v3.py", "repo_id": "transformers", "token_count": 11382 }
576
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Depth Anything model.""" import unittest import pytest from transformers import DepthAnythingConfig, Dinov2Config from transformers.file_utils import is_torch_available, is_vision_available from transformers.pytorch_utils import is_torch_greater_or_equal_than_2_4 from transformers.testing_utils import require_torch, require_vision, slow, torch_device from transformers.utils.import_utils import get_torch_major_and_minor_version from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import DepthAnythingForDepthEstimation if is_vision_available(): from PIL import Image from transformers import DPTImageProcessor class DepthAnythingModelTester: # Copied from tests.models.dpt.test_modeling_dpt_auto_backbone.DPTModelTester.__init__ def __init__( self, parent, batch_size=2, num_channels=3, image_size=32, patch_size=16, use_labels=True, num_labels=3, is_training=True, hidden_size=4, num_hidden_layers=2, num_attention_heads=2, intermediate_size=8, out_features=["stage1", "stage2"], apply_layernorm=False, reshape_hidden_states=False, neck_hidden_sizes=[2, 2], fusion_hidden_size=6, ): self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.image_size = image_size self.patch_size = patch_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.out_features = out_features self.apply_layernorm = apply_layernorm self.reshape_hidden_states = reshape_hidden_states self.use_labels = use_labels self.num_labels = num_labels self.is_training = is_training self.neck_hidden_sizes = neck_hidden_sizes self.fusion_hidden_size = fusion_hidden_size # DPT's sequence length self.seq_length = (self.image_size // self.patch_size) ** 2 + 1 # Copied from tests.models.dpt.test_modeling_dpt_auto_backbone.DPTModelTester.prepare_config_and_inputs def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) labels = None if self.use_labels: labels = ids_tensor([self.batch_size, self.image_size, self.image_size], self.num_labels) config = self.get_config() return config, pixel_values, labels def get_config(self): return DepthAnythingConfig( backbone_config=self.get_backbone_config(), reassemble_hidden_size=self.hidden_size, patch_size=self.patch_size, neck_hidden_sizes=self.neck_hidden_sizes, fusion_hidden_size=self.fusion_hidden_size, ) # Copied from tests.models.dpt.test_modeling_dpt_auto_backbone.DPTModelTester.get_backbone_config def get_backbone_config(self): return Dinov2Config( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, 
hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, is_training=self.is_training, out_features=self.out_features, reshape_hidden_states=self.reshape_hidden_states, ) # Copied from tests.models.dpt.test_modeling_dpt_auto_backbone.DPTModelTester.create_and_check_for_depth_estimation with DPT->DepthAnything def create_and_check_for_depth_estimation(self, config, pixel_values, labels): config.num_labels = self.num_labels model = DepthAnythingForDepthEstimation(config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual(result.predicted_depth.shape, (self.batch_size, self.image_size, self.image_size)) # Copied from tests.models.dpt.test_modeling_dpt_auto_backbone.DPTModelTester.prepare_config_and_inputs_for_common def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class DepthAnythingModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as Depth Anything does not use input_ids, inputs_embeds, attention_mask and seq_length. """ all_model_classes = (DepthAnythingForDepthEstimation,) if is_torch_available() else () pipeline_model_mapping = {"depth-estimation": DepthAnythingForDepthEstimation} if is_torch_available() else {} test_pruning = False test_resize_embeddings = False test_head_masking = False test_torch_exportable = True test_torch_exportable_strictly = get_torch_major_and_minor_version() != "2.7" def setUp(self): self.model_tester = DepthAnythingModelTester(self) self.config_tester = ConfigTester( self, config_class=DepthAnythingConfig, has_text_modality=False, hidden_size=37, common_properties=["patch_size"], ) def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="Depth Anything with AutoBackbone does not have a base model and hence no input_embeddings") def test_inputs_embeds(self): pass def test_for_depth_estimation(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_depth_estimation(*config_and_inputs) @unittest.skip(reason="Depth Anything does not support training yet") def test_training(self): pass @unittest.skip(reason="Depth Anything does not support training yet") def test_training_gradient_checkpointing(self): pass @unittest.skip(reason="Depth Anything with AutoBackbone does not have a base model and hence no input_embeddings") def test_model_get_set_embeddings(self): pass @unittest.skip( reason="This architecture seems to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecture seems to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass @slow def test_model_from_pretrained(self): model_name = "LiheYoung/depth-anything-small-hf" model = DepthAnythingForDepthEstimation.from_pretrained(model_name) self.assertIsNotNone(model) def test_backbone_selection(self): def _validate_backbone_init(): for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) 
model.eval() # Confirm out_indices propagated to backbone self.assertEqual(len(model.backbone.out_indices), 2) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # Load a timm backbone config.backbone = "resnet18" config.use_pretrained_backbone = True config.use_timm_backbone = True config.backbone_config = None # For transformer backbones we can't set the out_indices or just return the features config.backbone_kwargs = {"out_indices": (-2, -1)} _validate_backbone_init() # Load a HF backbone config.backbone = "facebook/dinov2-small" config.use_pretrained_backbone = True config.use_timm_backbone = False config.backbone_config = None config.backbone_kwargs = {"out_indices": [-2, -1]} _validate_backbone_init() # We will verify our results on an image of cute cats def prepare_img(): image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") return image @require_torch @require_vision @slow class DepthAnythingModelIntegrationTest(unittest.TestCase): def test_inference(self): # -- `relative` depth model -- image_processor = DPTImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf") model = DepthAnythingForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf").to(torch_device) image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth # verify the predicted depth expected_shape = torch.Size([1, 518, 686]) self.assertEqual(predicted_depth.shape, expected_shape) expected_slice = torch.tensor( [[8.8223, 8.6483, 8.6216], [8.3332, 8.6047, 8.7545], [8.6547, 8.6885, 8.7472]], ).to(torch_device) torch.testing.assert_close(predicted_depth[0, :3, :3], expected_slice, rtol=1e-6, atol=1e-6) # -- `metric` depth model -- image_processor = DPTImageProcessor.from_pretrained("depth-anything/depth-anything-V2-metric-indoor-small-hf") model = DepthAnythingForDepthEstimation.from_pretrained( "depth-anything/depth-anything-V2-metric-indoor-small-hf" ).to(torch_device) inputs = image_processor(images=image, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth # verify the predicted depth expected_shape = torch.Size([1, 518, 686]) self.assertEqual(predicted_depth.shape, expected_shape) expected_slice = torch.tensor( [[1.3349, 1.2947, 1.2802], [1.2794, 1.2338, 1.2901], [1.2630, 1.2219, 1.2478]], ).to(torch_device) torch.testing.assert_close(predicted_depth[0, :3, :3], expected_slice, rtol=1e-4, atol=1e-4) @pytest.mark.torch_export_test def test_export(self): for strict in [False, True]: with self.subTest(strict=strict): if strict and get_torch_major_and_minor_version() == "2.7": self.skipTest(reason="`strict=True` is currently failing with torch 2.7.") if not is_torch_greater_or_equal_than_2_4: self.skipTest(reason="This test requires torch >= 2.4 to run.") model = ( DepthAnythingForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf") .to(torch_device) .eval() ) image_processor = DPTImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf") image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) exported_program = torch.export.export( model, args=(inputs["pixel_values"],), strict=strict, ) with torch.no_grad(): eager_outputs = model(**inputs) exported_outputs = exported_program.module().forward(inputs["pixel_values"]) 
self.assertEqual(eager_outputs.predicted_depth.shape, exported_outputs.predicted_depth.shape) self.assertTrue( torch.allclose(eager_outputs.predicted_depth, exported_outputs.predicted_depth, atol=1e-4) )
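# --- Illustrative sketch (not part of the dumped test file above; names below are hypothetical) ---
# The model tester above derives its sequence length as `(image_size // patch_size) ** 2 + 1` (one token
# per patch plus a CLS token), and `create_and_check_for_depth_estimation` expects a dense depth map of
# shape (batch_size, image_size, image_size). A minimal restatement of both, assuming square inputs as in
# the tester defaults.
def depth_anything_test_shapes(batch_size, image_size, patch_size):
    seq_length = (image_size // patch_size) ** 2 + 1  # patch grid + CLS token
    predicted_depth_shape = (batch_size, image_size, image_size)  # one depth value per pixel
    return seq_length, predicted_depth_shape

# With the defaults used above (batch_size=2, image_size=32, patch_size=16):
# depth_anything_test_shapes(2, 32, 16) == (5, (2, 32, 32))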
transformers/tests/models/depth_anything/test_modeling_depth_anything.py/0
{ "file_path": "transformers/tests/models/depth_anything/test_modeling_depth_anything.py", "repo_id": "transformers", "token_count": 5439 }
577
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import unittest from functools import reduce from datasets import load_dataset from transformers.models.efficientloftr import EfficientLoFTRConfig, EfficientLoFTRModel from transformers.testing_utils import ( require_torch, require_vision, set_config_for_less_flaky_test, set_model_for_less_flaky_test, set_model_tester_for_less_flaky_test, slow, torch_device, ) from transformers.utils import cached_property, is_torch_available, is_vision_available from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor if is_torch_available(): import torch from transformers import EfficientLoFTRForKeypointMatching if is_vision_available(): from transformers import AutoImageProcessor class EfficientLoFTRModelTester: def __init__( self, parent, batch_size=2, image_width=80, image_height=60, stage_num_blocks: list[int] = [1, 1, 1], out_features: list[int] = [32, 32, 64], stage_stride: list[int] = [2, 1, 2], q_aggregation_kernel_size: int = 1, kv_aggregation_kernel_size: int = 1, q_aggregation_stride: int = 1, kv_aggregation_stride: int = 1, num_attention_layers: int = 2, num_attention_heads: int = 8, hidden_size: int = 64, coarse_matching_threshold: float = 0.0, fine_kernel_size: int = 2, coarse_matching_border_removal: int = 0, ): self.parent = parent self.batch_size = batch_size self.image_width = image_width self.image_height = image_height self.stage_num_blocks = stage_num_blocks self.out_features = out_features self.stage_stride = stage_stride self.q_aggregation_kernel_size = q_aggregation_kernel_size self.kv_aggregation_kernel_size = kv_aggregation_kernel_size self.q_aggregation_stride = q_aggregation_stride self.kv_aggregation_stride = kv_aggregation_stride self.num_attention_layers = num_attention_layers self.num_attention_heads = num_attention_heads self.hidden_size = hidden_size self.coarse_matching_threshold = coarse_matching_threshold self.coarse_matching_border_removal = coarse_matching_border_removal self.fine_kernel_size = fine_kernel_size def prepare_config_and_inputs(self): # EfficientLoFTR expects a grayscale image as input pixel_values = floats_tensor([self.batch_size, 2, 3, self.image_height, self.image_width]) config = self.get_config() return config, pixel_values def get_config(self): return EfficientLoFTRConfig( stage_num_blocks=self.stage_num_blocks, out_features=self.out_features, stage_stride=self.stage_stride, q_aggregation_kernel_size=self.q_aggregation_kernel_size, kv_aggregation_kernel_size=self.kv_aggregation_kernel_size, q_aggregation_stride=self.q_aggregation_stride, kv_aggregation_stride=self.kv_aggregation_stride, num_attention_layers=self.num_attention_layers, num_attention_heads=self.num_attention_heads, hidden_size=self.hidden_size, coarse_matching_threshold=self.coarse_matching_threshold, coarse_matching_border_removal=self.coarse_matching_border_removal, fine_kernel_size=self.fine_kernel_size, ) def 
create_and_check_model(self, config, pixel_values): model = EfficientLoFTRForKeypointMatching(config=config) model.to(torch_device) model.eval() result = model(pixel_values) maximum_num_matches = result.matches.shape[-1] self.parent.assertEqual( result.keypoints.shape, (self.batch_size, 2, maximum_num_matches, 2), ) self.parent.assertEqual( result.matches.shape, (self.batch_size, 2, maximum_num_matches), ) self.parent.assertEqual( result.matching_scores.shape, (self.batch_size, 2, maximum_num_matches), ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class EfficientLoFTRModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (EfficientLoFTRForKeypointMatching, EfficientLoFTRModel) if is_torch_available() else () test_pruning = False test_resize_embeddings = False test_head_masking = False has_attentions = True def setUp(self): self.model_tester = EfficientLoFTRModelTester(self) self.config_tester = ConfigTester(self, config_class=EfficientLoFTRConfig, has_text_modality=False) def test_config(self): self.config_tester.create_and_test_config_to_json_string() self.config_tester.create_and_test_config_to_json_file() self.config_tester.create_and_test_config_from_and_save_pretrained() self.config_tester.create_and_test_config_with_num_labels() self.config_tester.check_config_can_be_init_without_params() self.config_tester.check_config_arguments_init() @unittest.skip(reason="EfficientLoFTRForKeypointMatching does not use inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="EfficientLoFTRForKeypointMatching does not support input and output embeddings") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="EfficientLoFTRForKeypointMatching does not use feedforward chunking") def test_feed_forward_chunking(self): pass @unittest.skip(reason="EfficientLoFTRForKeypointMatching is not trainable") def test_training(self): pass @unittest.skip(reason="EfficientLoFTRForKeypointMatching is not trainable") def test_training_gradient_checkpointing(self): pass @unittest.skip(reason="EfficientLoFTRForKeypointMatching is not trainable") def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip(reason="EfficientLoFTRForKeypointMatching is not trainable") def test_training_gradient_checkpointing_use_reentrant_false(self): pass @unittest.skip(reason="EfficientLoFTR does not output any loss term in the forward pass") def test_retain_grad_hidden_states_attentions(self): pass def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_hidden_states = 
len(self.model_tester.stage_num_blocks) self.assertEqual(len(hidden_states), expected_num_hidden_states) self.assertListEqual( list(hidden_states[0].shape[-2:]), [self.model_tester.image_height, self.model_tester.image_width], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) def test_attention_outputs(self): def check_attention_output(inputs_dict, config, model_class): config._attn_implementation = "eager" model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions total_stride = reduce(lambda a, b: a * b, config.stage_stride) hidden_size = ( self.model_tester.image_height // total_stride * self.model_tester.image_width // total_stride ) expected_attention_shape = [ self.model_tester.num_attention_heads, hidden_size, hidden_size, ] for i, attention in enumerate(attentions): self.assertListEqual( list(attention.shape[-3:]), expected_attention_shape, ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True check_attention_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_attentions"] config.output_attentions = True check_attention_output(inputs_dict, config, model_class) @slow def test_model_from_pretrained(self): from_pretrained_ids = ["zju-community/efficientloftr"] for model_name in from_pretrained_ids: model = EfficientLoFTRForKeypointMatching.from_pretrained(model_name) self.assertIsNotNone(model) def test_forward_labels_should_be_none(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): model_inputs = self._prepare_for_class(inputs_dict, model_class) # Provide an arbitrary sized Tensor as labels to model inputs model_inputs["labels"] = torch.rand((128, 128)) with self.assertRaises(ValueError) as cm: model(**model_inputs) self.assertEqual(ValueError, cm.exception.__class__) def test_batching_equivalence(self, atol=1e-5, rtol=1e-5): """ This test is overwritten because the model outputs do not contain only regressive values but also keypoint locations. Similarly to the problem discussed about SuperGlue implementation [here](https://github.com/huggingface/transformers/pull/29886#issuecomment-2482752787), the consequence of having different scores for matching, makes the maximum indices differ. These indices are being used to compute the keypoint coordinates. The keypoint coordinates, in the model outputs, are floating point tensors, so the original implementation of this test cover this case. But the resulting tensors may have differences exceeding the relative and absolute tolerance. 
Therefore, similarly to SuperGlue integration test, for the key "keypoints" in the model outputs, we check the number of differences in keypoint coordinates being less than a TODO given number """ def recursive_check(batched_object, single_row_object, model_name, key): if isinstance(batched_object, (list, tuple)): for batched_object_value, single_row_object_value in zip(batched_object, single_row_object): recursive_check(batched_object_value, single_row_object_value, model_name, key) elif isinstance(batched_object, dict): for batched_object_value, single_row_object_value in zip( batched_object.values(), single_row_object.values() ): recursive_check(batched_object_value, single_row_object_value, model_name, key) # do not compare returned loss (0-dim tensor) / codebook ids (int) / caching objects elif batched_object is None or not isinstance(batched_object, torch.Tensor): return elif batched_object.dim() == 0: return # do not compare int or bool outputs as they are mostly computed with max/argmax/topk methods which are # very sensitive to the inputs (e.g. tiny differences may give totally different results) elif not torch.is_floating_point(batched_object): return else: # indexing the first element does not always work # e.g. models that output similarity scores of size (N, M) would need to index [0, 0] slice_ids = [slice(0, index) for index in single_row_object.shape] batched_row = batched_object[slice_ids] if key == "keypoints": batched_row = torch.sum(batched_row, dim=-1) single_row_object = torch.sum(single_row_object, dim=-1) tolerance = 0.02 * single_row_object.shape[-1] self.assertTrue( torch.sum(~torch.isclose(batched_row, single_row_object, rtol=rtol, atol=atol)) < tolerance ) else: self.assertFalse( torch.isnan(batched_row).any(), f"Batched output has `nan` in {model_name} for key={key}" ) self.assertFalse( torch.isinf(batched_row).any(), f"Batched output has `inf` in {model_name} for key={key}" ) self.assertFalse( torch.isnan(single_row_object).any(), f"Single row output has `nan` in {model_name} for key={key}", ) self.assertFalse( torch.isinf(single_row_object).any(), f"Single row output has `inf` in {model_name} for key={key}", ) try: torch.testing.assert_close(batched_row, single_row_object, atol=atol, rtol=rtol) except AssertionError as e: msg = f"Batched and Single row outputs are not equal in {model_name} for key={key}.\n\n" msg += str(e) raise AssertionError(msg) set_model_tester_for_less_flaky_test(self) config, batched_input = self.model_tester.prepare_config_and_inputs_for_common() set_config_for_less_flaky_test(config) for model_class in self.all_model_classes: config.output_hidden_states = True model_name = model_class.__name__ if hasattr(self.model_tester, "prepare_config_and_inputs_for_model_class"): config, batched_input = self.model_tester.prepare_config_and_inputs_for_model_class(model_class) batched_input_prepared = self._prepare_for_class(batched_input, model_class) model = model_class(config).to(torch_device).eval() set_model_for_less_flaky_test(model) batch_size = self.model_tester.batch_size single_row_input = {} for key, value in batched_input_prepared.items(): if isinstance(value, torch.Tensor) and value.shape[0] % batch_size == 0: # e.g. musicgen has inputs of size (bs*codebooks). 
in most cases value.shape[0] == batch_size single_batch_shape = value.shape[0] // batch_size single_row_input[key] = value[:single_batch_shape] else: single_row_input[key] = value with torch.no_grad(): model_batched_output = model(**batched_input_prepared) model_row_output = model(**single_row_input) if isinstance(model_batched_output, torch.Tensor): model_batched_output = {"model_output": model_batched_output} model_row_output = {"model_output": model_row_output} for key in model_batched_output: # DETR starts from zero-init queries to decoder, leading to cos_similarity = `nan` if hasattr(self, "zero_init_hidden_state") and "decoder_hidden_states" in key: model_batched_output[key] = model_batched_output[key][1:] model_row_output[key] = model_row_output[key][1:] recursive_check(model_batched_output[key], model_row_output[key], model_name, key) def prepare_imgs(): dataset = load_dataset("hf-internal-testing/image-matching-test-dataset", split="train") image1 = dataset[0]["image"] image2 = dataset[1]["image"] image3 = dataset[2]["image"] return [[image1, image2], [image3, image2]] @require_torch @require_vision class EfficientLoFTRModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): return AutoImageProcessor.from_pretrained("zju-community/efficientloftr") if is_vision_available() else None @slow def test_inference(self): model = EfficientLoFTRForKeypointMatching.from_pretrained( "zju-community/efficientloftr", attn_implementation="eager" ).to(torch_device) preprocessor = self.default_image_processor images = prepare_imgs() inputs = preprocessor(images=images, return_tensors="pt").to(torch_device) with torch.no_grad(): outputs = model(**inputs, output_hidden_states=True, output_attentions=True) predicted_top10 = torch.topk(outputs.matching_scores[0, 0], k=10) predicted_top10_matches_indices = predicted_top10.indices predicted_top10_matching_scores = predicted_top10.values expected_number_of_matches = 4800 expected_matches_shape = torch.Size((len(images), 2, expected_number_of_matches)) expected_matching_scores_shape = torch.Size((len(images), 2, expected_number_of_matches)) expected_top10_matches_indices = torch.tensor( [3145, 3065, 3143, 3144, 1397, 1705, 3151, 2422, 3066, 2342], dtype=torch.int64, device=torch_device ) expected_top10_matching_scores = torch.tensor( [0.9998, 0.9997, 0.9997, 0.9996, 0.9996, 0.9996, 0.9996, 0.9995, 0.9995, 0.9995], device=torch_device ) self.assertEqual(outputs.matches.shape, expected_matches_shape) self.assertEqual(outputs.matching_scores.shape, expected_matching_scores_shape) torch.testing.assert_close( predicted_top10_matches_indices, expected_top10_matches_indices, rtol=5e-3, atol=5e-3 ) torch.testing.assert_close( predicted_top10_matching_scores, expected_top10_matching_scores, rtol=5e-3, atol=5e-3 )
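# --- Illustrative sketch (not part of the dumped test file above; names below are hypothetical) ---
# `test_attention_outputs` derives the attention map size from the product of the backbone stage strides:
# the coarse feature map has (height // total_stride) * (width // total_stride) positions, and every
# attention tensor ends with (num_heads, positions, positions). A minimal restatement of that arithmetic,
# assuming the same tester defaults as above.
from functools import reduce


def expected_attention_shape(image_height, image_width, stage_stride, num_attention_heads):
    total_stride = reduce(lambda a, b: a * b, stage_stride)  # e.g. [2, 1, 2] -> 4
    positions = (image_height // total_stride) * (image_width // total_stride)
    return [num_attention_heads, positions, positions]


# With the tester defaults (60 x 80 image, strides [2, 1, 2], 8 heads):
# expected_attention_shape(60, 80, [2, 1, 2], 8) == [8, 300, 300]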
transformers/tests/models/efficientloftr/test_modeling_efficientloftr.py/0
{ "file_path": "transformers/tests/models/efficientloftr/test_modeling_efficientloftr.py", "repo_id": "transformers", "token_count": 8953 }
578
# Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch EoMT Image Processor.""" import unittest import numpy as np import requests from datasets import load_dataset from transformers.testing_utils import require_torch, require_vision from transformers.utils import is_torch_available, is_torchvision_available, is_vision_available from ...test_image_processing_common import ImageProcessingTestMixin, prepare_image_inputs if is_torch_available(): import torch if is_vision_available(): from PIL import Image from transformers import EomtImageProcessor if is_torchvision_available(): from transformers import EomtImageProcessorFast from transformers.models.eomt.modeling_eomt import EomtForUniversalSegmentationOutput class EomtImageProcessingTester: def __init__( self, parent, batch_size=7, num_channels=3, min_resolution=30, max_resolution=400, size=None, do_resize=True, do_pad=True, do_normalize=True, image_mean=[0.5, 0.5, 0.5], image_std=[0.5, 0.5, 0.5], num_labels=10, ): self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.min_resolution = min_resolution self.max_resolution = max_resolution self.do_resize = do_resize self.do_pad = do_pad self.size = size if size is not None else {"shortest_edge": 18, "longest_edge": 18} self.do_normalize = do_normalize self.image_mean = image_mean self.image_std = image_std # for the post_process_functions self.batch_size = 2 self.num_queries = 3 self.num_classes = 2 self.height = 18 self.width = 18 self.num_labels = num_labels def prepare_image_processor_dict(self): return { "do_resize": self.do_resize, "size": self.size, "do_normalize": self.do_normalize, "image_mean": self.image_mean, "image_std": self.image_std, "do_pad": self.do_pad, "num_labels": self.num_labels, } def prepare_fake_eomt_outputs(self, batch_size, patch_offsets=None): return EomtForUniversalSegmentationOutput( masks_queries_logits=torch.randn((batch_size, self.num_queries, self.height, self.width)), class_queries_logits=torch.randn((batch_size, self.num_queries, self.num_classes + 1)), patch_offsets=patch_offsets, ) def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False): return prepare_image_inputs( batch_size=self.batch_size, num_channels=self.num_channels, min_resolution=self.min_resolution, max_resolution=self.max_resolution, equal_resolution=equal_resolution, numpify=numpify, torchify=torchify, ) def prepare_semantic_single_inputs(): ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test") example = ds[0] return example["image"], example["map"] def prepare_semantic_batch_inputs(): ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test") return list(ds["image"][:2]), list(ds["map"][:2]) @require_torch @require_vision class EomtImageProcessingTest(ImageProcessingTestMixin, unittest.TestCase): image_processing_class = EomtImageProcessor if is_vision_available() else None fast_image_processing_class = EomtImageProcessorFast if 
is_torchvision_available() else None def setUp(self): super().setUp() self.image_processor_tester = EomtImageProcessingTester(self) self.model_id = "tue-mps/coco_panoptic_eomt_large_640" @property def image_processor_dict(self): return self.image_processor_tester.prepare_image_processor_dict() def test_image_processor_properties(self): for image_processing_class in self.image_processor_list: image_processing = image_processing_class(**self.image_processor_dict) self.assertTrue(hasattr(image_processing, "image_mean")) self.assertTrue(hasattr(image_processing, "image_std")) self.assertTrue(hasattr(image_processing, "do_normalize")) self.assertTrue(hasattr(image_processing, "do_resize")) self.assertTrue(hasattr(image_processing, "size")) self.assertTrue(hasattr(image_processing, "do_rescale")) self.assertTrue(hasattr(image_processing, "rescale_factor")) self.assertTrue(hasattr(image_processing, "resample")) def test_image_processor_from_dict_with_kwargs(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) self.assertEqual(image_processor.size, {"shortest_edge": 18, "longest_edge": 18}) image_processor = self.image_processing_class.from_dict(self.image_processor_dict, size=42) self.assertEqual(image_processor.size, {"shortest_edge": 42}) def test_call_numpy(self): for image_processing_class in self.image_processor_list: # Initialize image_processing image_processing = image_processing_class(**self.image_processor_dict) # create random numpy tensors image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=True, numpify=True) for image in image_inputs: self.assertIsInstance(image, np.ndarray) # Test not batched input encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = (1, 3, 18, 18) self.assertEqual(tuple(encoded_images.shape), expected_output_image_shape) # Test batched encoded_images = image_processing(image_inputs, return_tensors="pt").pixel_values expected_output_image_shape = (2, 3, 18, 18) self.assertEqual(tuple(encoded_images.shape), expected_output_image_shape) @unittest.skip(reason="Not supported") def test_call_numpy_4_channels(self): pass def test_call_pil(self): image_processing = self.image_processing_class(**self.image_processor_dict) image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=True) for image in image_inputs: self.assertIsInstance(image, Image.Image) # Test Non batched input encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = (1, 3, 18, 18) self.assertEqual(tuple(encoded_images.shape), expected_output_image_shape) # Test batched encoded_images = image_processing(image_inputs, return_tensors="pt").pixel_values expected_output_image_shape = (2, 3, 18, 18) self.assertEqual(tuple(encoded_images.shape), expected_output_image_shape) def test_call_pytorch(self): image_processing = self.image_processing_class(**self.image_processor_dict) image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=True, torchify=True) for image in image_inputs: self.assertIsInstance(image, torch.Tensor) encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = (1, 3, 18, 18) self.assertEqual(tuple(encoded_images.shape), expected_output_image_shape) encoded_images = image_processing(image_inputs, return_tensors="pt").pixel_values expected_output_image_shape = (2, 3, 18, 18) 
self.assertEqual(tuple(encoded_images.shape), expected_output_image_shape) def test_slow_fast_equivalence(self): if not self.test_slow_image_processor or not self.test_fast_image_processor: self.skipTest(reason="Skipping slow/fast equivalence test") if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest(reason="Skipping slow/fast equivalence test as one of the image processors is not defined") dummy_image, dummy_map = prepare_semantic_single_inputs() image_processor_slow = self.image_processing_class(**self.image_processor_dict) image_processor_fast = self.fast_image_processing_class(**self.image_processor_dict) image_encoding_slow = image_processor_slow(dummy_image, segmentation_maps=dummy_map, return_tensors="pt") image_encoding_fast = image_processor_fast(dummy_image, segmentation_maps=dummy_map, return_tensors="pt") self.assertTrue(torch.allclose(image_encoding_slow.pixel_values, image_encoding_fast.pixel_values, atol=1e-1)) self.assertLessEqual( torch.mean(torch.abs(image_encoding_slow.pixel_values - image_encoding_fast.pixel_values)).item(), 1e-3 ) # Lets check whether 99.9% of mask_labels values match or not. match_ratio = (image_encoding_slow.mask_labels[0] == image_encoding_fast.mask_labels[0]).float().mean().item() self.assertGreaterEqual(match_ratio, 0.999, "Mask labels do not match between slow and fast image processor.") def test_slow_fast_equivalence_batched(self): if not self.test_slow_image_processor or not self.test_fast_image_processor: self.skipTest(reason="Skipping slow/fast equivalence test") if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest(reason="Skipping slow/fast equivalence test as one of the image processors is not defined") if hasattr(self.image_processor_tester, "do_center_crop") and self.image_processor_tester.do_center_crop: self.skipTest( reason="Skipping as do_center_crop is True and center_crop functions are not equivalent for fast and slow processors" ) dummy_images, dummy_maps = prepare_semantic_batch_inputs() image_processor_slow = self.image_processing_class(**self.image_processor_dict) image_processor_fast = self.fast_image_processing_class(**self.image_processor_dict) encoding_slow = image_processor_slow(dummy_images, segmentation_maps=dummy_maps, return_tensors="pt") encoding_fast = image_processor_fast(dummy_images, segmentation_maps=dummy_maps, return_tensors="pt") self.assertTrue(torch.allclose(encoding_slow.pixel_values, encoding_fast.pixel_values, atol=1e-1)) self.assertLessEqual( torch.mean(torch.abs(encoding_slow.pixel_values - encoding_fast.pixel_values)).item(), 1e-3 ) for idx in range(len(dummy_maps)): match_ratio = (encoding_slow.mask_labels[idx] == encoding_fast.mask_labels[idx]).float().mean().item() self.assertGreaterEqual( match_ratio, 0.999, "Mask labels do not match between slow and fast image processors." ) def test_post_process_semantic_segmentation(self): processor = self.image_processing_class(**self.image_processor_dict) # Set longest_edge to None to test for semantic segmentatiom. processor.size = {"shortest_edge": 18, "longest_edge": None} image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw) inputs = processor(images=image, do_split_image=True, return_tensors="pt") patch_offsets = inputs["patch_offsets"] target_sizes = [image.size[::-1]] # For semantic segmentation, the BS of output is 2 coz, two patches are created for the image. 
outputs = self.image_processor_tester.prepare_fake_eomt_outputs(inputs["pixel_values"].shape[0], patch_offsets) segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes) self.assertEqual(segmentation[0].shape, (image.height, image.width)) def test_post_process_panoptic_segmentation(self): processor = self.image_processing_class(**self.image_processor_dict) image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw) original_sizes = [image.size[::-1], image.size[::-1]] # lets test for batched input of 2 outputs = self.image_processor_tester.prepare_fake_eomt_outputs(2) segmentation = processor.post_process_panoptic_segmentation(outputs, original_sizes) self.assertTrue(len(segmentation) == 2) for el in segmentation: self.assertTrue("segmentation" in el) self.assertTrue("segments_info" in el) self.assertEqual(type(el["segments_info"]), list) self.assertEqual(el["segmentation"].shape, (image.height, image.width)) def test_post_process_instance_segmentation(self): processor = self.image_processing_class(**self.image_processor_dict) image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw) original_sizes = [image.size[::-1], image.size[::-1]] # lets test for batched input of 2 outputs = self.image_processor_tester.prepare_fake_eomt_outputs(2) segmentation = processor.post_process_instance_segmentation(outputs, original_sizes) self.assertTrue(len(segmentation) == 2) for el in segmentation: self.assertTrue("segmentation" in el) self.assertTrue("segments_info" in el) self.assertEqual(type(el["segments_info"]), list) self.assertEqual(el["segmentation"].shape, (image.height, image.width))
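# --- Illustrative sketch (not part of the dumped test file above; the helper name is hypothetical) ---
# The post-processing tests above pass `image.size[::-1]` as target/original sizes: PIL reports
# `image.size` as (width, height), while the post-processing methods expect (height, width), which is
# also the shape asserted on every returned segmentation map.
def target_size_from_pil(image):
    """Turn a PIL image's (width, height) into the (height, width) pair used as a target size."""
    width, height = image.size
    return (height, width)

# For the 640 x 480 COCO image used above this yields (480, 640), matching the
# `(image.height, image.width)` shape checked on each segmentation output.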
transformers/tests/models/eomt/test_image_processing_eomt.py/0
{ "file_path": "transformers/tests/models/eomt/test_image_processing_eomt.py", "repo_id": "transformers", "token_count": 5742 }
579
# coding=utf-8 # Copyright 2025 The LG AI Research and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch EXAONE 4.0 model.""" import unittest import pytest from packaging import version from parameterized import parameterized from transformers import ( AutoTokenizer, Exaone4Config, GenerationConfig, is_torch_available, ) from transformers.testing_utils import ( cleanup, require_flash_attn, require_torch, require_torch_accelerator, slow, torch_device, ) from ...causal_lm_tester import CausalLMModelTest, CausalLMModelTester from ...test_configuration_common import ConfigTester if is_torch_available(): import torch from transformers import ( Exaone4ForCausalLM, Exaone4ForQuestionAnswering, Exaone4ForSequenceClassification, Exaone4ForTokenClassification, Exaone4Model, ) class Exaone4ModelTester(CausalLMModelTester): config_class = Exaone4Config if is_torch_available(): base_model_class = Exaone4Model causal_lm_class = Exaone4ForCausalLM sequence_class = Exaone4ForSequenceClassification token_class = Exaone4ForTokenClassification question_answering_class = Exaone4ForQuestionAnswering @require_torch class Exaone4ModelTest(CausalLMModelTest, unittest.TestCase): all_model_classes = ( ( Exaone4Model, Exaone4ForCausalLM, Exaone4ForSequenceClassification, Exaone4ForQuestionAnswering, Exaone4ForTokenClassification, ) if is_torch_available() else () ) pipeline_model_mapping = ( { "feature-extraction": Exaone4Model, "question-answering": Exaone4ForQuestionAnswering, "text-classification": Exaone4ForSequenceClassification, "text-generation": Exaone4ForCausalLM, "zero-shot": Exaone4ForSequenceClassification, "token-classification": Exaone4ForTokenClassification, } if is_torch_available() else {} ) test_headmasking = False test_pruning = False fx_compatible = False # Broken by attention refactor cc @Cyrilvallez model_tester_class = Exaone4ModelTester model_split_percents = [0.5, 0.6] def setUp(self): self.model_tester = Exaone4ModelTester(self) self.config_tester = ConfigTester(self, config_class=Exaone4Config, hidden_size=37) @unittest.skip("Failing because of unique cache (HybridCache)") def test_model_outputs_equivalence(self, **kwargs): pass @parameterized.expand([("random",), ("same",)]) @pytest.mark.generate @unittest.skip("EXAONE 4.0 has HybridCache which is not compatible with assisted decoding") def test_assisted_decoding_matches_greedy_search(self, assistant_type): pass @unittest.skip("EXAONE 4.0 has HybridCache which is not compatible with assisted decoding") def test_prompt_lookup_decoding_matches_greedy_search(self, assistant_type): pass @pytest.mark.generate @unittest.skip("EXAONE 4.0 has HybridCache which is not compatible with assisted decoding") def test_assisted_decoding_sample(self): pass @unittest.skip("EXAONE 4.0 has HybridCache which is not compatible with dola decoding") def test_dola_decoding_sample(self): pass @unittest.skip("EXAONE 4.0 has HybridCache and doesn't support continue from past kv") def 
test_generate_continue_from_past_key_values(self): pass @unittest.skip("EXAONE 4.0 has HybridCache and doesn't support low_memory generation") def test_beam_search_low_memory(self): pass @unittest.skip("EXAONE 4.0 has HybridCache and doesn't support contrastive generation") def test_contrastive_generate(self): pass @unittest.skip("EXAONE 4.0 has HybridCache and doesn't support contrastive generation") def test_contrastive_generate_dict_outputs_use_cache(self): pass @unittest.skip("EXAONE 4.0 has HybridCache and doesn't support contrastive generation") def test_contrastive_generate_low_memory(self): pass @unittest.skip( "EXAONE 4.0 has HybridCache and doesn't support StaticCache. Though it could, it shouldn't support." ) def test_generate_with_static_cache(self): pass @unittest.skip( "EXAONE 4.0 has HybridCache and doesn't support StaticCache. Though it could, it shouldn't support." ) def test_generate_from_inputs_embeds_with_static_cache(self): pass @unittest.skip( "EXAONE 4.0 has HybridCache and doesn't support StaticCache. Though it could, it shouldn't support." ) def test_generate_continue_from_inputs_embeds(self): pass @unittest.skip("EXAONE 4.0 has HybridCache which auto-compiles. Compile and FA2 don't work together.") def test_eager_matches_fa2_generate(self): pass @unittest.skip( reason="HybridCache can't be gathered because it is not iterable. Adding a simple iter and dumping `distributed_iterator`" " as in Dynamic Cache doesnt work. NOTE: @gante all cache objects would need better compatibility with multi gpu setting" ) def test_multi_gpu_data_parallel_forward(self): pass @require_torch class Exaone4IntegrationTest(unittest.TestCase): TEST_MODEL_ID = "LGAI-EXAONE/EXAONE-4.0-Instruct" # dummy model id def tearDown(self): # TODO (joao): automatic compilation, i.e. compilation when `cache_implementation="static"` is used, leaves # some memory allocated in the cache, which means some object is not being released properly. This causes some # unoptimal memory usage, e.g. after certain tests a 7B model in FP16 no longer fits in a 24GB GPU. # Investigate the root cause.
cleanup(torch_device, gc_collect=True) @slow def test_model_logits(self): input_ids = [405, 7584, 79579, 76636, 2907, 94640, 373] model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map="auto", dtype=torch.float16, attn_implementation="eager" ) input_ids = torch.tensor([input_ids]).to(model.model.embed_tokens.weight.device) with torch.no_grad(): out = model(input_ids).logits.float().cpu() EXPECTED_MEAN = torch.tensor([[13.9380, 12.9951, 12.9442, 10.6576, 11.0901, 12.1466, 9.2482]]) EXPECTED_SLICE = torch.tensor( [ 4.9180, 11.6406, 21.1250, 13.4062, 20.8438, 18.0625, 17.9688, 18.7812, 18.0156, 18.3594, 18.5000, 19.1719, 18.5156, 19.3438, 19.5000, 20.6406, 19.4844, 19.2812, 19.4688, 20.0156, 19.8438, 19.9531, 19.7188, 20.5938, 20.5312, 20.1250, 20.4062, 21.4062, 21.2344, 20.7656, ] ) torch.testing.assert_close(out.mean(-1), EXPECTED_MEAN, atol=1e-2, rtol=1e-2) torch.testing.assert_close(out[0, 0, :30], EXPECTED_SLICE, atol=1e-4, rtol=1e-4) del model cleanup(torch_device, gc_collect=True) @slow def test_model_logits_bf16(self): input_ids = [405, 7584, 79579, 76636, 2907, 94640, 373] model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map="auto", dtype=torch.bfloat16, attn_implementation="eager" ) input_ids = torch.tensor([input_ids]).to(model.model.embed_tokens.weight.device) with torch.no_grad(): out = model(input_ids).logits.float().cpu() EXPECTED_MEAN = torch.tensor([[13.8797, 13.0799, 12.9665, 10.7712, 11.1006, 12.2406, 9.3248]]) EXPECTED_SLICE = torch.tensor( [ 4.8750, 11.6250, 21.0000, 13.3125, 20.8750, 18.0000, 18.0000, 18.7500, 18.0000, 18.3750, 18.5000, 19.1250, 18.5000, 19.3750, 19.5000, 20.6250, 19.5000, 19.2500, 19.5000, 20.0000, 19.8750, 19.8750, 19.7500, 20.6250, 20.5000, 20.1250, 20.3750, 21.3750, 21.2500, 20.7500, ] ) torch.testing.assert_close(out.mean(-1), EXPECTED_MEAN, atol=1e-2, rtol=1e-2) torch.testing.assert_close(out[0, 0, :30], EXPECTED_SLICE, atol=1e-4, rtol=1e-4) del model cleanup(torch_device, gc_collect=True) @slow def test_model_generation(self): EXPECTED_TEXT = "Tell me about the Miracle on the Han river.\n\nThe Miracle on the Han River is a story about the miracle of the Korean War Armistice. The story is told by a Korean soldier who is a witness to the armistice negotiations. He is reluctant to tell the story because he does not want to be a hypocrite, but he feels that everyone should know what really happened.\n\nThe Korean War began on June 25, 1950, when North Korean troops invaded South Korea. Soon the United Nations troops, primarily from South Korea, were in support of the United States. The war was still ongoing when North Korean troops stopped their advance" prompt = "Tell me about the Miracle on the Han river." tokenizer = AutoTokenizer.from_pretrained(self.TEST_MODEL_ID) model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map="auto", dtype=torch.float16, attn_implementation="eager" ) input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.model.embed_tokens.weight.device) # greedy generation outputs generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0) text = tokenizer.decode(generated_ids[0], skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, text) del model cleanup(torch_device, gc_collect=True) @slow def test_model_generation_bf16_sdpa(self): EXPECTED_TEXT = "Tell me about the Miracle on the Han river.\n\nThe Miracle on the Han River is a story about the miracle of the Korean War Armistice.\n\nThe Korean War broke out in 35 years ago in 1950. 
The war was the result of the ideological conflict between the communist north and the capitalist south. The war was brought to a halt in 1953. There was to be peace talks but no peace treaty. As a result of the stalemate the Korean people have neither a peace treaty nor a reunification nor a democratization of Korea. The stalemate of 35 years has produced a people of 70 million" prompt = "Tell me about the Miracle on the Han river." tokenizer = AutoTokenizer.from_pretrained(self.TEST_MODEL_ID) model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map="auto", dtype=torch.bfloat16, attn_implementation="sdpa" ) input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.model.embed_tokens.weight.device) # greedy generation outputs generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0) text = tokenizer.decode(generated_ids[0], skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, text) del model cleanup(torch_device, gc_collect=True) @slow @require_torch_accelerator @require_flash_attn def test_model_generation_long_flash(self): EXPECTED_OUTPUT_TOKEN_IDS = [433, 9055] input_ids = [433, 9055] * 2048 model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map="auto", dtype=torch.float16, attn_implementation="flash_attention_2" ) input_ids = torch.tensor([input_ids]).to(model.model.embed_tokens.weight.device) generated_ids = model.generate(input_ids, max_new_tokens=4, temperature=0) self.assertEqual(EXPECTED_OUTPUT_TOKEN_IDS, generated_ids[0][-2:].tolist()) del model cleanup(torch_device, gc_collect=True) @slow @require_torch_accelerator def test_model_generation_beyond_sliding_window(self): EXPECTED_TEXT_COMPLETION = ( " but I'm not sure if I'm going to be able to see it. I really enjoy the scenery, but I'm not sure if I" ) tokenizer = AutoTokenizer.from_pretrained(self.TEST_MODEL_ID) prompt = "This is a nice place. 
" * 700 + "I really enjoy the scenery," model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map="auto", dtype=torch.float16, attn_implementation="sdpa" ) input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.model.embed_tokens.weight.device) generated_ids = model.generate(input_ids, max_new_tokens=32, temperature=0) text = tokenizer.decode(generated_ids[0, -32:], skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, text) del model cleanup(torch_device, gc_collect=True) @pytest.mark.torch_export_test @slow def test_export_static_cache(self): if version.parse(torch.__version__) < version.parse("2.4.0"): self.skipTest(reason="This test requires torch >= 2.4 to run.") from transformers.integrations.executorch import ( TorchExportableModuleWithStaticCache, convert_and_export_with_cache, ) tokenizer = AutoTokenizer.from_pretrained(self.TEST_MODEL_ID, padding_side="right") EXPECTED_TEXT_COMPLETION = [ "The Deep Learning is 100% free and easy to use.\n\n## How to use Deep Learning?\n\n" ] max_generation_length = tokenizer(EXPECTED_TEXT_COMPLETION, return_tensors="pt", padding=True)[ "input_ids" ].shape[-1] # Load model device = "cpu" dtype = torch.bfloat16 cache_implementation = "static" attn_implementation = "sdpa" batch_size = 1 model = Exaone4ForCausalLM.from_pretrained( self.TEST_MODEL_ID, device_map=device, dtype=dtype, attn_implementation=attn_implementation, generation_config=GenerationConfig( use_cache=True, cache_implementation=cache_implementation, max_length=max_generation_length, cache_config={ "batch_size": batch_size, "max_cache_len": max_generation_length, }, ), ) prompt = ["The Deep Learning is "] prompt_tokens = tokenizer(prompt, return_tensors="pt", padding=True).to(model.device) prompt_token_ids = prompt_tokens["input_ids"] max_new_tokens = max_generation_length - prompt_token_ids.shape[-1] # Static Cache + export exported_program = convert_and_export_with_cache(model) ep_generated_ids = TorchExportableModuleWithStaticCache.generate( exported_program=exported_program, prompt_token_ids=prompt_token_ids, max_new_tokens=max_new_tokens ) ep_generated_text = tokenizer.batch_decode(ep_generated_ids, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, ep_generated_text)
transformers/tests/models/exaone4/test_modeling_exaone4.py/0
{ "file_path": "transformers/tests/models/exaone4/test_modeling_exaone4.py", "repo_id": "transformers", "token_count": 7362 }
580
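For reference, a minimal sketch of the static-cache export flow exercised by test_export_static_cache above: the model is loaded with a GenerationConfig that pins a static cache, exported with convert_and_export_with_cache, and then decoded through TorchExportableModuleWithStaticCache.generate. The checkpoint id and lengths below are placeholders, not values taken from the test, and AutoModelForCausalLM stands in for the specific Exaone4 class.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from transformers.integrations.executorch import (
    TorchExportableModuleWithStaticCache,
    convert_and_export_with_cache,
)

model_id = "any/causal-lm-checkpoint"  # placeholder, not the test's TEST_MODEL_ID
max_length = 64

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="right")
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cpu",
    dtype=torch.bfloat16,
    attn_implementation="sdpa",
    generation_config=GenerationConfig(
        use_cache=True,
        cache_implementation="static",
        max_length=max_length,
        cache_config={"batch_size": 1, "max_cache_len": max_length},
    ),
)

prompt_ids = tokenizer(["The Deep Learning is "], return_tensors="pt", padding=True)["input_ids"]
exported_program = convert_and_export_with_cache(model)  # torch.export with a static KV cache
generated_ids = TorchExportableModuleWithStaticCache.generate(
    exported_program=exported_program,
    prompt_token_ids=prompt_ids,
    max_new_tokens=max_length - prompt_ids.shape[-1],
)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))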
# Copyright 2022 Meta Platforms authors and The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import random import shutil import tempfile import unittest import pytest from transformers import BertTokenizer, BertTokenizerFast from transformers.models.bert.tokenization_bert import VOCAB_FILES_NAMES from transformers.testing_utils import require_vision from transformers.utils import IMAGE_PROCESSOR_NAME, is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_vision_available(): from transformers import FlavaImageProcessor, FlavaProcessor from transformers.models.flava.image_processing_flava import ( FLAVA_CODEBOOK_MEAN, FLAVA_CODEBOOK_STD, FLAVA_IMAGE_MEAN, FLAVA_IMAGE_STD, ) @require_vision class FlavaProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = FlavaProcessor def setUp(self): self.tmpdirname = tempfile.mkdtemp() vocab_tokens = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]", "want", "##want", "##ed", "wa", "un", "runn", "##ing", ",", "low", "lowest"] # fmt: skip self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"]) with open(self.vocab_file, "w", encoding="utf-8") as fp: fp.write("".join([x + "\n" for x in vocab_tokens])) image_processor_map = { "image_mean": FLAVA_IMAGE_MEAN, "image_std": FLAVA_IMAGE_STD, "do_normalize": True, "do_resize": True, "size": 224, "do_center_crop": True, "crop_size": 224, "input_size_patches": 14, "total_mask_patches": 75, "mask_group_max_patches": None, "mask_group_min_patches": 16, "mask_group_min_aspect_ratio": 0.3, "mask_group_max_aspect_ratio": None, "codebook_do_resize": True, "codebook_size": 112, "codebook_do_center_crop": True, "codebook_crop_size": 112, "codebook_do_map_pixels": True, "codebook_do_normalize": True, "codebook_image_mean": FLAVA_CODEBOOK_MEAN, "codebook_image_std": FLAVA_CODEBOOK_STD, } self.image_processor_file = os.path.join(self.tmpdirname, IMAGE_PROCESSOR_NAME) with open(self.image_processor_file, "w", encoding="utf-8") as fp: json.dump(image_processor_map, fp) def get_tokenizer(self, **kwargs): return BertTokenizer.from_pretrained(self.tmpdirname, **kwargs) def get_rust_tokenizer(self, **kwargs): return BertTokenizerFast.from_pretrained(self.tmpdirname, **kwargs) def get_image_processor(self, **kwargs): return FlavaImageProcessor.from_pretrained(self.tmpdirname, **kwargs) def tearDown(self): shutil.rmtree(self.tmpdirname) def test_save_load_pretrained_default(self): tokenizer_slow = self.get_tokenizer() tokenizer_fast = self.get_rust_tokenizer() image_processor = self.get_image_processor() processor_slow = FlavaProcessor(tokenizer=tokenizer_slow, image_processor=image_processor) processor_slow.save_pretrained(self.tmpdirname) processor_slow = FlavaProcessor.from_pretrained(self.tmpdirname, use_fast=False) processor_fast = FlavaProcessor(tokenizer=tokenizer_fast, image_processor=image_processor) processor_fast.save_pretrained(self.tmpdirname) processor_fast = 
FlavaProcessor.from_pretrained(self.tmpdirname) self.assertEqual(processor_slow.tokenizer.get_vocab(), tokenizer_slow.get_vocab()) self.assertEqual(processor_fast.tokenizer.get_vocab(), tokenizer_fast.get_vocab()) self.assertEqual(tokenizer_slow.get_vocab(), tokenizer_fast.get_vocab()) self.assertIsInstance(processor_slow.tokenizer, BertTokenizer) self.assertIsInstance(processor_fast.tokenizer, BertTokenizerFast) self.assertEqual(processor_slow.image_processor.to_json_string(), image_processor.to_json_string()) self.assertEqual(processor_fast.image_processor.to_json_string(), image_processor.to_json_string()) self.assertIsInstance(processor_slow.image_processor, FlavaImageProcessor) self.assertIsInstance(processor_fast.image_processor, FlavaImageProcessor) def test_save_load_pretrained_additional_features(self): processor = FlavaProcessor(tokenizer=self.get_tokenizer(), image_processor=self.get_image_processor()) processor.save_pretrained(self.tmpdirname) tokenizer_add_kwargs = self.get_tokenizer(bos_token="(BOS)", eos_token="(EOS)") image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0) processor = FlavaProcessor.from_pretrained( self.tmpdirname, bos_token="(BOS)", eos_token="(EOS)", do_normalize=False, padding_value=1.0 ) self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab()) self.assertIsInstance(processor.tokenizer, BertTokenizerFast) self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string()) self.assertIsInstance(processor.image_processor, FlavaImageProcessor) def test_image_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = FlavaProcessor(tokenizer=tokenizer, image_processor=image_processor) image_input = self.prepare_image_inputs() input_feat_extract = image_processor(image_input, return_tensors="np") input_processor = processor(images=image_input, return_tensors="np") for key in input_feat_extract: self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) # With rest of the args random.seed(1234) input_feat_extract = image_processor( image_input, return_image_mask=True, return_codebook_pixels=True, return_tensors="np" ) random.seed(1234) input_processor = processor( images=image_input, return_image_mask=True, return_codebook_pixels=True, return_tensors="np" ) for key in input_feat_extract: self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) def test_tokenizer(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = FlavaProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "lower newer" encoded_processor = processor(text=input_str) encoded_tok = tokenizer(input_str) for key in encoded_tok: self.assertListEqual(encoded_tok[key], encoded_processor[key]) def test_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = FlavaProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = "lower newer" image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) self.assertListEqual(list(inputs.keys()), ["input_ids", "token_type_ids", "attention_mask", "pixel_values"]) # add extra args inputs = processor(text=input_str, images=image_input, return_codebook_pixels=True, return_image_mask=True) self.assertListEqual( list(inputs.keys()), [ "input_ids", "token_type_ids", "attention_mask", 
"pixel_values", "codebook_pixel_values", "bool_masked_pos", ], ) # test if it raises when no input is passed with pytest.raises(ValueError): processor() def test_tokenizer_decode(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = FlavaProcessor(tokenizer=tokenizer, image_processor=image_processor) predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]] decoded_processor = processor.batch_decode(predicted_ids) decoded_tok = tokenizer.batch_decode(predicted_ids) self.assertListEqual(decoded_tok, decoded_processor)
transformers/tests/models/flava/test_processing_flava.py/0
{ "file_path": "transformers/tests/models/flava/test_processing_flava.py", "repo_id": "transformers", "token_count": 3705 }
581
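As a usage note on the behaviour asserted above, here is a small, hedged sketch of calling a FLAVA processor directly; the public facebook/flava-full checkpoint is used purely as an example, while the test itself assembles a tiny processor from a temporary directory. Passing return_image_mask and return_codebook_pixels adds bool_masked_pos and codebook_pixel_values to the usual tokenizer outputs and pixel_values.

import numpy as np
from PIL import Image

from transformers import FlavaProcessor

processor = FlavaProcessor.from_pretrained("facebook/flava-full")
image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))

# text + image: tokenizer keys plus pixel_values
inputs = processor(text="lower newer", images=image, return_tensors="pt")
print(sorted(inputs.keys()))

# extra FLAVA-specific outputs used for masked image modeling / codebook targets
inputs = processor(
    text="lower newer",
    images=image,
    return_image_mask=True,
    return_codebook_pixels=True,
    return_tensors="pt",
)
print(sorted(inputs.keys()))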
import unittest import numpy as np from transformers import is_torch_available, is_vision_available from transformers.testing_utils import ( require_torch, require_torchvision, require_vision, ) if is_torch_available() and is_vision_available(): import torch from transformers import FuyuImageProcessor if is_vision_available(): from PIL import Image @require_torch @require_vision @require_torchvision class TestFuyuImageProcessor(unittest.TestCase): def setUp(self): self.size = {"height": 160, "width": 320} self.processor = FuyuImageProcessor(size=self.size, padding_value=1.0) self.batch_size = 3 self.channels = 3 self.height = 300 self.width = 300 self.image_input = torch.rand(self.batch_size, self.channels, self.height, self.width) self.image_patch_dim_h = 30 self.image_patch_dim_w = 30 self.sample_image = np.zeros((450, 210, 3), dtype=np.uint8) self.sample_image_pil = Image.fromarray(self.sample_image) def test_patches(self): expected_num_patches = self.processor.get_num_patches(image_height=self.height, image_width=self.width) patches_final = self.processor.patchify_image(image=self.image_input) assert patches_final.shape[1] == expected_num_patches, ( f"Expected {expected_num_patches} patches, got {patches_final.shape[1]}." ) def test_scale_to_target_aspect_ratio(self): # (h:450, w:210) fitting (160, 320) -> (160, 210*160/450) scaled_image = self.processor.resize(self.sample_image, size=self.size) self.assertEqual(scaled_image.shape[0], 160) self.assertEqual(scaled_image.shape[1], 74) def test_apply_transformation_numpy(self): transformed_image = self.processor.preprocess(self.sample_image).images[0][0] self.assertEqual(transformed_image.shape[1], 160) self.assertEqual(transformed_image.shape[2], 320) def test_apply_transformation_pil(self): transformed_image = self.processor.preprocess(self.sample_image_pil).images[0][0] self.assertEqual(transformed_image.shape[1], 160) self.assertEqual(transformed_image.shape[2], 320)
transformers/tests/models/fuyu/test_image_processing_fuyu.py/0
{ "file_path": "transformers/tests/models/fuyu/test_image_processing_fuyu.py", "repo_id": "transformers", "token_count": 907 }
582
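The numbers in test_scale_to_target_aspect_ratio above follow from plain aspect-ratio-preserving scaling; the small computation below (an illustration, not processor code) reproduces them.

# 450x210 sample image fitted into a 160x320 target: both sides are scaled by the same factor
# so neither exceeds the target, and the results are floored to integers.
h, w = 450, 210
target_h, target_w = 160, 320
scale = min(target_h / h, target_w / w)   # 160/450, about 0.356
print(int(h * scale), int(w * scale))     # 160 74, matching the assertions in the test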
# Copyright 2022 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np from transformers.testing_utils import require_torch, require_vision from transformers.utils import is_torch_available, is_vision_available from ...test_image_processing_common import ImageProcessingTestMixin, prepare_image_inputs if is_torch_available(): import torch if is_vision_available(): from PIL import Image from transformers import GLPNImageProcessor class GLPNImageProcessingTester: def __init__( self, parent, batch_size=7, num_channels=3, image_size=18, min_resolution=30, max_resolution=400, do_resize=True, size_divisor=32, do_rescale=True, ): self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.image_size = image_size self.min_resolution = min_resolution self.max_resolution = max_resolution self.do_resize = do_resize self.size_divisor = size_divisor self.do_rescale = do_rescale def prepare_image_processor_dict(self): return { "do_resize": self.do_resize, "size_divisor": self.size_divisor, "do_rescale": self.do_rescale, } def expected_output_image_shape(self, images): if isinstance(images[0], Image.Image): width, height = images[0].size elif isinstance(images[0], np.ndarray): height, width = images[0].shape[0], images[0].shape[1] else: height, width = images[0].shape[1], images[0].shape[2] height = height // self.size_divisor * self.size_divisor width = width // self.size_divisor * self.size_divisor return self.num_channels, height, width def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False): return prepare_image_inputs( batch_size=self.batch_size, num_channels=self.num_channels, min_resolution=self.min_resolution, max_resolution=self.max_resolution, size_divisor=self.size_divisor, equal_resolution=equal_resolution, numpify=numpify, torchify=torchify, ) @require_torch @require_vision class GLPNImageProcessingTest(ImageProcessingTestMixin, unittest.TestCase): image_processing_class = GLPNImageProcessor if is_vision_available() else None def setUp(self): super().setUp() self.image_processor_tester = GLPNImageProcessingTester(self) @property def image_processor_dict(self): return self.image_processor_tester.prepare_image_processor_dict() def test_image_processor_properties(self): image_processing = self.image_processing_class(**self.image_processor_dict) self.assertTrue(hasattr(image_processing, "do_resize")) self.assertTrue(hasattr(image_processing, "size_divisor")) self.assertTrue(hasattr(image_processing, "resample")) self.assertTrue(hasattr(image_processing, "do_rescale")) def test_call_pil(self): # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random PIL images image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False) for image in image_inputs: self.assertIsInstance(image, Image.Image) # Test not batched input (GLPNImageProcessor doesn't support batching) encoded_images = image_processing(image_inputs[0], 
return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertTrue(tuple(encoded_images.shape) == (1, *expected_output_image_shape)) def test_call_numpy(self): # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random numpy tensors image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, numpify=True) for image in image_inputs: self.assertIsInstance(image, np.ndarray) # Test not batched input (GLPNImageProcessor doesn't support batching) encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertTrue(tuple(encoded_images.shape) == (1, *expected_output_image_shape)) def test_call_pytorch(self): # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random PyTorch tensors image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, torchify=True) for image in image_inputs: self.assertIsInstance(image, torch.Tensor) # Test not batched input (GLPNImageProcessor doesn't support batching) encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertTrue(tuple(encoded_images.shape) == (1, *expected_output_image_shape)) def test_call_numpy_4_channels(self): # Initialize image_processing image_processing = self.image_processing_class(**self.image_processor_dict) # create random numpy tensors self.image_processing_class.num_channels = 4 image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, numpify=True) for image in image_inputs: self.assertIsInstance(image, np.ndarray) # Test not batched input (GLPNImageProcessor doesn't support batching) encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertTrue(tuple(encoded_images.shape) == (1, *expected_output_image_shape)) self.image_processing_class.num_channels = 3
transformers/tests/models/glpn/test_image_processing_glpn.py/0
{ "file_path": "transformers/tests/models/glpn/test_image_processing_glpn.py", "repo_id": "transformers", "token_count": 2663 }
583
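The expected_output_image_shape helper above encodes GLPN's only geometric constraint: output height and width are rounded down to the nearest multiple of size_divisor. A one-line worked example:

size_divisor = 32
height, width = 300, 300
print(height // size_divisor * size_divisor, width // size_divisor * size_divisor)  # 288 288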
# Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch GPTNeoXJapanese model.""" import unittest from transformers import GPTNeoXJapaneseConfig, is_torch_available from transformers.models.gpt_neox_japanese.tokenization_gpt_neox_japanese import GPTNeoXJapaneseTokenizer from transformers.testing_utils import require_torch, slow, torch_device from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor, random_attention_mask from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseModel class GPTNeoXJapaneseModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=True, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_multiple_size=4, hidden_act="gelu", hidden_dropout=0.0, attention_dropout=0.1, weight_tying=True, max_position_embeddings=512, type_vocab_size=16, type_sequence_label_size=2, initializer_range=0.02, num_labels=3, num_choices=4, bos_token_id=1, eos_token_id=0, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_token_type_ids = use_token_type_ids self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_multiple_size = intermediate_multiple_size self.hidden_act = hidden_act self.hidden_dropout = hidden_dropout self.attention_dropout = attention_dropout self.weight_tying = weight_tying self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.num_labels = num_labels self.num_choices = num_choices self.scope = scope self.eos_token_id = eos_token_id self.bos_token_id = bos_token_id def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = random_attention_mask([self.batch_size, self.seq_length]) token_labels = None if self.use_labels: token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) config = self.get_config() return config, input_ids, input_mask, token_labels def get_config(self): return GPTNeoXJapaneseConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_multiple_size=self.intermediate_multiple_size, hidden_act=self.hidden_act, hidden_dropout=self.hidden_dropout, attention_dropout=self.attention_dropout, 
weight_tying=self.weight_tying, max_position_embeddings=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, is_decoder=False, initializer_range=self.initializer_range, eos_token_id=self.eos_token_id, bos_token_id=self.bos_token_id, ) def prepare_config_and_inputs_for_decoder(self): config, input_ids, input_mask, token_labels = self.prepare_config_and_inputs() config.is_decoder = True return config, input_ids, input_mask, token_labels def create_and_check_model(self, config, input_ids, input_mask): model = GPTNeoXJapaneseModel(config=config) model.to(torch_device) model.eval() _ = model(input_ids, attention_mask=input_mask) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_model_as_decoder(self, config, input_ids, input_mask): config.add_cross_attention = True model = GPTNeoXJapaneseModel(config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_for_causal_lm(self, config, input_ids, input_mask, token_labels): model = GPTNeoXJapaneseForCausalLM(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, labels=token_labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_decoder_model_past_large_inputs(self, config, input_ids, input_mask): config.is_decoder = True model = GPTNeoXJapaneseForCausalLM(config=config) model.to(torch_device) model.eval() # first forward pass outputs = model(input_ids, attention_mask=input_mask, use_cache=True) past_key_values = outputs.past_key_values # create hypothetical multiple next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 3), config.vocab_size) next_mask = ids_tensor((self.batch_size, 3), vocab_size=2) # append to next input_ids and next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_attention_mask = torch.cat([input_mask, next_mask], dim=-1) output_from_no_past = model(next_input_ids, attention_mask=next_attention_mask, output_hidden_states=True) output_from_no_past = output_from_no_past["hidden_states"][0] output_from_past = model( next_tokens, attention_mask=next_attention_mask, past_key_values=past_key_values, output_hidden_states=True, )["hidden_states"][0] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach() output_from_past_slice = output_from_past[:, :, random_slice_idx].detach() self.parent.assertTrue(output_from_past_slice.shape[1] == next_tokens.shape[1]) # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, input_mask, token_labels = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": input_mask} return config, inputs_dict @require_torch class GPTNeoXModelJapaneseTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (GPTNeoXJapaneseModel, GPTNeoXJapaneseForCausalLM) if is_torch_available() else () pipeline_model_mapping = ( {"feature-extraction": GPTNeoXJapaneseModel, "text-generation": 
GPTNeoXJapaneseForCausalLM} if is_torch_available() else {} ) test_pruning = False test_missing_keys = False test_model_parallel = False test_head_masking = False def setUp(self): self.model_tester = GPTNeoXJapaneseModelTester(self) self.config_tester = ConfigTester(self, config_class=GPTNeoXJapaneseConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config, input_ids, input_mask, token_labels = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(config, input_ids, input_mask) def test_model_as_decoder(self): config, input_ids, input_mask, token_labels = self.model_tester.prepare_config_and_inputs_for_decoder() self.model_tester.create_and_check_model_as_decoder(config, input_ids, input_mask) def test_model_as_decoder_with_default_input_mask(self): config, input_ids, input_mask, token_labels = self.model_tester.prepare_config_and_inputs_for_decoder() input_mask = None self.model_tester.create_and_check_model_as_decoder(config, input_ids, input_mask) def test_decoder_model_past_large_inputs(self): config, input_ids, input_mask, token_labels = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_decoder_model_past_large_inputs(config, input_ids, input_mask) def test_model_for_causal_lm(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_causal_lm(*config_and_inputs) @slow def test_generation(self): model_id = "abeja/gpt-neox-japanese-2.7b" prompts = ["データサイエンティストとは、", "100年後に必要とされる会社は、", "フルリモートの環境で働くために必要なことは、", "国境の長いトンネルを抜けると", "美味しい日本食といえば、"] # fmt: skip EXPECTED_OUTPUTS = [ "データサイエンティストとは、データを分析し、ビジネスに役立つ知見を導き出す専門家のことです。", "100年後に必要とされる会社は、「人」が中心の会社です。", "フルリモートの環境で働くために必要なことは、「自分の時間をコントロールする」ことです。", "国境の長いトンネルを抜けると、そこは雪国だった。", "美味しい日本食といえば、やっぱりお寿司ですよね。", ] tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained(model_id) model = GPTNeoXJapaneseForCausalLM.from_pretrained(model_id) predicted_outputs = [] for prompt in prompts: input_ids = tokenizer(prompt, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=50) generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) predicted_outputs += generated_string self.assertListEqual(predicted_outputs, EXPECTED_OUTPUTS) @unittest.skip("GPTNeoXJapanese applies bias to attention scores") def test_custom_4d_attention_mask(self): pass
transformers/tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py/0
{ "file_path": "transformers/tests/models/gpt_neox_japanese/test_modeling_gpt_neox_japanese.py", "repo_id": "transformers", "token_count": 4997 }
584
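The create_and_check_decoder_model_past_large_inputs helper above checks KV-cache consistency; the sketch below restates that pattern as a standalone function. It assumes `model` is any causal LM head (such as GPTNeoXJapaneseForCausalLM built from a small config) and compares logits rather than embedding-layer hidden states, so it is a simplification of the test, not a copy of it.

import torch


def check_cache_consistency(model, input_ids, next_tokens, atol=1e-3):
    """Incremental decoding with past_key_values should match a single full forward pass."""
    full_ids = torch.cat([input_ids, next_tokens], dim=-1)
    with torch.no_grad():
        full_logits = model(full_ids).logits
        prefix = model(input_ids, use_cache=True)              # cache the prefix once
        cached_logits = model(next_tokens, past_key_values=prefix.past_key_values).logits
    return torch.allclose(full_logits[:, -next_tokens.shape[1]:], cached_logits, atol=atol)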
# coding=utf-8 # Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch GraniteMoeHybrid model.""" import unittest import pytest from transformers import ( AutoTokenizer, GraniteMoeHybridConfig, is_torch_available, ) from transformers.testing_utils import ( require_torch, require_torch_gpu, slow, torch_device, ) from ...generation.test_utils import GenerationTesterMixin from ...models.bamba.test_modeling_bamba import BambaModelTest, BambaModelTester if is_torch_available(): import torch from transformers import ( GraniteMoeHybridForCausalLM, GraniteMoeHybridModel, ) class GraniteMoeHybridModelTester(BambaModelTester): config_class = GraniteMoeHybridConfig if is_torch_available(): model_class = GraniteMoeHybridModel for_causal_lm_class = GraniteMoeHybridForCausalLM def __init__( self, parent, use_cache=False, shared_intermediate_size=174, layer_types=None, ): super().__init__(parent) self.shared_intermediate_size = shared_intermediate_size self.layer_types = layer_types self.use_cache = use_cache def _update_layer_configs(self): super()._update_layer_configs() # GraniteMoeHybrid uses layer_types instead of attn_layer_indices self.layer_types = ["mamba"] * self.num_hidden_layers for idx in self.attn_layer_indices: self.layer_types[idx] = "attention" def get_config(self): return super().get_config( shared_intermediate_size=self.shared_intermediate_size, layer_types=self.layer_types, ) @require_torch class GraniteMoeHybridModelTest(BambaModelTest, GenerationTesterMixin, unittest.TestCase): model_tester_class = GraniteMoeHybridModelTester all_model_classes = ( ( GraniteMoeHybridModel, GraniteMoeHybridForCausalLM, ) if is_torch_available() else () ) pipeline_model_mapping = ( { "feature-extraction": GraniteMoeHybridModel, "text-generation": GraniteMoeHybridForCausalLM, } if is_torch_available() else {} ) def test_config_requires_mamba_or_attention_layers(self): """Ensure we can't create a config with disallowed layers.""" with pytest.raises(ValueError): GraniteMoeHybridConfig(layer_types=["not allowed!"]) # TODO (@alex-jw-brooks) - update this once the model(s) are out @unittest.skip(reason="GraniteMoeHybrid models are not yet released") @require_torch_gpu class GraniteMoeHybridIntegrationTest(unittest.TestCase): @slow def test_model_logits(self): input_ids = [31390, 631, 4162, 30, 322, 25342, 432, 1875, 43826, 10066, 688, 225] model = GraniteMoeHybridForCausalLM.from_pretrained("ibm-granite/granite-4.0-tiny", device_map="auto") with torch.no_grad(): out = model(torch.tensor([input_ids]).to(torch_device)) # fmt: off # Expected mean on dim = -1 EXPECTED_MEAN = torch.tensor([ [-2.9711, -2.2554, -1.0814, -1.6123, -0.8780, -1.0685, -0.6368, -1.9732, -3.3548, -2.6895, -2.3062, -2.6338] ]) torch.testing.assert_close(EXPECTED_MEAN.to(torch_device), out.logits.float().mean(-1), rtol=1e-2, atol=1e-2) # slicing logits[0, 0, 0:15] EXPECTED_SLICE = torch.tensor([ [4.0662, 5.9547, 3.5803, 3.1306, 4.3211, 3.8902, 4.6438, 8.5434, 7.5865, 
5.1623, 5.2240, 9.2982, 5.9094, 6.8834, 5.7551], ]) # fmt: on self.assertTrue( torch.allclose( EXPECTED_SLICE.to(torch_device), out.logits[0, 0, :15].float(), atol=1e-3, rtol=1e-3, ) ) @slow def test_model_generation(self): EXPECTED_TEXT_COMPLETION = ( "Simply put, the theory of relativity states that 1) time is relative, and 2) space is relative. The first" ) prompt = "Simply put, the theory of relativity states that " tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-4.0-tiny") model = GraniteMoeHybridForCausalLM.from_pretrained("ibm-granite/granite-4.0-tiny", device_map="auto") model_inputs = tokenizer(prompt, return_tensors="pt").to(model.device) # greedy generation outputs generated_ids = model.generate(**model_inputs, max_new_tokens=16, do_sample=False) text = tokenizer.decode(generated_ids[0], skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, text)
transformers/tests/models/granitemoehybrid/test_modeling_granitemoehybrid.py/0
{ "file_path": "transformers/tests/models/granitemoehybrid/test_modeling_granitemoehybrid.py", "repo_id": "transformers", "token_count": 2296 }
585
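A short, hedged illustration of the layer_types convention the GraniteMoeHybrid tester builds: one entry per hidden layer, each either "mamba" or "attention", with any other value rejected (as test_config_requires_mamba_or_attention_layers checks). The layer counts below are illustrative only.

from transformers import GraniteMoeHybridConfig

num_hidden_layers = 4
layer_types = ["mamba"] * num_hidden_layers
layer_types[1] = "attention"  # interleave an attention layer into the mamba stack

config = GraniteMoeHybridConfig(num_hidden_layers=num_hidden_layers, layer_types=layer_types)
print(config.layer_types)     # ['mamba', 'attention', 'mamba', 'mamba']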
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Hiera model.""" import math import unittest from transformers import HieraConfig from transformers.testing_utils import ( require_torch, require_vision, slow, torch_device, ) from transformers.utils import ( cached_property, is_torch_available, is_vision_available, ) from ...test_backbone_common import BackboneTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from torch import nn from transformers import HieraBackbone, HieraForImageClassification, HieraForPreTraining, HieraModel if is_vision_available(): from PIL import Image from transformers import AutoImageProcessor class HieraModelTester: def __init__( self, parent, batch_size=13, image_size=[64, 64], mlp_ratio=1.0, num_channels=3, depths=[1, 1, 1, 1], patch_stride=[4, 4], patch_size=[7, 7], patch_padding=[3, 3], masked_unit_size=[8, 8], num_heads=[1, 1, 1, 1], embed_dim_multiplier=2.0, is_training=True, use_labels=True, embed_dim=8, hidden_act="gelu", decoder_hidden_size=2, decoder_depth=1, decoder_num_heads=1, initializer_range=0.02, scope=None, type_sequence_label_size=10, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.mlp_ratio = mlp_ratio self.num_channels = num_channels self.depths = depths self.patch_stride = patch_stride self.patch_size = patch_size self.patch_padding = patch_padding self.masked_unit_size = masked_unit_size self.num_heads = num_heads self.embed_dim_multiplier = embed_dim_multiplier self.is_training = is_training self.use_labels = use_labels self.embed_dim = embed_dim self.hidden_act = hidden_act self.decoder_hidden_size = decoder_hidden_size self.decoder_depth = decoder_depth self.decoder_num_heads = decoder_num_heads self.initializer_range = initializer_range self.scope = scope self.type_sequence_label_size = type_sequence_label_size def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size[0], self.image_size[1]]) labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.type_sequence_label_size) config = self.get_config() return config, pixel_values, labels def get_config(self): return HieraConfig( embed_dim=self.embed_dim, image_size=self.image_size, patch_stride=self.patch_stride, patch_size=self.patch_size, patch_padding=self.patch_padding, masked_unit_size=self.masked_unit_size, mlp_ratio=self.mlp_ratio, num_channels=self.num_channels, depths=self.depths, num_heads=self.num_heads, embed_dim_multiplier=self.embed_dim_multiplier, hidden_act=self.hidden_act, decoder_hidden_size=self.decoder_hidden_size, decoder_depth=self.decoder_depth, decoder_num_heads=self.decoder_num_heads, initializer_range=self.initializer_range, ) def create_and_check_model(self, config, pixel_values, 
labels): model = HieraModel(config=config) model.to(torch_device) model.eval() result = model(pixel_values) tokens_spatial_shape = [i // s for i, s in zip(self.image_size, config.patch_stride)] expected_seq_len = math.prod(tokens_spatial_shape) // math.prod(config.query_stride) ** (config.num_query_pool) expected_dim = int(config.embed_dim * config.embed_dim_multiplier ** (len(config.depths) - 1)) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, expected_seq_len, expected_dim)) def create_and_check_backbone(self, config, pixel_values, labels): model = HieraBackbone(config=config) model.to(torch_device) model.eval() result = model(pixel_values) # verify hidden states self.parent.assertEqual(len(result.feature_maps), len(config.out_features)) num_patches = config.image_size[0] // config.patch_stride[0] // config.masked_unit_size[0] self.parent.assertListEqual( list(result.feature_maps[0].shape), [self.batch_size, model.channels[0], num_patches, num_patches] ) # verify channels self.parent.assertEqual(len(model.channels), len(config.out_features)) # verify backbone works with out_features=None config.out_features = None model = HieraBackbone(config=config) model.to(torch_device) model.eval() result = model(pixel_values) # verify feature maps self.parent.assertEqual(len(result.feature_maps), 1) self.parent.assertListEqual( list(result.feature_maps[0].shape), [self.batch_size, model.channels[-1], num_patches, num_patches] ) # verify channels self.parent.assertEqual(len(model.channels), 1) def create_and_check_for_pretraining(self, config, pixel_values, labels): model = HieraForPreTraining(config=config) model.to(torch_device) model.eval() result = model(pixel_values) pred_stride = config.patch_stride[-1] * (config.query_stride[-1] ** config.num_query_pool) num_patches = self.image_size[0] // pred_stride self.parent.assertEqual( result.logits.shape, (self.batch_size, num_patches**2, self.num_channels * pred_stride**2) ) # test greyscale images config.num_channels = 1 model = HieraForPreTraining(config) model.to(torch_device) model.eval() pixel_values = floats_tensor([self.batch_size, 1, self.image_size[0], self.image_size[0]]) result = model(pixel_values) self.parent.assertEqual(result.logits.shape, (self.batch_size, num_patches**2, pred_stride**2)) def create_and_check_for_image_classification(self, config, pixel_values, labels): config.num_labels = self.type_sequence_label_size model = HieraForImageClassification(config) model.to(torch_device) model.eval() result = model(pixel_values, labels=labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.type_sequence_label_size)) # test greyscale images config.num_channels = 1 model = HieraForImageClassification(config) model.to(torch_device) model.eval() pixel_values = floats_tensor([self.batch_size, 1, self.image_size[0], self.image_size[0]]) result = model(pixel_values) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.type_sequence_label_size)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, pixel_values, labels, ) = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class HieraModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as Hiera does not use input_ids, inputs_embeds, attention_mask and seq_length. 
""" all_model_classes = ( ( HieraModel, HieraBackbone, HieraForImageClassification, HieraForPreTraining, ) if is_torch_available() else () ) pipeline_model_mapping = ( {"image-feature-extraction": HieraModel, "image-classification": HieraForImageClassification} if is_torch_available() else {} ) fx_compatible = True test_pruning = False test_resize_embeddings = False test_head_masking = False test_torch_exportable = True def setUp(self): self.model_tester = HieraModelTester(self) self.config_tester = ConfigTester(self, config_class=HieraConfig, has_text_modality=False) def test_config(self): self.config_tester.create_and_test_config_to_json_string() self.config_tester.create_and_test_config_to_json_file() self.config_tester.create_and_test_config_from_and_save_pretrained() self.config_tester.create_and_test_config_with_num_labels() self.config_tester.check_config_can_be_init_without_params() self.config_tester.check_config_arguments_init() def test_batching_equivalence(self, atol=3e-4, rtol=3e-4): super().test_batching_equivalence(atol=atol, rtol=rtol) # Overriding as Hiera `get_input_embeddings` returns HieraPatchEmbeddings def test_model_get_set_embeddings(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) # Overriding as attention shape depends on patch_stride and mask_unit_size def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class._from_config(config, attn_implementation="eager") config = model.config model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions expected_num_attentions = len(self.model_tester.depths) self.assertEqual(len(attentions), expected_num_attentions) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True seq_len = math.prod([i // s for i, s in zip(config.image_size, config.patch_stride)]) mask_unit_area = math.prod(config.masked_unit_size) num_windows = seq_len // mask_unit_area if model_class.__name__ == "HieraForPreTraining": num_windows = int(num_windows * (1 - config.mask_ratio)) seq_len = int(num_windows * mask_unit_area) model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), expected_num_attentions) self.assertListEqual( list(attentions[0].shape[-4:]), [self.model_tester.num_heads[0], num_windows, mask_unit_area, seq_len // num_windows], ) out_len = len(outputs) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) # also another +1 for reshaped_hidden_states added_hidden_states = 1 if model_class.__name__ == "HieraBackbone" else 2 self.assertEqual(out_len + added_hidden_states, len(outputs)) 
self_attentions = outputs.attentions self.assertEqual(len(self_attentions), expected_num_attentions) self.assertListEqual( list(self_attentions[0].shape[-4:]), [self.model_tester.num_heads[0], num_windows, mask_unit_area, seq_len // num_windows], ) # Overriding as attention shape depends on patch_stride and mask_unit_size def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class, image_size): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", len(self.model_tester.depths) + 1 ) self.assertEqual(len(hidden_states), expected_num_layers) # Hiera has a different seq_length patch_size = config.patch_stride num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) if model_class.__name__ == "HieraForPreTraining": mask_unit_area = math.prod(config.masked_unit_size) num_windows = num_patches // mask_unit_area num_windows = int(num_windows * (1 - config.mask_ratio)) num_patches = int(num_windows * mask_unit_area) self.assertListEqual( list(hidden_states[0].shape[-2:]), [num_patches, self.model_tester.embed_dim], ) if model_class.__name__ != "HieraBackbone": reshaped_hidden_states = outputs.reshaped_hidden_states self.assertEqual(len(reshaped_hidden_states), expected_num_layers) batch_size = reshaped_hidden_states[0].shape[0] num_channels = reshaped_hidden_states[0].shape[-1] reshaped_hidden_states = reshaped_hidden_states[0].view(batch_size, -1, num_channels) self.assertListEqual( list(reshaped_hidden_states.shape[-2:]), [num_patches, self.model_tester.embed_dim], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() image_size = self.model_tester.image_size for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class, image_size) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class, image_size) # Overriding since HieraForPreTraining outputs bool_masked_pos which has to be converted to float in the msg def test_model_outputs_equivalence(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() def set_nan_tensor_to_zero(t): t[t != t] = 0 return t def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}): with torch.no_grad(): tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs) dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple() def recursive_check(tuple_object, dict_object): if isinstance(tuple_object, (list, tuple)): for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object): recursive_check(tuple_iterable_value, dict_iterable_value) elif isinstance(tuple_object, dict): for tuple_iterable_value, dict_iterable_value in zip( tuple_object.values(), dict_object.values() ): recursive_check(tuple_iterable_value, dict_iterable_value) elif tuple_object is None: return else: self.assertTrue( torch.allclose( set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5 ), msg=( "Tuple and dict output are not equal. Difference:" f" {torch.max(torch.abs(tuple_object.float() - dict_object.float()))}. 
Tuple has `nan`:" f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has" f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}." ), ) recursive_check(tuple_output, dict_output) for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() additional_kwargs = {} tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) additional_kwargs["output_hidden_states"] = True check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) if self.has_attentions: # Removing "output_hidden_states" del additional_kwargs["output_hidden_states"] tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) additional_kwargs["output_attentions"] = True check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) additional_kwargs["output_hidden_states"] = True check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs) @unittest.skip(reason="Hiera Transformer does not use feedforward chunking") def test_feed_forward_chunking(self): pass @unittest.skip(reason="Hiera does not use inputs_embeds") def test_inputs_embeds(self): pass def test_model_common_attributes(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_backbone(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_backbone(*config_and_inputs) def test_for_pretraining(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_pretraining(*config_and_inputs) def test_for_image_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_image_classification(*config_and_inputs) @slow def test_model_from_pretrained(self): for model_name in ["facebook/hiera-tiny-224-hf"]: model = HieraModel.from_pretrained(model_name) self.assertIsNotNone(model) # We 
will verify our results on an image of cute cats def prepare_img(): image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") return image @require_torch @require_vision class HieraModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): return AutoImageProcessor.from_pretrained("facebook/hiera-tiny-224-in1k-hf") if is_vision_available() else None @slow def test_inference_image_classification_head(self): model = HieraForImageClassification.from_pretrained("facebook/hiera-tiny-224-in1k-hf").to(torch_device) image_processor = self.default_image_processor image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) expected_pixel_values = torch.tensor( [ [[0.2967, 0.4679, 0.4508], [0.3309, 0.4337, 0.3309], [0.3309, 0.3823, 0.3309]], [[-1.5455, -1.4930, -1.5455], [-1.5280, -1.4755, -1.5980], [-1.5630, -1.5280, -1.4755]], [[-0.6367, -0.4973, -0.5321], [-0.7936, -0.6715, -0.6715], [-0.8284, -0.7413, -0.5670]], ] ).to(torch_device) torch.testing.assert_close(inputs.pixel_values[0, :3, :3, :3], expected_pixel_values, rtol=1e-4, atol=1e-4) # forward pass with torch.no_grad(): outputs = model(**inputs) # verify the logits expected_shape = torch.Size((1, 1000)) self.assertEqual(outputs.logits.shape, expected_shape) expected_slice = torch.tensor([[0.8028, 0.2409, -0.2254, -0.3712, -0.2848]]).to(torch_device) torch.testing.assert_close(outputs.logits[0, :5], expected_slice, rtol=1e-4, atol=1e-4) def test_inference_interpolate_pos_encoding(self): model = HieraModel.from_pretrained("facebook/hiera-tiny-224-hf").to(torch_device) image_processor = AutoImageProcessor.from_pretrained( "facebook/hiera-tiny-224-hf", size={"shortest_edge": 448}, crop_size={"height": 448, "width": 448} ) image = prepare_img() inputs = image_processor(images=image, return_tensors="pt") pixel_values = inputs.pixel_values.to(torch_device) # forward pass with torch.no_grad(): outputs = model(pixel_values, interpolate_pos_encoding=True) # verify the logits expected_shape = torch.Size((1, 196, 768)) self.assertEqual(outputs.last_hidden_state.shape, expected_shape) expected_slice = torch.tensor( [[1.7853, 0.0690, 0.3177], [2.6853, -0.2334, 0.0889], [1.5445, -0.1515, -0.0300]] ).to(torch_device) torch.testing.assert_close(outputs.last_hidden_state[0, :3, :3], expected_slice, rtol=1e-4, atol=1e-4) @slow def test_inference_for_pretraining(self): # make random mask reproducible torch.manual_seed(2) model = HieraForPreTraining.from_pretrained("facebook/hiera-tiny-224-mae-hf").to(torch_device) image_processor = self.default_image_processor image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) config = model.config mask_spatial_shape = [ i // s // ms for i, s, ms in zip(config.image_size, config.patch_stride, config.masked_unit_size) ] num_windows = math.prod(mask_spatial_shape) noise = torch.rand(1, num_windows).to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs, noise=noise) # verify the logits expected_shape = torch.Size((1, 196, 768)) self.assertEqual(outputs.logits.shape, expected_shape) expected_slice = torch.tensor( [ [1.6407, 1.6506, 1.6541, 1.6617, 1.6703], [1.9730, 1.9842, 1.9848, 1.9896, 1.9947], [1.5949, 1.8262, 1.2602, 1.4801, 1.4448], [1.2341, 1.7907, 0.8618, 1.5202, 1.4523], [2.0140, 1.9846, 1.9434, 1.9019, 1.8648], ] ) torch.testing.assert_close(outputs.logits[0, :5, :5], expected_slice.to(torch_device), rtol=1e-4, atol=1e-4) @require_torch class 
HieraBackboneTest(unittest.TestCase, BackboneTesterMixin): all_model_classes = (HieraBackbone,) if is_torch_available() else () config_class = HieraConfig def setUp(self): self.model_tester = HieraModelTester(self)
transformers/tests/models/hiera/test_modeling_hiera.py/0
{ "file_path": "transformers/tests/models/hiera/test_modeling_hiera.py", "repo_id": "transformers", "token_count": 12143 }
586
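The attention-shape assertions in the Hiera tests above rest on a small amount of arithmetic over the tester's tiny config; the snippet below (illustrative only) spells it out.

import math

image_size = [64, 64]        # HieraModelTester defaults
patch_stride = [4, 4]
masked_unit_size = [8, 8]

seq_len = math.prod(i // s for i, s in zip(image_size, patch_stride))  # 16 * 16 = 256 tokens
mask_unit_area = math.prod(masked_unit_size)                           # 64 tokens per mask unit
num_windows = seq_len // mask_unit_area                                # 4 windows
print(seq_len, mask_unit_area, num_windows)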
# Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import shutil import tempfile import unittest from io import BytesIO import requests from transformers import Idefics2Processor from transformers.testing_utils import require_torch, require_vision from transformers.utils import is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_vision_available(): from PIL import Image from transformers import ( AutoProcessor, Idefics2Processor, ) @require_torch @require_vision class Idefics2ProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = Idefics2Processor @classmethod def setUpClass(cls): cls.tmpdirname = tempfile.mkdtemp() processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b", image_seq_len=2) processor.save_pretrained(cls.tmpdirname) cls.image1 = Image.open( BytesIO( requests.get( "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" ).content ) ) cls.image2 = Image.open( BytesIO(requests.get("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg").content) ) cls.image3 = Image.open( BytesIO( requests.get( "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg" ).content ) ) cls.bos_token = processor.tokenizer.bos_token cls.image_token = processor.image_token cls.fake_image_token = processor.fake_image_token cls.bos_token_id = processor.tokenizer.convert_tokens_to_ids(cls.bos_token) cls.image_token_id = processor.tokenizer.convert_tokens_to_ids(cls.image_token) cls.fake_image_token_id = processor.tokenizer.convert_tokens_to_ids(cls.fake_image_token) cls.image_seq_len = processor.image_seq_len def get_tokenizer(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).tokenizer def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def get_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs) @staticmethod def prepare_processor_dict(): return {"image_seq_len": 2} @classmethod def tearDownClass(cls): shutil.rmtree(cls.tmpdirname, ignore_errors=True) def test_process_interleaved_images_prompts_no_image_splitting(self): tokenizer = self.get_tokenizer() processor = self.get_processor() processor.image_processor.do_image_splitting = False # Test that a single image is processed correctly inputs = processor(images=self.image1) self.assertEqual(inputs["pixel_values"].shape, (1, 1, 3, 653, 980)) self.assertEqual(inputs["pixel_attention_mask"].shape, (1, 1, 653, 980)) # fmt: on # Test a single sample with image and text image_str = "<image>" text_str = "In this image, we see" text = image_str + text_str inputs = processor(text=text, images=self.image1) # fmt: off tokenized_sentence = tokenizer(text_str, add_special_tokens=False) expected_input_ids = [[self.bos_token_id] + [self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len + [self.fake_image_token_id] + 
tokenized_sentence["input_ids"]] self.assertEqual(inputs["input_ids"], expected_input_ids) self.assertEqual(inputs["attention_mask"], [[1] * len(expected_input_ids[0])]) self.assertEqual(inputs["pixel_values"].shape, (1, 1, 3, 653, 980)) self.assertEqual(inputs["pixel_attention_mask"].shape, (1, 1, 653, 980)) # fmt: on # Test that batch is correctly processed image_str = "<image>" text_str_1 = "In this image, we see" text_str_2 = "bla, bla" text = [ image_str + text_str_1, text_str_2 + image_str + image_str, ] images = [[self.image1], [self.image2, self.image3]] inputs = processor(text=text, images=images, padding=True) # fmt: off tokenized_sentence_1 = tokenizer(text_str_1, add_special_tokens=False) tokenized_sentence_2 = tokenizer(text_str_2, add_special_tokens=False) expected_input_ids_1 = [self.bos_token_id] + [self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len + [self.fake_image_token_id] + tokenized_sentence_1["input_ids"] expected_input_ids_2 = [self.bos_token_id] + tokenized_sentence_2["input_ids"] + [self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len + [self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len + [self.fake_image_token_id] # Pad the first input to match the second input pad_len = len(expected_input_ids_2) - len(expected_input_ids_1) padded_expected_input_ids_1 = [0] * pad_len + expected_input_ids_1 self.assertEqual( inputs["input_ids"], [padded_expected_input_ids_1, expected_input_ids_2] ) self.assertEqual( inputs["attention_mask"], [[0] * pad_len + [1] * len(expected_input_ids_1), [1] * len(expected_input_ids_2)] ) self.assertEqual(inputs['pixel_values'].shape, (2, 2, 3, 767, 980)) self.assertEqual(inputs['pixel_attention_mask'].shape, (2, 2, 767, 980)) # fmt: on def test_process_interleaved_images_prompts_image_splitting(self): processor = self.get_processor() tokenizer = self.get_tokenizer() processor.image_processor.do_image_splitting = True # Test that a single image is processed correctly inputs = processor(images=self.image1) self.assertEqual(inputs["pixel_values"].shape, (1, 5, 3, 653, 980)) self.assertEqual(inputs["pixel_attention_mask"].shape, (1, 5, 653, 980)) # fmt: on # Test a single sample with image and text image_str = "<image>" text_str = "In this image, we see" text = image_str + text_str inputs = processor(text=text, images=self.image1) # fmt: off tokenized_sentence = tokenizer(text_str, add_special_tokens=False) expected_input_ids = [[self.bos_token_id] + ([self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len) * 5 + [self.fake_image_token_id] + tokenized_sentence["input_ids"]] self.assertEqual(inputs["input_ids"], expected_input_ids) self.assertEqual(inputs["attention_mask"], [[1] * len(expected_input_ids[0])]) self.assertEqual(inputs["pixel_values"].shape, (1, 5, 3, 653, 980)) self.assertEqual(inputs["pixel_attention_mask"].shape, (1, 5, 653, 980)) # fmt: on # Test that batch is correctly processed image_str = "<image>" text_str_1 = "In this image, we see" text_str_2 = "bla, bla" text = [ image_str + text_str_1, text_str_2 + image_str + image_str, ] images = [[self.image1], [self.image2, self.image3]] inputs = processor(text=text, images=images, padding=True) # fmt: off tokenized_sentence_1 = tokenizer(text_str_1, add_special_tokens=False) tokenized_sentence_2 = tokenizer(text_str_2, add_special_tokens=False) expected_input_ids_1 = [self.bos_token_id] + ([self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len) * 5 + [self.fake_image_token_id] + 
tokenized_sentence_1["input_ids"] expected_input_ids_2 = [self.bos_token_id] + tokenized_sentence_2["input_ids"] + ([self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len) * 5 + ([self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len) * 5 + [self.fake_image_token_id] # Pad the first input to match the second input pad_len = len(expected_input_ids_2) - len(expected_input_ids_1) padded_expected_input_ids_1 = [0] * pad_len + expected_input_ids_1 self.assertEqual( inputs["input_ids"], [padded_expected_input_ids_1, expected_input_ids_2] ) self.assertEqual( inputs["attention_mask"], [[0] * pad_len + [1] * len(expected_input_ids_1), [1] * len(expected_input_ids_2)] ) self.assertEqual(inputs['pixel_values'].shape, (2, 10, 3, 767, 980)) self.assertEqual(inputs['pixel_attention_mask'].shape, (2, 10, 767, 980)) # fmt: on def test_add_special_tokens_processor(self): processor = self.get_processor() tokenizer = self.get_tokenizer() image_str = "<image>" text_str = "In this image, we see" text = text_str + image_str n_image_repeat = 5 if processor.image_processor.do_image_splitting else 1 # fmt: off inputs = processor(text=text, images=self.image1, add_special_tokens=False) tokenized_sentence = tokenizer(text_str, add_special_tokens=False) expected_input_ids = [tokenized_sentence["input_ids"] + ([self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len) * n_image_repeat + [self.fake_image_token_id]] self.assertEqual(inputs["input_ids"], expected_input_ids) inputs = processor(text=text, images=self.image1) expected_input_ids = [[self.bos_token_id] + tokenized_sentence["input_ids"] + ([self.fake_image_token_id] + [self.image_token_id] * self.image_seq_len) * n_image_repeat + [self.fake_image_token_id]] self.assertEqual(inputs["input_ids"], expected_input_ids) # fmt: on def test_non_nested_images_with_batched_text(self): processor = self.get_processor() processor.image_processor.do_image_splitting = False image_str = "<image>" text_str_1 = "In this image, we see" text_str_2 = "bla, bla" text = [ image_str + text_str_1, text_str_2 + image_str + image_str, ] images = [self.image1, self.image2, self.image3] inputs = processor(text=text, images=images, padding=True) self.assertEqual(inputs["pixel_values"].shape, (2, 2, 3, 767, 980)) self.assertEqual(inputs["pixel_attention_mask"].shape, (2, 2, 767, 980)) def test_process_interleaved_images_prompts_image_error(self): processor = self.get_processor() text = [ "This is a test sentence.", "In this other sentence we try some good things", ] images = [[self.image1], [self.image2]] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [[self.image1], []] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) text = [ "This is a test sentence.<image>", "In this other sentence we try some good things<image>", ] images = [[self.image1], [self.image2, self.image3]] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [[], [self.image2]] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [self.image1, self.image2, self.image3] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [self.image1] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) text = [ "This is a test sentence.", "In this other sentence we try some good things<image>", ] images = [[self.image1], []] with 
self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [[], [self.image2]] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [self.image1, self.image2] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) images = [self.image1] with self.assertRaises(ValueError): processor(text=text, images=images, padding=True) def test_apply_chat_template(self): # Message contains content which a mix of lists with images and image urls and string messages = [ { "role": "user", "content": [ {"type": "text", "text": "What do these images show?"}, {"type": "image"}, {"type": "image"}, "What do these images show?", ], }, { "role": "assistant", "content": [ { "type": "text", "text": "The first image shows the statue of Liberty in New York. The second image picture depicts Idefix, the dog of Obelix in Asterix and Obelix.", } ], }, {"role": "user", "content": [{"type": "text", "text": "And who is that?"}]}, ] processor = self.get_processor() # Make short sequence length to test that the fake tokens are added correctly rendered = processor.apply_chat_template(messages, add_generation_prompt=True) expected_rendered = ( "User: What do these images show?<image><image><end_of_utterance>\n" "Assistant: The first image shows the statue of Liberty in New York. The second image picture depicts Idefix, the dog of Obelix in Asterix and Obelix.<end_of_utterance>\n" "User: And who is that?<end_of_utterance>\n" "Assistant:" ) self.assertEqual(rendered, expected_rendered)
# End of file: transformers/tests/models/idefics2/test_processing_idefics2.py
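# For orientation, a minimal usage sketch of the processor exercised by the tests above.
# It mirrors the calls made in Idefics2ProcessorTest (same checkpoint, same "<image>"
# placeholder convention); treat it as an illustrative example, not part of the test file.
import requests
from io import BytesIO

from PIL import Image
from transformers import Idefics2Processor

processor = Idefics2Processor.from_pretrained("HuggingFaceM4/idefics2-8b", image_seq_len=2)
image = Image.open(
    BytesIO(
        requests.get(
            "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
        ).content
    )
)

# "<image>" marks where the image tokens are expanded inside the prompt.
inputs = processor(text="<image>In this image, we see", images=image, return_tensors="pt")
print(inputs["input_ids"].shape, inputs["pixel_values"].shape)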
# Copyright 2024 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch InstructBlipVideo model.""" import inspect import tempfile import unittest import numpy as np import pytest from huggingface_hub import hf_hub_download from transformers import ( CONFIG_MAPPING, InstructBlipVideoConfig, InstructBlipVideoProcessor, InstructBlipVideoQFormerConfig, InstructBlipVideoVisionConfig, ) from transformers.testing_utils import ( require_accelerate, require_bitsandbytes, require_torch, require_vision, slow, torch_device, ) from transformers.utils import is_torch_available from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ( ModelTesterMixin, floats_tensor, ids_tensor, random_attention_mask, ) if is_torch_available(): import torch from torch import nn from transformers import ( InstructBlipVideoForConditionalGeneration, InstructBlipVideoModel, InstructBlipVideoVisionModel, ) class InstructBlipVideoVisionModelTester: def __init__( self, parent, batch_size=12, image_size=30, frames=4, patch_size=2, num_channels=3, is_training=True, hidden_size=32, projection_dim=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, dropout=0.1, attention_dropout=0.1, initializer_range=1e-10, scope=None, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.frames = frames self.patch_size = patch_size self.num_channels = num_channels self.is_training = is_training self.hidden_size = hidden_size self.projection_dim = projection_dim self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.dropout = dropout self.attention_dropout = attention_dropout self.initializer_range = initializer_range self.scope = scope # in case of a vision transformer, the seq length equals the number of patches + 1 (we add 1 for the [CLS] token) num_patches = (image_size // patch_size) ** 2 self.seq_length = num_patches + 1 def prepare_config_and_inputs(self): pixel_values = floats_tensor( [self.batch_size * self.frames, self.num_channels, self.image_size, self.image_size] ) config = self.get_config() return config, pixel_values def get_config(self): return InstructBlipVideoVisionConfig( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_size=self.hidden_size, projection_dim=self.projection_dim, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, dropout=self.dropout, attention_dropout=self.attention_dropout, initializer_range=self.initializer_range, ) def create_and_check_model(self, config, pixel_values): model = InstructBlipVideoVisionModel(config=config) model.to(torch_device) model.eval() with torch.no_grad(): result = model(pixel_values) # expected sequence length = num_patches + 1 (we add 1 for the [CLS] token) image_size = 
(self.image_size, self.image_size) patch_size = (self.patch_size, self.patch_size) num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0]) self.parent.assertEqual( result.last_hidden_state.shape, (self.batch_size * self.frames, num_patches + 1, self.hidden_size) ) self.parent.assertEqual(result.pooler_output.shape, (self.batch_size * self.frames, self.hidden_size)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class InstructBlipVideoVisionModelTest(ModelTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as InstructBlipVideo's vision encoder does not use input_ids, inputs_embeds, attention_mask and seq_length. """ all_model_classes = (InstructBlipVideoVisionModel,) if is_torch_available() else () fx_compatible = False test_pruning = False test_resize_embeddings = False test_head_masking = False def setUp(self): self.model_tester = InstructBlipVideoVisionModelTester(self) common_properties = ["num_query_tokens", "video_token_index"] self.config_tester = ConfigTester( self, config_class=InstructBlipVideoConfig, has_text_modality=False, common_properties=common_properties ) def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="InstructBlipVideo's vision encoder does not use inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="InstructBlipVideo's vision encoder is an nn.Embeddings layer") def test_model_get_set_embeddings(self): pass def test_model_common_attributes(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_forward_signature(self): for model_class in self.all_model_classes: config, _ = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) @unittest.skip( reason="InstructBlipVideoVisionModel is an internal building block, doesn't support standalone training" ) def test_training(self): pass @unittest.skip( reason="InstructBlipVideoVisionModel is an internal building block, doesn't support standalone training" ) def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass @slow def test_model_from_pretrained(self): model_name = "Salesforce/instructblip-vicuna-7b" model = InstructBlipVideoVisionModel.from_pretrained(model_name) 
self.assertIsNotNone(model) class InstructBlipVideoQFormerModelTester: def __init__( self, parent, batch_size=12, seq_length=7, is_training=True, use_input_mask=True, use_labels=True, vocab_size=99, hidden_size=32, projection_dim=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, dropout=0.1, attention_dropout=0.1, max_position_embeddings=512, initializer_range=0.02, bos_token_id=0, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.projection_dim = projection_dim self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.dropout = dropout self.attention_dropout = attention_dropout self.max_position_embeddings = max_position_embeddings self.initializer_range = initializer_range self.scope = scope self.bos_token_id = bos_token_id def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) qformer_input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = random_attention_mask([self.batch_size, self.seq_length]) qformer_attention_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2) if input_mask is not None: batch_size, seq_length = input_mask.shape rnd_start_indices = np.random.randint(1, seq_length - 1, size=(batch_size,)) for batch_idx, start_index in enumerate(rnd_start_indices): input_mask[batch_idx, :start_index] = 1 input_mask[batch_idx, start_index:] = 0 config = self.get_config() return config, input_ids, input_mask, qformer_input_ids, qformer_attention_mask def get_config(self): return InstructBlipVideoQFormerConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, projection_dim=self.projection_dim, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, dropout=self.dropout, attention_dropout=self.attention_dropout, max_position_embeddings=self.max_position_embeddings, initializer_range=self.initializer_range, bos_token_id=self.bos_token_id, ) # this class is based on `OPTModelTester` found in tests/models/opt/test_modeling_opt.py class InstructBlipVideoTextModelDecoderOnlyTester: def __init__( self, parent, batch_size=12, seq_length=7, is_training=True, use_labels=False, vocab_size=99, hidden_size=16, num_hidden_layers=2, num_attention_heads=4, intermediate_size=4, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=100, eos_token_id=2, pad_token_id=1, bos_token_id=0, embed_dim=16, num_labels=3, word_embed_proj_dim=16, type_sequence_label_size=2, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.eos_token_id = eos_token_id self.pad_token_id = pad_token_id self.bos_token_id = bos_token_id self.embed_dim = embed_dim self.num_labels = 
num_labels self.type_sequence_label_size = type_sequence_label_size self.word_embed_proj_dim = word_embed_proj_dim self.is_encoder_decoder = False def prepare_config_and_inputs(self): config = self.get_config() input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size).clamp(3) input_ids[:, -1] = self.eos_token_id # Eos Token attention_mask = input_ids.ne(self.pad_token_id) return config, input_ids, attention_mask def get_config(self): return CONFIG_MAPPING["opt"]( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, ffn_dim=self.intermediate_size, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, max_position_embeddings=self.max_position_embeddings, eos_token_id=self.eos_token_id, bos_token_id=self.bos_token_id, pad_token_id=self.pad_token_id, embed_dim=self.embed_dim, is_encoder_decoder=False, word_embed_proj_dim=self.word_embed_proj_dim, ) # this model tester uses a decoder-only language model (OPT) class InstructBlipVideoForConditionalGenerationDecoderOnlyModelTester: def __init__( self, parent, vision_kwargs=None, qformer_kwargs=None, text_kwargs=None, is_training=True, num_query_tokens=10, video_token_index=4, ): if vision_kwargs is None: vision_kwargs = {} if qformer_kwargs is None: qformer_kwargs = {} if text_kwargs is None: text_kwargs = {} self.parent = parent self.vision_model_tester = InstructBlipVideoVisionModelTester(parent, **vision_kwargs) self.qformer_model_tester = InstructBlipVideoQFormerModelTester(parent, **qformer_kwargs) self.text_model_tester = InstructBlipVideoTextModelDecoderOnlyTester(parent, **text_kwargs) self.batch_size = self.text_model_tester.batch_size # need bs for batching_equivalence test self.frames = self.vision_model_tester.frames # need seq_length for common tests self.seq_length = self.text_model_tester.seq_length + (num_query_tokens * self.frames) self.is_training = is_training self.num_query_tokens = num_query_tokens self.video_token_index = video_token_index def prepare_config_and_inputs(self): _, pixel_values = self.vision_model_tester.prepare_config_and_inputs() _, _, _, qformer_input_ids, qformer_attention_mask = self.qformer_model_tester.prepare_config_and_inputs() _, input_ids, attention_mask = self.text_model_tester.prepare_config_and_inputs() _, c, h, w = pixel_values.shape pixel_values = pixel_values.reshape(-1, self.frames, c, h, w) vision_tokens = ( torch.ones( (input_ids.shape[0], self.num_query_tokens * self.frames), device=torch_device, dtype=input_ids.dtype ) * self.video_token_index ) input_ids[input_ids == self.video_token_index] = self.text_model_tester.pad_token_id input_ids = torch.cat([vision_tokens, input_ids], dim=-1) vision_attention_mask = torch.ones_like(vision_tokens) attention_mask = torch.cat([vision_attention_mask, attention_mask], dim=-1) config = self.get_config() return config, input_ids, attention_mask, qformer_input_ids, qformer_attention_mask, pixel_values def get_config(self): return InstructBlipVideoConfig.from_vision_qformer_text_configs( vision_config=self.vision_model_tester.get_config(), qformer_config=self.qformer_model_tester.get_config(), text_config=self.text_model_tester.get_config(), num_query_tokens=self.num_query_tokens, video_token_index=self.video_token_index, ) def create_and_check_for_conditional_generation( self, config, input_ids, attention_mask, qformer_input_ids, qformer_attention_mask, pixel_values ): model = 
InstructBlipVideoForConditionalGeneration(config).to(torch_device).eval() with torch.no_grad(): result = model( pixel_values, input_ids=input_ids, attention_mask=attention_mask, qformer_input_ids=qformer_input_ids, qformer_attention_mask=qformer_attention_mask, ) expected_seq_length = ( self.num_query_tokens * self.vision_model_tester.frames ) + self.text_model_tester.seq_length self.parent.assertEqual( result.logits.shape, (self.vision_model_tester.batch_size, expected_seq_length, self.text_model_tester.vocab_size), ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, attention_mask, qformer_input_ids, qformer_attention_mask, pixel_values = config_and_inputs inputs_dict = { "pixel_values": pixel_values, "input_ids": input_ids, "attention_mask": attention_mask, "qformer_input_ids": qformer_input_ids, "qformer_attention_mask": qformer_attention_mask, } return config, inputs_dict @require_torch class InstructBlipVideoForConditionalGenerationDecoderOnlyTest( ModelTesterMixin, GenerationTesterMixin, unittest.TestCase ): all_model_classes = ( (InstructBlipVideoForConditionalGeneration, InstructBlipVideoModel) if is_torch_available() else () ) additional_model_inputs = ["qformer_input_ids", "input_ids"] fx_compatible = False test_head_masking = False test_pruning = False test_resize_embeddings = True test_attention_outputs = False test_torchscript = False _is_composite = True def setUp(self): self.model_tester = InstructBlipVideoForConditionalGenerationDecoderOnlyModelTester(self) common_properties = ["num_query_tokens", "video_token_index"] self.config_tester = ConfigTester( self, config_class=InstructBlipVideoConfig, has_text_modality=False, common_properties=common_properties ) def test_for_conditional_generation(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_conditional_generation(*config_and_inputs) def test_config(self): self.config_tester.run_common_tests() @unittest.skip( reason="InstructBlipVideoQFormerModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet." 
) def test_eager_matches_sdpa_generate(self): pass @unittest.skip(reason="Hidden_states is tested in individual model tests") def test_hidden_states_output(self): pass @unittest.skip(reason="InstructBlipVideoForConditionalGeneration doesn't support inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="Tied weights are tested in individual model tests") def test_tied_weights_keys(self): pass @unittest.skip(reason="Retain_grad is tested in individual model tests") def test_retain_grad_hidden_states_attentions(self): pass @unittest.skip(reason="InstructBlipVideoModel does not have input/output embeddings") def test_model_common_attributes(self): pass def test_forward_signature(self): for model_class in self.all_model_classes: config, _ = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_load_vision_qformer_text_config(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() # Save InstructBlipVideoConfig and check if we can load InstructBlipVideoVisionConfig from it with tempfile.TemporaryDirectory() as tmp_dir_name: config.save_pretrained(tmp_dir_name) vision_config = InstructBlipVideoVisionConfig.from_pretrained(tmp_dir_name) self.assertDictEqual(config.vision_config.to_dict(), vision_config.to_dict()) # Save InstructBlipVideoConfig and check if we can load InstructBlipVideoQFormerConfig from it with tempfile.TemporaryDirectory() as tmp_dir_name: config.save_pretrained(tmp_dir_name) qformer_config = InstructBlipVideoQFormerConfig.from_pretrained(tmp_dir_name) self.assertDictEqual(config.qformer_config.to_dict(), qformer_config.to_dict()) @slow def test_model_from_pretrained(self): model_name = "Salesforce/instructblip-vicuna-7b" model = InstructBlipVideoForConditionalGeneration.from_pretrained(model_name) self.assertIsNotNone(model) # overwrite because InstructBLIPVideo internally calls LM.generate() with embeds thus it cannot operate in no cache format def _check_generate_outputs(self, output, config, use_cache=False, num_return_sequences=1, num_beams=1): use_cache = True # force this to be True in case False is passed super()._check_generate_outputs( output, config, use_cache=use_cache, num_return_sequences=num_return_sequences, num_beams=num_beams ) # overwrite because InstructBLIPVideo cannot generate only from input ids, and requires `pixel` values and `qformer_input_ids` in all cases to be present @pytest.mark.generate def test_left_padding_compatibility(self): # NOTE: left-padding results in small numerical differences. This is expected. 
# See https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535 # First, filter out models that don't support left padding # - The model must have generative capabilities if len(self.all_generative_model_classes) == 0: self.skipTest(reason="No generative architecture available for this model.") # - The model must support padding if not self.has_attentions: self.skipTest(reason="This model doesn't support padding.") # - The model must be a decoder-only architecture (encoder-based architectures use right-padding) decoder_only_classes = [] for model_class in self.all_generative_model_classes: config, _ = self.prepare_config_and_inputs_for_generate() if config.is_encoder_decoder: continue else: decoder_only_classes.append(model_class) if len(decoder_only_classes) == 0: self.skipTest(reason="No decoder-only architecture available for this model.") # - Decoder-only architectures derived from encoder-decoder models could support it in theory, but we haven't # added support for it yet. We skip these models for now. has_encoder_attributes = any( attr_name for attr_name in config.to_dict() if attr_name.startswith("encoder") and attr_name != "encoder_no_repeat_ngram_size" ) if has_encoder_attributes: self.skipTest( reason="The decoder-only derived from encoder-decoder models are not expected to support left-padding." ) # Then, test left-padding def _prepare_model_kwargs(input_ids, attention_mask, signature): model_kwargs = {"input_ids": input_ids, "attention_mask": attention_mask} if "position_ids" in signature: position_ids = torch.cumsum(attention_mask, dim=-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) model_kwargs["position_ids"] = position_ids if "cache_position" in signature: cache_position = torch.arange(input_ids.shape[-1], device=torch_device) model_kwargs["cache_position"] = cache_position return model_kwargs for model_class in decoder_only_classes: config, inputs_dict = self.prepare_config_and_inputs_for_generate() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict.get("attention_mask") pixel_values = inputs_dict["pixel_values"] qformer_input_ids = inputs_dict["qformer_input_ids"] if attention_mask is None: attention_mask = torch.ones_like(input_ids) model = model_class(config).to(torch_device).eval() signature = inspect.signature(model.forward).parameters.keys() # no cache as some models require special cache classes to be init outside forward model.generation_config.use_cache = False # Without padding model_kwargs = _prepare_model_kwargs(input_ids, attention_mask, signature) next_logits_wo_padding = model( **model_kwargs, pixel_values=pixel_values, qformer_input_ids=qformer_input_ids ).logits[:, -1, :] # With left-padding (length 32) # can hardcode pad_token to be 0 as we'll do attn masking anyway pad_token_id = ( config.get_text_config().pad_token_id if config.get_text_config().pad_token_id is not None else 0 ) pad_size = (input_ids.shape[0], 32) padding = torch.ones(pad_size, dtype=input_ids.dtype, device=torch_device) * pad_token_id padded_input_ids = torch.cat((padding, input_ids), dim=1) padded_attention_mask = torch.cat((torch.zeros_like(padding), attention_mask), dim=1) model_kwargs = _prepare_model_kwargs(padded_input_ids, padded_attention_mask, signature) next_logits_with_padding = model( **model_kwargs, pixel_values=pixel_values, qformer_input_ids=qformer_input_ids ).logits[:, -1, :] # They should result in very similar logits torch.testing.assert_close(next_logits_wo_padding, next_logits_with_padding, rtol=1e-5, atol=1e-5) def 
test_sdpa_can_dispatch_composite_models(self): """ Tests if composite models dispatch correctly on SDPA/eager when requested so when loading the model. This tests only by looking at layer names, as usually SDPA layers are calles "SDPAAttention". In contrast to the above test, this one checks if the "config._attn_implamentation" is a dict after the model is loaded, because we manually replicate requested attn implementation on each sub-config when loading. See https://github.com/huggingface/transformers/pull/32238 for more info The test tries to cover most general cases of composite models, VLMs with vision and text configs. Any model that has a different set of sub-configs has to overwrite this test. """ if not self.has_attentions: self.skipTest(reason="Model architecture does not support attentions") if not self._is_composite: self.skipTest(f"{self.all_model_classes[0].__name__} does not support SDPA") for model_class in self.all_model_classes: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model_sdpa = model_class.from_pretrained(tmpdirname) model_sdpa = model_sdpa.eval().to(torch_device) # `None` as it is the requested one which will be assigned to each sub-config # Sub-model will dispatch to SDPA if it can (checked below that `SDPA` layers are present) self.assertTrue(model.language_model.config._attn_implementation == "sdpa") self.assertTrue(model.vision_model.config._attn_implementation == "sdpa") self.assertTrue(model.qformer.config._attn_implementation == "eager") model_eager = model_class.from_pretrained(tmpdirname, attn_implementation="eager") model_eager = model_eager.eval().to(torch_device) self.assertTrue(model_eager.config._attn_implementation == "eager") self.assertTrue(model_eager.language_model.config._attn_implementation == "eager") self.assertTrue(model_eager.vision_model.config._attn_implementation == "eager") self.assertTrue(model_eager.qformer.config._attn_implementation == "eager") for name, submodule in model_eager.named_modules(): class_name = submodule.__class__.__name__ if ( class_name.endswith("Attention") and getattr(submodule, "config", None) and submodule.config._attn_implementation == "sdpa" ): raise ValueError("The eager model should not have SDPA attention layers") # We will verify our results on an image of cute cats def prepare_video(): video_file = hf_hub_download( repo_id="raushan-testing-hf/videos-test", filename="video_demo.npy", repo_type="dataset" ) video = np.load(video_file)[::2] # sample every 2nd frame to get 4 frames total return video @require_vision @require_torch @require_bitsandbytes @require_accelerate @slow class InstructBlipVideoModelIntegrationTest(unittest.TestCase): def test_inference_vicuna_7b(self): processor = InstructBlipVideoProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b") model = InstructBlipVideoForConditionalGeneration.from_pretrained( "Salesforce/instructblip-vicuna-7b", load_in_8bit=True, ) clip = prepare_video() prompt = "Explain what is happening in this short video." inputs = processor(images=clip, text=prompt, return_tensors="pt").to(torch_device, torch.float16) # verify generation outputs = model.generate(**inputs, max_new_tokens=30) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() self.assertEqual( generated_text, "Explain what is happening in this short video. a baby girl wearing glasses is reading a book on the bed 1080p", )
# End of file: transformers/tests/models/instructblipvideo/test_modeling_instructblipvideo.py
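# A compact sketch of the video captioning flow covered by the integration test above
# (same checkpoint and 4-frame demo clip). The 8-bit loading and device placement used
# in the test are omitted here for brevity; adjust to your hardware.
import numpy as np
from huggingface_hub import hf_hub_download

from transformers import InstructBlipVideoForConditionalGeneration, InstructBlipVideoProcessor

processor = InstructBlipVideoProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipVideoForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b")

# Sample every 2nd frame from the demo clip to get the 4 frames the model expects.
video_file = hf_hub_download(
    repo_id="raushan-testing-hf/videos-test", filename="video_demo.npy", repo_type="dataset"
)
clip = np.load(video_file)[::2]

inputs = processor(images=clip, text="Explain what is happening in this short video.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())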
# Copyright 2023 Microsoft Research and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch KOSMOS-2 model.""" import copy import inspect import os import tempfile import unittest import numpy as np import pytest import requests from parameterized import parameterized from transformers import AutoModelForImageTextToText, AutoProcessor, Kosmos2Config from transformers.models.kosmos2.configuration_kosmos2 import Kosmos2TextConfig, Kosmos2VisionConfig from transformers.testing_utils import ( IS_ROCM_SYSTEM, IS_XPU_SYSTEM, require_torch, require_vision, slow, torch_device, ) from transformers.utils import ( is_torch_available, is_vision_available, ) from ...generation.test_utils import GenerationTesterMixin, has_similar_generate_outputs from ...test_configuration_common import ConfigTester from ...test_modeling_common import ( TEST_EAGER_MATCHES_SDPA_INFERENCE_PARAMETERIZATION, ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor, random_attention_mask, ) from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import Kosmos2ForConditionalGeneration, Kosmos2Model if is_vision_available(): from PIL import Image class Kosmos2VisionModelTester: def __init__( self, parent, batch_size=12, image_size=32, patch_size=4, num_channels=3, is_training=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, dropout=0.1, attention_dropout=0.1, initializer_range=1e-10, scope=None, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.is_training = is_training self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.dropout = dropout self.attention_dropout = attention_dropout self.initializer_range = initializer_range self.scope = scope # in ViT, the seq length equals the number of patches + 1 (we add 1 for the [CLS] token) num_patches = (image_size // patch_size) ** 2 self.seq_length = num_patches + 1 def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) config = self.get_config() return config, pixel_values def get_config(self): return Kosmos2VisionConfig( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, dropout=self.dropout, attention_dropout=self.attention_dropout, initializer_range=self.initializer_range, ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict class Kosmos2TextModelTester: def __init__( self, 
parent, batch_size=12, seq_length=7, is_training=True, use_input_mask=True, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, dropout=0.1, attention_dropout=0.1, max_position_embeddings=512, scope=None, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.dropout = dropout self.attention_dropout = attention_dropout self.max_position_embeddings = max_position_embeddings self.scope = scope def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = random_attention_mask([self.batch_size, self.seq_length]) if input_mask is not None: batch_size, seq_length = input_mask.shape rnd_start_indices = np.random.randint(1, seq_length - 1, size=(batch_size,)) for batch_idx, start_index in enumerate(rnd_start_indices): input_mask[batch_idx, :start_index] = 1 input_mask[batch_idx, start_index:] = 0 config = self.get_config() return config, input_ids, input_mask def get_config(self): return Kosmos2TextConfig( vocab_size=self.vocab_size, embed_dim=self.hidden_size, layers=self.num_hidden_layers, attention_heads=self.num_attention_heads, ffn_dim=self.intermediate_size, dropout=self.dropout, attention_dropout=self.attention_dropout, max_position_embeddings=self.max_position_embeddings, ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, input_mask = config_and_inputs inputs_dict = {"input_ids": input_ids, "attention_mask": input_mask} return config, inputs_dict class Kosmos2ModelTester: def __init__(self, parent, text_kwargs=None, vision_kwargs=None, latent_query_num=3, is_training=True): if text_kwargs is None: text_kwargs = {} if vision_kwargs is None: vision_kwargs = {} self.parent = parent self.text_model_tester = Kosmos2TextModelTester(parent, **text_kwargs) self.vision_model_tester = Kosmos2VisionModelTester(parent, **vision_kwargs) self.batch_size = self.text_model_tester.batch_size # need bs for batching_equivalence test self.seq_length = self.text_model_tester.seq_length self.latent_query_num = latent_query_num self.is_training = is_training def prepare_config_and_inputs(self): text_config, input_ids, attention_mask = self.text_model_tester.prepare_config_and_inputs() vision_config, pixel_values = self.vision_model_tester.prepare_config_and_inputs() # build `image_embeds_position_mask` image_embeds_position_mask = torch.zeros_like(input_ids) image_embeds_position_mask[:, 1 : 1 + self.latent_query_num :] = 1 config = self.get_config() return config, input_ids, attention_mask, image_embeds_position_mask, pixel_values def get_config(self): return Kosmos2Config( self.text_model_tester.get_config().to_dict(), self.vision_model_tester.get_config().to_dict(), latent_query_num=self.latent_query_num, ) def create_and_check_model(self, config, input_ids, attention_mask, image_embeds_position_mask, pixel_values): model = Kosmos2Model(config).to(torch_device).eval() with torch.no_grad(): result = model(pixel_values, input_ids, image_embeds_position_mask, attention_mask) self.parent.assertEqual( result.last_hidden_state.shape, 
(self.text_model_tester.batch_size, self.text_model_tester.seq_length, self.text_model_tester.hidden_size), ) self.parent.assertEqual( result.image_embeds.shape, (self.text_model_tester.batch_size, self.latent_query_num, self.text_model_tester.hidden_size), ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, input_ids, attention_mask, image_embeds_position_mask, pixel_values = config_and_inputs inputs_dict = { "input_ids": input_ids, "attention_mask": attention_mask, "image_embeds_position_mask": image_embeds_position_mask, "pixel_values": pixel_values, } return config, inputs_dict @require_torch class Kosmos2ModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (Kosmos2Model, Kosmos2ForConditionalGeneration) if is_torch_available() else () additional_model_inputs = ["input_ids", "image_embeds_position_mask"] pipeline_model_mapping = ( { "feature-extraction": Kosmos2Model, "image-to-text": Kosmos2ForConditionalGeneration, "image-text-to-text": Kosmos2ForConditionalGeneration, } if is_torch_available() else {} ) fx_compatible = False test_head_masking = False test_pruning = False test_resize_embeddings = False test_attention_outputs = False _is_composite = True # TODO: `image-to-text` pipeline for this model needs Processor. # TODO: Tiny model needs fixing for `image-text-to-text` (latent_query_num=3 not compatible with num_image_tokens=64). def is_pipeline_test_to_skip( self, pipeline_test_case_name, config_class, model_architecture, tokenizer_name, image_processor_name, feature_extractor_name, processor_name, ): return ( pipeline_test_case_name == "ImageToTextPipelineTests" or pipeline_test_case_name == "ImageTextToTextPipelineTests" ) def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = copy.deepcopy(inputs_dict) if return_labels: if model_class.__name__ == "Kosmos2ForConditionalGeneration": inputs_dict["labels"] = torch.zeros( (self.model_tester.text_model_tester.batch_size, self.model_tester.text_model_tester.seq_length), dtype=torch.long, device=torch_device, ) return inputs_dict def setUp(self): self.model_tester = Kosmos2ModelTester(self) self.config_tester = ConfigTester( self, config_class=Kosmos2Config, has_text_modality=False, common_properties=["latent_query_num"] ) def test_config(self): self.config_tester.run_common_tests() # overwrite from common to skip `image_to_text_projection.latent_query` def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): if param.requires_grad: if name == "image_to_text_projection.latent_query": # The original code use ` nn.Parameter(torch.randn(...))` for which this test won't pass. 
continue self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["pixel_values"] self.assertListEqual(arg_names[:1], expected_arg_names) def test_load_save_without_tied_weights(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() config.text_config.tie_word_embeddings = False for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as d: model.save_pretrained(d) model_reloaded, infos = model_class.from_pretrained(d, output_loading_info=True) # Checking the state dicts are correct reloaded_state = model_reloaded.state_dict() for k, v in model.state_dict().items(): self.assertIn(k, reloaded_state, f"Key {k} is missing from reloaded") torch.testing.assert_close( v, reloaded_state[k], msg=lambda x: f"{model_class.__name__}: Tensor {k}: {x}" ) # Checking there was no complain of missing weights self.assertEqual(infos["missing_keys"], []) # overwrite from common in order to use `self.model_tester.text_model_tester.num_hidden_layers` def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.text_model_tester.num_hidden_layers + 1, ) self.assertEqual(len(hidden_states), expected_num_layers) seq_length = self.model_tester.text_model_tester.seq_length self.assertListEqual( list(hidden_states[0].shape[-2:]), [seq_length, self.model_tester.text_model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) # overwrite from common in order to use `config.text_config.vocab_size` instead of `config.vocab_size` def test_tie_model_weights(self): if not self.test_torchscript: self.skipTest(reason="test_torchscript is set to False") config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() def check_same_values(layer_1, layer_2): equal = True for p1, p2 in zip(layer_1.weight, layer_2.weight): if p1.data.ne(p2.data).sum() > 0: equal = False return equal for model_class in self.all_model_classes: config.torchscript = True model_not_tied = model_class(config) if model_not_tied.get_output_embeddings() is None: continue config_tied = copy.deepcopy(config) config_tied.torchscript = False model_tied = model_class(config_tied) params_tied = list(model_tied.parameters()) # Check that the embedding layer and decoding 
layer are the same in size and in value # self.assertTrue(check_same_values(embeddings, decoding)) # # Check that after modification, they remain the same. # embeddings.weight.data.div_(2) # # Check that the embedding layer and decoding layer are the same in size and in value # self.assertTrue(embeddings.weight.shape, decoding.weight.shape) # self.assertTrue(check_same_values(embeddings, decoding)) # # Check that after modification, they remain the same. # decoding.weight.data.div_(4) # # Check that the embedding layer and decoding layer are the same in size and in value # self.assertTrue(embeddings.weight.shape, decoding.weight.shape) # self.assertTrue(check_same_values(embeddings, decoding)) # Check that after resize they remain tied. model_tied.resize_token_embeddings(config.text_config.vocab_size + 10) params_tied_2 = list(model_tied.parameters()) self.assertEqual(len(params_tied_2), len(params_tied)) # decoding.weight.data.mul_(20) # # Check that the embedding layer and decoding layer are the same in size and in value # self.assertTrue(model.transformer.wte.weight.shape, model.lm_head.weight.shape) # self.assertTrue(check_same_values(model.transformer.wte, model.lm_head)) @parameterized.expand(TEST_EAGER_MATCHES_SDPA_INFERENCE_PARAMETERIZATION) @unittest.skip("KOSMOS-2 doesn't support padding") def test_eager_matches_sdpa_inference( self, name, dtype, padding_side, use_attention_mask, output_attentions, enable_kernels ): pass @unittest.skip("KOSMOS-2 doesn't support padding") def test_eager_padding_matches_padding_free_with_position_ids(self): pass @unittest.skip("KOSMOS-2 doesn't support padding") def test_sdpa_padding_matches_padding_free_with_position_ids(self): pass @pytest.mark.generate def test_left_padding_compatibility(self): # Overwrite because Kosmos-2 need to padd pixel values and pad image-attn-mask def _prepare_model_kwargs(input_ids, attention_mask, pad_size, signature): model_kwargs = {"input_ids": input_ids, "attention_mask": attention_mask} if "position_ids" in signature: position_ids = torch.cumsum(attention_mask, dim=-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) model_kwargs["position_ids"] = position_ids if "cache_position" in signature: cache_position = torch.arange(input_ids.shape[-1], device=torch_device) model_kwargs["cache_position"] = cache_position if "image_embeds_position_mask" in signature: image_embeds_position_mask = torch.zeros_like(input_ids) image_embeds_position_mask[:, (pad_size + 1) : pad_size + 1 + self.model_tester.latent_query_num] = 1 model_kwargs["image_embeds_position_mask"] = image_embeds_position_mask return model_kwargs for model_class in self.all_generative_model_classes: config, inputs_dict = self.prepare_config_and_inputs_for_generate() input_ids = inputs_dict["input_ids"] pixel_values = inputs_dict["pixel_values"] attention_mask = inputs_dict.get("attention_mask") if attention_mask is None: attention_mask = torch.ones_like(input_ids) model = model_class(config).to(torch_device).eval() signature = inspect.signature(model.forward).parameters.keys() # no cache as some models require special cache classes to be init outside forward model.generation_config.use_cache = False # Without padding model_kwargs = _prepare_model_kwargs(input_ids, attention_mask, pad_size=0, signature=signature) next_logits_wo_padding = model(**model_kwargs, pixel_values=pixel_values).logits[:, -1, :] # With left-padding (length 32) # can hardcode pad_token to be 0 as we'll do attn masking anyway pad_token_id = ( config.get_text_config().pad_token_id if 
config.get_text_config().pad_token_id is not None else 0 ) pad_size = (input_ids.shape[0], 32) padding = torch.ones(pad_size, dtype=input_ids.dtype, device=torch_device) * pad_token_id padded_input_ids = torch.cat((padding, input_ids), dim=1) padded_attention_mask = torch.cat((torch.zeros_like(padding), attention_mask), dim=1) model_kwargs = _prepare_model_kwargs( padded_input_ids, padded_attention_mask, pad_size=32, signature=signature ) next_logits_with_padding = model(**model_kwargs, pixel_values=pixel_values).logits[:, -1, :] # They should result in very similar logits torch.testing.assert_close(next_logits_wo_padding, next_logits_with_padding, rtol=1e-3, atol=1e-3) @slow def test_model_from_pretrained(self): model_name = "microsoft/kosmos-2-patch14-224" model = Kosmos2Model.from_pretrained(model_name) self.assertIsNotNone(model) def _create_and_check_torchscript(self, config, inputs_dict): if not self.test_torchscript: self.skipTest(reason="test_torchscript is set to False") configs_no_init = _config_zero_init(config) # To be sure we have no Nan configs_no_init.torchscript = True for model_class in self.all_model_classes: model = model_class(config=configs_no_init) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) main_input_name = model_class.main_input_name try: main_input = inputs[main_input_name] model(main_input, inputs["input_ids"], inputs["image_embeds_position_mask"]) traced_model = torch.jit.trace( model, (main_input, inputs["input_ids"], inputs["image_embeds_position_mask"]) ) except RuntimeError: self.fail("Couldn't trace module.") with tempfile.TemporaryDirectory() as tmp_dir_name: pt_file_name = os.path.join(tmp_dir_name, "traced_model.pt") try: torch.jit.save(traced_model, pt_file_name) except Exception: self.fail("Couldn't save module.") try: loaded_model = torch.jit.load(pt_file_name) except Exception: self.fail("Couldn't load module.") model.to(torch_device) model.eval() loaded_model.to(torch_device) loaded_model.eval() model_state_dict = model.state_dict() loaded_model_state_dict = loaded_model.state_dict() non_persistent_buffers = {} for key in loaded_model_state_dict: if key not in model_state_dict: non_persistent_buffers[key] = loaded_model_state_dict[key] loaded_model_state_dict = { key: value for key, value in loaded_model_state_dict.items() if key not in non_persistent_buffers } self.assertEqual(set(model_state_dict.keys()), set(loaded_model_state_dict.keys())) model_buffers = list(model.buffers()) for non_persistent_buffer in non_persistent_buffers.values(): found_buffer = False for i, model_buffer in enumerate(model_buffers): if torch.equal(non_persistent_buffer, model_buffer): found_buffer = True break self.assertTrue(found_buffer) model_buffers.pop(i) models_equal = True for layer_name, p1 in model_state_dict.items(): if layer_name in loaded_model_state_dict: p2 = loaded_model_state_dict[layer_name] if p1.data.ne(p2.data).sum() > 0: models_equal = False self.assertTrue(models_equal) # Avoid memory leak. Without this, each call increase RAM usage by ~20MB. 
# (Even with this call, there are still memory leak by ~0.04MB) self.clear_torch_jit_class_registry() @pytest.mark.generate @parameterized.expand([("greedy", 1), ("beam search", 2)]) def test_generate_from_inputs_embeds(self, _, num_beams): """Tests that we can generate from `inputs_embeds` instead of `input_ids` in LLMs, VLMs, etc""" # NOTE: overwritten because Kosmos with ids prepares position ids differently from embeds # If the model get ids, all pad tokens are masked from position ids. That is not possible with embeds # When supported, tests that the decoder model can generate from `inputs_embeds` instead of `input_ids` # if fails, you should probably update the `prepare_inputs_for_generation` function for model_class in self.all_generative_model_classes: config, inputs_dict = self.prepare_config_and_inputs_for_generate() config.is_decoder = True # Skip models without explicit support model = model_class(config).to(torch_device).eval() # Traditional way of generating text input_ids = inputs_dict.pop("input_ids") input_ids[input_ids == config.get_text_config().pad_token_id] = 0 generation_kwargs = { "return_dict_in_generate": True, "output_scores": True, "num_beams": num_beams, "do_sample": False, "max_new_tokens": 5, "min_new_tokens": 5, # generate exactly 5 tokens "use_cache": True, } outputs_from_ids = model.generate(input_ids=input_ids, **generation_kwargs, **inputs_dict) self.assertEqual(outputs_from_ids.sequences.shape[:2], (input_ids.shape[0], input_ids.shape[1] + 5)) # Same thing, but from input embeddings (`input_ids` is passed so the prompt is present in the output). # The output of the two calls should be the same. inputs_embeds = model.get_input_embeddings()(input_ids) outputs_from_embeds = model.generate( input_ids=input_ids, inputs_embeds=inputs_embeds, **generation_kwargs, **inputs_dict ) self.assertTrue(has_similar_generate_outputs(outputs_from_ids, outputs_from_embeds)) # input_ids is not a required input on most models -- if we don't pass it, the newly generated tokens will # be the same outputs_from_embeds_wo_ids = model.generate( inputs_embeds=inputs_embeds, **generation_kwargs, **inputs_dict ) outputs_from_embeds.sequences = outputs_from_embeds.sequences[:, inputs_embeds.shape[1] :] self.assertTrue(has_similar_generate_outputs(outputs_from_embeds_wo_ids, outputs_from_embeds)) # We will verify our results on an image of cute cats def prepare_img(): url = "https://huggingface.co/hf-internal-testing/Kosmos2-test-image/resolve/main/demo.jpg" im = Image.open(requests.get(url, stream=True).raw) return im @require_vision @require_torch @slow class Kosmos2ModelIntegrationTest(unittest.TestCase): def run_example(self, prompt, image, model, processor): inputs = processor(text=prompt, images=image, return_tensors="pt", padding=True).to(torch_device) generation_outputs = model.generate( pixel_values=inputs["pixel_values"], input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], image_embeds=None, image_embeds_position_mask=inputs["image_embeds_position_mask"], use_cache=True, max_new_tokens=128, output_scores=True, return_dict_in_generate=True, ) scores = generation_outputs.scores generated_ids = generation_outputs.sequences generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) # Specify `cleanup_and_extract=False` in order to see the raw model generation. 
processed_text = [processor.post_process_generation(x, cleanup_and_extract=False) for x in generated_text] # By default, the generated text is cleanup and the entities are extracted. final_text_with_entities = [processor.post_process_generation(x) for x in generated_text] return scores, generated_ids, generated_text, processed_text, final_text_with_entities def test_snowman_image_captioning(self): url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) image.save("new_image.jpg") image = Image.open("new_image.jpg") model = AutoModelForImageTextToText.from_pretrained("microsoft/kosmos-2-patch14-224").to(torch_device) processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224") prompt = "<grounding>An image of" scores, generated_ids, generated_text, processed_text, final_text_with_entities = self.run_example( prompt, image, model, processor ) processed_text = processed_text[0] final_text, entities = final_text_with_entities[0] atol = 1e-4 if (IS_ROCM_SYSTEM or IS_XPU_SYSTEM) else 1e-5 np.testing.assert_allclose( torch.concat(scores[1:4])[:3, :3].to("cpu").numpy(), np.array( [ [-1.5672581195831299, -5.007406711578369, 4.36448860168457], [-2.147017002105713, -4.966302871704102, 4.592559337615967], [-0.9352350831031799, -4.688288688659668, 6.240612983703613], ] ), atol=atol, ) np.testing.assert_allclose( torch.concat(scores[-3:])[-3:, -3:].to("cpu").numpy(), np.array( [ [2.9916205406188965, 2.481820583343506, 4.646594524383545], [-2.8381078243255615, -2.9687185287475586, -2.6926779747009277], [-2.8909168243408203, -3.2228589057922363, -1.7056822776794434], ] ), atol=1e-5, ) # fmt: off EXPECTED_IDS = [ [ 0, 64003, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 64004, 64012, 712, 1648, 9, 64007, 10, 43867, 64008, 64009, 64057, 64876, 64010, 5950, 597, 32, 64007, 10, 646, 64008, 64009, 64018, 64924, 64010, 4, 2 ] ] # fmt: on self.assertListEqual(generated_ids.to("cpu").numpy().tolist(), EXPECTED_IDS) EXPECTED_PROCESSED_TEXT = ( "<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> " "warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>." 
) self.assertEqual(processed_text, EXPECTED_PROCESSED_TEXT) self.assertEqual(final_text, "An image of a snowman warming himself by a fire.") EXPECTED_ENTITIES = [ ("a snowman", (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ("a fire", (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)]), ] self.assertListEqual(entities, EXPECTED_ENTITIES) # test with the detail caption generation prompt = "<grounding>Describe this image in detail:" scores, generated_ids, generated_text, processed_text, final_text_with_entities = self.run_example( prompt, image, model, processor ) processed_text = processed_text[0] final_text, entities = final_text_with_entities[0] np.testing.assert_allclose( torch.concat(scores[1:4])[:3, :3].to("cpu").numpy(), np.array( [ [-0.9093570113182068, -4.578373908996582, 5.96360969543457], [2.452126979827881, -4.090598106384277, 8.738677024841309], [-0.7624598741531372, -4.771658897399902, 6.576295852661133], ] ), atol=atol, ) np.testing.assert_allclose( torch.concat(scores[-3:])[-3:, -3:].to("cpu").numpy(), np.array( [ [-1.673659086227417, -2.162452220916748, -1.95430588722229], [-2.006824493408203, -2.2038745880126953, -1.24686861038208], [-3.2783470153808594, -2.814181089401245, -1.390632152557373], ] ), atol=1e-5, ) # fmt: off EXPECTED_IDS_LONG = [ [ 0, 64003, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 64004, 64012, 34645, 247, 38, 1648, 12, 3391, 55, 24, 1648, 1338, 10, 43867, 1280, 32, 64007, 10, 30879, 64008, 64009, 64018, 65020, 64010, 12, 5, 1842, 4, 71, 17, 1679, 64007, 10, 3958, 64008, 64009, 64061, 64263, 64010, 6, 64007, 15719, 64008, 64009, 64253, 64617, 64010, 6, 8, 64007, 9626, 64008, 64009, 64413, 64545, 64010, 6, 23, 64007, 10, 4363, 64008, 64009, 64623, 64885, 64010, 2255, 8, 64007, 10, 3486, 64008, 64009, 64809, 65036, 64010, 1560, 2255, 4, 24, 43867, 1684, 7, 27, 3774, 5, 10356, 9, 5, 646, 6, 8, 22, 1684, 7, 30, 10, 2007, 8, 16239, 4337, 4, 2 ] ] # fmt: on self.assertListEqual(generated_ids.to("cpu").numpy().tolist(), EXPECTED_IDS_LONG) EXPECTED_PROCESSED_TEXT_LONG = ( "<grounding> Describe this image in detail: The image features a snowman sitting by<phrase> a campfire" "</phrase><object><patch_index_0005><patch_index_1007></object> in the snow. He is wearing<phrase> a hat" "</phrase><object><patch_index_0048><patch_index_0250></object>,<phrase> scarf</phrase><object>" "<patch_index_0240><patch_index_0604></object>, and<phrase> gloves</phrase><object><patch_index_0400>" "<patch_index_0532></object>, with<phrase> a pot</phrase><object><patch_index_0610><patch_index_0872>" "</object> nearby and<phrase> a cup</phrase><object><patch_index_0796><patch_index_1023></object> placed " "nearby. The snowman appears to be enjoying the warmth of the fire, and it appears to have a warm and cozy " "atmosphere." ) self.assertEqual(processed_text, EXPECTED_PROCESSED_TEXT_LONG) EXPECTED_FINAL_TEXT_LONG = ( "Describe this image in detail: The image features a snowman sitting by a campfire in the snow. He is " "wearing a hat, scarf, and gloves, with a pot nearby and a cup placed nearby. The snowman appears to be " "enjoying the warmth of the fire, and it appears to have a warm and cozy atmosphere." 
) self.assertEqual(final_text, EXPECTED_FINAL_TEXT_LONG) EXPECTED_ENTITIES_LONG = [ ("a campfire", (71, 81), [(0.171875, 0.015625, 0.484375, 0.984375)]), ("a hat", (109, 114), [(0.515625, 0.046875, 0.828125, 0.234375)]), ("scarf", (116, 121), [(0.515625, 0.234375, 0.890625, 0.578125)]), ("gloves", (127, 133), [(0.515625, 0.390625, 0.640625, 0.515625)]), ("a pot", (140, 145), [(0.078125, 0.609375, 0.265625, 0.859375)]), ("a cup", (157, 162), [(0.890625, 0.765625, 0.984375, 0.984375)]), ] self.assertListEqual(entities, EXPECTED_ENTITIES_LONG) def test_snowman_image_captioning_batch(self): url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) image.save("new_image.jpg") image = Image.open("new_image.jpg") model = AutoModelForImageTextToText.from_pretrained("microsoft/kosmos-2-patch14-224").to(torch_device) prompt = ["<grounding>Describe this image in detail:", "<grounding>An image of"] # left padding processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224", padding_side="left") scores, generated_ids, generated_text, processed_text, final_text_with_entities = self.run_example( prompt, [image] * len(prompt), model, processor ) all_final_text = [x[0] for x in final_text_with_entities] all_entities = [x[1] for x in final_text_with_entities] # left padding gives identical results as non-padding EXPECTED_PROCESSED_TEXT_0 = ( "<grounding> Describe this image in detail: The image features a snowman sitting by<phrase> a campfire" "</phrase><object><patch_index_0005><patch_index_1007></object> in the snow. He is wearing<phrase> a hat" "</phrase><object><patch_index_0048><patch_index_0250></object>,<phrase> scarf</phrase><object>" "<patch_index_0240><patch_index_0604></object>, and<phrase> gloves</phrase><object><patch_index_0400>" "<patch_index_0532></object>, with<phrase> a pot</phrase><object><patch_index_0610><patch_index_0872>" "</object> nearby and<phrase> a cup</phrase><object><patch_index_0796><patch_index_1023></object> placed " "nearby. The snowman appears to be enjoying the warmth of the fire, and it appears to have a warm and cozy " "atmosphere." ) EXPECTED_PROCESSED_TEXT_1 = ( "<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> " "warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>." ) self.assertListEqual(processed_text, [EXPECTED_PROCESSED_TEXT_0, EXPECTED_PROCESSED_TEXT_1]) EXPECTED_FINAL_TEXT_0 = ( "Describe this image in detail: The image features a snowman sitting by a campfire in the snow. He is " "wearing a hat, scarf, and gloves, with a pot nearby and a cup placed nearby. The snowman appears to be " "enjoying the warmth of the fire, and it appears to have a warm and cozy atmosphere." ) EXPECTED_FINAL_TEXT_1 = "An image of a snowman warming himself by a fire." 
self.assertListEqual(all_final_text, [EXPECTED_FINAL_TEXT_0, EXPECTED_FINAL_TEXT_1]) EXPECTED_ENTITIES_0 = [ ("a campfire", (71, 81), [(0.171875, 0.015625, 0.484375, 0.984375)]), ("a hat", (109, 114), [(0.515625, 0.046875, 0.828125, 0.234375)]), ("scarf", (116, 121), [(0.515625, 0.234375, 0.890625, 0.578125)]), ("gloves", (127, 133), [(0.515625, 0.390625, 0.640625, 0.515625)]), ("a pot", (140, 145), [(0.078125, 0.609375, 0.265625, 0.859375)]), ("a cup", (157, 162), [(0.890625, 0.765625, 0.984375, 0.984375)]), ] EXPECTED_ENTITIES_1 = [ ("a snowman", (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ("a fire", (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)]), ] self.assertListEqual(all_entities, [EXPECTED_ENTITIES_0, EXPECTED_ENTITIES_1]) # right padding processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224") scores, generated_ids, generated_text, processed_text, final_text_with_entities = self.run_example( prompt, [image] * len(prompt), model, processor ) all_final_text = [x[0] for x in final_text_with_entities] all_entities = [x[1] for x in final_text_with_entities] # For right padding, only the non-padded sequences will give the same results as non-padding self.assertEqual(processed_text[0], EXPECTED_PROCESSED_TEXT_0) self.assertEqual(all_final_text[0], EXPECTED_FINAL_TEXT_0) self.assertListEqual(all_entities[0], EXPECTED_ENTITIES_0) @slow def test_inference_interpolate_pos_encoding(self): # ViT models have an `interpolate_pos_encoding` argument in their forward method, # allowing to interpolate the pre-trained position embeddings in order to use # the model on higher resolutions. The DINO model by Facebook AI leverages this # to visualize self-attention on higher resolution images. model = Kosmos2Model.from_pretrained("microsoft/kosmos-2-patch14-224").to(torch_device) processor = AutoProcessor.from_pretrained( "microsoft/kosmos-2-patch14-224", size={"shortest_edge": 180}, crop_size={"height": 180, "width": 180} ) image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") inputs = processor(text="what's in the image", images=image, return_tensors="pt").to(torch_device) # interpolate_pos_encodiung false should return value error with self.assertRaises(ValueError, msg="doesn't match model"): with torch.no_grad(): model(**inputs, interpolate_pos_encoding=False) # forward pass with torch.no_grad(): outputs = model(**inputs, interpolate_pos_encoding=True) # verify the logits expected_shape = torch.Size((1, 145, 1024)) self.assertEqual(outputs.vision_model_output.last_hidden_state.shape, expected_shape) expected_slice = torch.tensor( [[0.9148, -1.4148, 3.8040], [3.3443, 1.9478, 0.2080], [1.6604, 2.8184, -0.3618]] ).to(torch_device) torch.testing.assert_close( outputs.vision_model_output.last_hidden_state[0, :3, :3], expected_slice, rtol=1e-2, atol=1e-2 )
transformers/tests/models/kosmos2/test_modeling_kosmos2.py/0
{ "file_path": "transformers/tests/models/kosmos2/test_modeling_kosmos2.py", "repo_id": "transformers", "token_count": 19835 }
589
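# A minimal usage sketch of the grounded-captioning flow the Kosmos-2 integration test
# above exercises: it reuses the same checkpoint, demo-image URL and "<grounding>" prompt
# as the test. Running it needs network access, and the outputs shown in the trailing
# comments are only the values the test expects, not guaranteed on every setup.
import requests
from PIL import Image

from transformers import AutoModelForImageTextToText, AutoProcessor

model = AutoModelForImageTextToText.from_pretrained("microsoft/kosmos-2-patch14-224")
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")

url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png"
image = Image.open(requests.get(url, stream=True).raw)

# "<grounding>" asks the model to emit phrase/bounding-box markup alongside the caption.
inputs = processor(text="<grounding>An image of", images=image, return_tensors="pt")
generated_ids = model.generate(
    pixel_values=inputs["pixel_values"],
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    image_embeds=None,
    image_embeds_position_mask=inputs["image_embeds_position_mask"],
    max_new_tokens=64,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# With the default cleanup_and_extract=True, the markup is stripped and the grounded
# entities come back as (phrase, char_span, normalized_boxes) tuples.
caption, entities = processor.post_process_generation(generated_text)
print(caption)   # e.g. "An image of a snowman warming himself by a fire."
print(entities)  # e.g. [("a snowman", (12, 21), [(0.39, 0.05, 0.98, 0.83)]), ...]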
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import shutil import tempfile import unittest from transformers import AutoProcessor, AutoTokenizer, LlamaTokenizerFast, LlavaProcessor from transformers.testing_utils import require_vision from transformers.utils import is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_vision_available(): from transformers import CLIPImageProcessor @require_vision class LlavaProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = LlavaProcessor @classmethod def setUpClass(cls): cls.tmpdirname = tempfile.mkdtemp() image_processor = CLIPImageProcessor(do_center_crop=False) tokenizer = LlamaTokenizerFast.from_pretrained("huggyllama/llama-7b") tokenizer.add_special_tokens({"additional_special_tokens": ["<image>"]}) processor_kwargs = cls.prepare_processor_dict() processor = LlavaProcessor(image_processor, tokenizer, **processor_kwargs) processor.save_pretrained(cls.tmpdirname) cls.image_token = processor.image_token def get_tokenizer(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).tokenizer def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor @classmethod def tearDownClass(cls): shutil.rmtree(cls.tmpdirname, ignore_errors=True) @staticmethod def prepare_processor_dict(): return { "chat_template": "{% for message in messages %}{% if message['role'] != 'system' %}{{ message['role'].upper() + ': '}}{% endif %}{# Render all images first #}{% for content in message['content'] | selectattr('type', 'equalto', 'image') %}{{ '<image>\n' }}{% endfor %}{# Render all text next #}{% if message['role'] != 'assistant' %}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{{ content['text'] + ' '}}{% endfor %}{% else %}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{% generation %}{{ content['text'] + ' '}}{% endgeneration %}{% endfor %}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ 'ASSISTANT:' }}{% endif %}", "patch_size": 128, "vision_feature_select_strategy": "default" } # fmt: skip def test_get_num_vision_tokens(self): "Tests general functionality of the helper used internally in vLLM" processor = self.get_processor() output = processor._get_num_multimodal_tokens(image_sizes=[(100, 100), (300, 100), (500, 30)]) self.assertTrue("num_image_tokens" in output) self.assertEqual(len(output["num_image_tokens"]), 3) self.assertTrue("num_image_patches" in output) self.assertEqual(len(output["num_image_patches"]), 3) def test_chat_template_is_saved(self): processor_loaded = self.processor_class.from_pretrained(self.tmpdirname) processor_dict_loaded = json.loads(processor_loaded.to_json_string()) # chat templates aren't serialized to json in processors self.assertFalse("chat_template" in processor_dict_loaded) # they have to be saved as separate file and loaded back from that file # so we check if the same 
template is loaded processor_dict = self.prepare_processor_dict() self.assertTrue(processor_loaded.chat_template == processor_dict.get("chat_template", None)) def test_can_load_various_tokenizers(self): for checkpoint in ["Intel/llava-gemma-2b", "llava-hf/llava-1.5-7b-hf"]: processor = LlavaProcessor.from_pretrained(checkpoint) tokenizer = AutoTokenizer.from_pretrained(checkpoint) self.assertEqual(processor.tokenizer.__class__, tokenizer.__class__) def test_special_mm_token_truncation(self): """Tests that special vision tokens do not get truncated when `truncation=True` is set.""" processor = LlavaProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf") input_str = self.prepare_text_inputs(batch_size=2, modality="image") image_input = self.prepare_image_inputs(batch_size=2) _ = processor( text=input_str, images=image_input, return_tensors="pt", truncation=None, padding=True, ) with self.assertRaises(ValueError): _ = processor( text=input_str, images=image_input, return_tensors="pt", truncation=True, padding=True, max_length=5, )
transformers/tests/models/llava/test_processing_llava.py/0
{ "file_path": "transformers/tests/models/llava/test_processing_llava.py", "repo_id": "transformers", "token_count": 2005 }
590
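# A small sketch of the chat-template path the LLaVA processor tests above rely on,
# assuming a transformers version in which processors expose apply_chat_template and the
# llava-hf/llava-1.5-7b-hf processor ships a chat template (the same checkpoint the
# truncation test loads). The blank PIL image stands in for real pixel data.
from PIL import Image

from transformers import LlavaProcessor

processor = LlavaProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    }
]

# The chat template is what inserts the "<image>" placeholder that the processor later
# expands into image patch tokens; it is stored next to the processor in its own file,
# which is why test_chat_template_is_saved checks the reloaded template rather than the JSON.
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
print(prompt)  # e.g. "USER: <image>\nWhat is shown in this image? ASSISTANT:"

# Text and image are then encoded together; truncation settings must not drop the special
# image tokens, which is what test_special_mm_token_truncation asserts.
inputs = processor(text=prompt, images=Image.new("RGB", (336, 336)), return_tensors="pt")
print(inputs["input_ids"].shape, inputs["pixel_values"].shape)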
# Copyright 2022 Tsimur Hadeliya. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the Longformer tokenizer.""" import itertools import json import os import unittest from functools import lru_cache from transformers import AddedToken, LongformerTokenizer, LongformerTokenizerFast from transformers.models.longformer.tokenization_longformer import VOCAB_FILES_NAMES from transformers.testing_utils import require_tokenizers, slow from ...test_tokenization_common import TokenizerTesterMixin, use_cache_if_possible @require_tokenizers # Copied from tests.models.roberta.test_tokenization_roberta.RobertaTokenizationTest with FacebookAI/roberta-base->allenai/longformer-base-4096,Roberta->Longformer,roberta->longformer, class LongformerTokenizationTest(TokenizerTesterMixin, unittest.TestCase): from_pretrained_id = "allenai/longformer-base-4096" # Ignore copy tokenizer_class = LongformerTokenizer test_slow_tokenizer = True rust_tokenizer_class = LongformerTokenizerFast test_rust_tokenizer = True @classmethod def setUpClass(cls): super().setUpClass() # Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] cls.special_tokens_map = {"unk_token": "<unk>"} cls.vocab_file = os.path.join(cls.tmpdirname, VOCAB_FILES_NAMES["vocab_file"]) cls.merges_file = os.path.join(cls.tmpdirname, VOCAB_FILES_NAMES["merges_file"]) with open(cls.vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(cls.merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) @classmethod @use_cache_if_possible @lru_cache(maxsize=64) def get_tokenizer(cls, pretrained_name=None, **kwargs): kwargs.update(cls.special_tokens_map) pretrained_name = pretrained_name or cls.tmpdirname return cls.tokenizer_class.from_pretrained(pretrained_name, **kwargs) @classmethod @use_cache_if_possible @lru_cache(maxsize=64) def get_rust_tokenizer(cls, pretrained_name=None, **kwargs): kwargs.update(cls.special_tokens_map) pretrained_name = pretrained_name or cls.tmpdirname return cls.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) def get_input_output_texts(self, tokenizer): input_text = "lower newer" output_text = "lower newer" return input_text, output_text def test_full_tokenizer(self): tokenizer = self.tokenizer_class(self.vocab_file, self.merges_file, **self.special_tokens_map) text = "lower newer" bpe_tokens = ["l", "o", "w", "er", "\u0120", "n", "e", "w", "er"] tokens = tokenizer.tokenize(text) # , add_prefix_space=True) self.assertListEqual(tokens, bpe_tokens) input_tokens = tokens + [tokenizer.unk_token] input_bpe_tokens = [0, 1, 2, 15, 10, 9, 3, 2, 15, 19] self.assertListEqual(tokenizer.convert_tokens_to_ids(input_tokens), 
input_bpe_tokens) def longformer_dict_integration_testing(self): tokenizer = self.get_tokenizer() self.assertListEqual(tokenizer.encode("Hello world!", add_special_tokens=False), [0, 31414, 232, 328, 2]) self.assertListEqual( tokenizer.encode("Hello world! cécé herlolip 418", add_special_tokens=False), [0, 31414, 232, 328, 740, 1140, 12695, 69, 46078, 1588, 2], ) @slow def test_sequence_builders(self): tokenizer = self.tokenizer_class.from_pretrained("allenai/longformer-base-4096") text = tokenizer.encode("sequence builders", add_special_tokens=False) text_2 = tokenizer.encode("multi-sequence build", add_special_tokens=False) encoded_text_from_decode = tokenizer.encode( "sequence builders", add_special_tokens=True, add_prefix_space=False ) encoded_pair_from_decode = tokenizer.encode( "sequence builders", "multi-sequence build", add_special_tokens=True, add_prefix_space=False ) encoded_sentence = tokenizer.build_inputs_with_special_tokens(text) encoded_pair = tokenizer.build_inputs_with_special_tokens(text, text_2) assert encoded_sentence == encoded_text_from_decode assert encoded_pair == encoded_pair_from_decode def test_space_encoding(self): tokenizer = self.get_tokenizer() sequence = "Encode this sequence." space_encoding = tokenizer.byte_encoder[b" "[0]] # Testing encoder arguments encoded = tokenizer.encode(sequence, add_special_tokens=False, add_prefix_space=False) first_char = tokenizer.convert_ids_to_tokens(encoded[0])[0] self.assertNotEqual(first_char, space_encoding) encoded = tokenizer.encode(sequence, add_special_tokens=False, add_prefix_space=True) first_char = tokenizer.convert_ids_to_tokens(encoded[0])[0] self.assertEqual(first_char, space_encoding) tokenizer.add_special_tokens({"bos_token": "<s>"}) encoded = tokenizer.encode(sequence, add_special_tokens=True) first_char = tokenizer.convert_ids_to_tokens(encoded[1])[0] self.assertNotEqual(first_char, space_encoding) # Testing spaces after special tokens mask = "<mask>" tokenizer.add_special_tokens( {"mask_token": AddedToken(mask, lstrip=True, rstrip=False)} ) # mask token has a left space mask_ind = tokenizer.convert_tokens_to_ids(mask) sequence = "Encode <mask> sequence" sequence_nospace = "Encode <mask>sequence" encoded = tokenizer.encode(sequence) mask_loc = encoded.index(mask_ind) first_char = tokenizer.convert_ids_to_tokens(encoded[mask_loc + 1])[0] self.assertEqual(first_char, space_encoding) encoded = tokenizer.encode(sequence_nospace) mask_loc = encoded.index(mask_ind) first_char = tokenizer.convert_ids_to_tokens(encoded[mask_loc + 1])[0] self.assertNotEqual(first_char, space_encoding) @unittest.skip def test_pretokenized_inputs(self): pass def test_embeded_special_tokens(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"): tokenizer_r = self.get_rust_tokenizer(pretrained_name, **kwargs) tokenizer_p = self.get_tokenizer(pretrained_name, **kwargs) sentence = "A, <mask> AllenNLP sentence." 
tokens_r = tokenizer_r.encode_plus(sentence, add_special_tokens=True, return_token_type_ids=True) tokens_p = tokenizer_p.encode_plus(sentence, add_special_tokens=True, return_token_type_ids=True) # token_type_ids should put 0 everywhere self.assertEqual(sum(tokens_r["token_type_ids"]), sum(tokens_p["token_type_ids"])) # attention_mask should put 1 everywhere, so sum over length should be 1 self.assertEqual( sum(tokens_r["attention_mask"]) / len(tokens_r["attention_mask"]), sum(tokens_p["attention_mask"]) / len(tokens_p["attention_mask"]), ) tokens_r_str = tokenizer_r.convert_ids_to_tokens(tokens_r["input_ids"]) tokens_p_str = tokenizer_p.convert_ids_to_tokens(tokens_p["input_ids"]) # Rust correctly handles the space before the mask while python doesn't self.assertSequenceEqual(tokens_p["input_ids"], [0, 250, 6, 50264, 3823, 487, 21992, 3645, 4, 2]) self.assertSequenceEqual(tokens_r["input_ids"], [0, 250, 6, 50264, 3823, 487, 21992, 3645, 4, 2]) self.assertSequenceEqual( tokens_p_str, ["<s>", "A", ",", "<mask>", "ĠAllen", "N", "LP", "Ġsentence", ".", "</s>"] ) self.assertSequenceEqual( tokens_r_str, ["<s>", "A", ",", "<mask>", "ĠAllen", "N", "LP", "Ġsentence", ".", "</s>"] ) def test_change_add_prefix_space_and_trim_offsets_args(self): for trim_offsets, add_prefix_space in itertools.product([True, False], repeat=2): tokenizer_r = self.get_rust_tokenizer( self.tmpdirname, use_fast=True, add_prefix_space=add_prefix_space, trim_offsets=trim_offsets ) pre_tokenizer_state = json.loads(tokenizer_r.backend_tokenizer.pre_tokenizer.__getstate__()) post_processor_state = json.loads(tokenizer_r.backend_tokenizer.post_processor.__getstate__()) self.assertEqual(pre_tokenizer_state["add_prefix_space"], add_prefix_space) self.assertEqual(post_processor_state["add_prefix_space"], add_prefix_space) self.assertEqual(post_processor_state["trim_offsets"], trim_offsets) def test_offsets_mapping_with_different_add_prefix_space_and_trim_space_arguments(self): # Test which aims to verify that the offsets are well adapted to the argument `add_prefix_space` and # `trim_offsets` for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"): text_of_1_token = "hello" # `hello` is a token in the vocabulary of `pretrained_name` text = f"{text_of_1_token} {text_of_1_token}" tokenizer_r = self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=True, trim_offsets=True ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (0, len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (len(text_of_1_token) + 1, len(text_of_1_token) + 1 + len(text_of_1_token)), ) tokenizer_r = self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=False, trim_offsets=True ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (0, len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (len(text_of_1_token) + 1, len(text_of_1_token) + 1 + len(text_of_1_token)), ) tokenizer_r = self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=True, trim_offsets=False ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (0, len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (len(text_of_1_token), len(text_of_1_token) + 1 + len(text_of_1_token)), ) tokenizer_r = 
self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=False, trim_offsets=False ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (0, len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (len(text_of_1_token), len(text_of_1_token) + 1 + len(text_of_1_token)), ) text = f" {text}" # tokenizer_r = self.rust_tokenizer_class.from_pretrained( # pretrained_name, use_fast=True, add_prefix_space=True, trim_offsets=True # ) # encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) # self.assertEqual(encoding.offset_mapping[0], (1, 1 + len(text_of_1_token))) # self.assertEqual( # encoding.offset_mapping[1], # (1 + len(text_of_1_token) + 1, 1 + len(text_of_1_token) + 1 + len(text_of_1_token)), # ) tokenizer_r = self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=False, trim_offsets=True ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (1, 1 + len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (1 + len(text_of_1_token) + 1, 1 + len(text_of_1_token) + 1 + len(text_of_1_token)), ) tokenizer_r = self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=True, trim_offsets=False ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (0, 1 + len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (1 + len(text_of_1_token), 1 + len(text_of_1_token) + 1 + len(text_of_1_token)), ) tokenizer_r = self.get_rust_tokenizer( pretrained_name, use_fast=True, add_prefix_space=False, trim_offsets=False ) encoding = tokenizer_r(text, return_offsets_mapping=True, add_special_tokens=False) self.assertEqual(encoding.offset_mapping[0], (0, 1 + len(text_of_1_token))) self.assertEqual( encoding.offset_mapping[1], (1 + len(text_of_1_token), 1 + len(text_of_1_token) + 1 + len(text_of_1_token)), )
transformers/tests/models/longformer/test_tokenization_longformer.py/0
{ "file_path": "transformers/tests/models/longformer/test_tokenization_longformer.py", "repo_id": "transformers", "token_count": 7229 }
591
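# The space-encoding assertions above all rest on the byte-level BPE convention that a
# leading space is folded into the following token (rendered as "Ġ"). A brief sketch of
# that behaviour using the slow tokenizer and the same allenai/longformer-base-4096
# checkpoint the suite loads elsewhere; it mirrors the first-character check in
# test_space_encoding rather than asserting any particular token split.
from transformers import LongformerTokenizer

tok = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
text = "Encode this sequence."

# Without a prefix space, the first word is encoded with no leading space marker...
ids = tok.encode(text, add_special_tokens=False, add_prefix_space=False)
print(tok.convert_ids_to_tokens(ids)[0][0])  # not "Ġ"

# ...whereas add_prefix_space=True encodes it as it would appear mid-sentence,
# so the first token starts with the space symbol.
ids = tok.encode(text, add_special_tokens=False, add_prefix_space=True)
print(tok.convert_ids_to_tokens(ids)[0][0])  # "Ġ"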
# Copyright 2021, The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch MBART model.""" import copy import tempfile import unittest from transformers import MBartConfig, is_torch_available from transformers.testing_utils import ( require_sentencepiece, require_tokenizers, require_torch, require_torch_fp16, slow, torch_device, ) from transformers.utils import cached_property from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( AutoTokenizer, BatchEncoding, MBartForCausalLM, MBartForConditionalGeneration, MBartForQuestionAnswering, MBartForSequenceClassification, MBartModel, ) from transformers.models.mbart.modeling_mbart import MBartDecoder, MBartEncoder def prepare_mbart_inputs_dict( config, input_ids, decoder_input_ids, attention_mask=None, decoder_attention_mask=None, ): if attention_mask is None: attention_mask = input_ids.ne(config.pad_token_id) if decoder_attention_mask is None: decoder_attention_mask = decoder_input_ids.ne(config.pad_token_id) return { "input_ids": input_ids, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "decoder_attention_mask": attention_mask, } class MBartModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_labels=False, vocab_size=99, hidden_size=16, num_hidden_layers=2, num_attention_heads=4, intermediate_size=4, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=100, eos_token_id=2, pad_token_id=1, bos_token_id=0, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_labels = use_labels self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.max_position_embeddings = max_position_embeddings self.eos_token_id = eos_token_id self.pad_token_id = pad_token_id self.bos_token_id = bos_token_id def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size).clamp( 3, ) input_ids[:, -1] = self.eos_token_id # Eos Token decoder_input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) config = self.get_config() inputs_dict = prepare_mbart_inputs_dict(config, input_ids, decoder_input_ids) return config, inputs_dict def get_config(self): return MBartConfig( vocab_size=self.vocab_size, d_model=self.hidden_size, encoder_layers=self.num_hidden_layers, 
decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, max_position_embeddings=self.max_position_embeddings, eos_token_id=self.eos_token_id, bos_token_id=self.bos_token_id, pad_token_id=self.pad_token_id, ) def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict): model = MBartModel(config=config).get_decoder().to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] # first forward pass outputs = model(input_ids, attention_mask=attention_mask, use_cache=True) output, past_key_values = outputs.to_tuple() # create hypothetical multiple next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 3), config.vocab_size) next_attn_mask = ids_tensor((self.batch_size, 3), 2) # append to next input_ids and next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_attention_mask = torch.cat([attention_mask, next_attn_mask], dim=-1) output_from_no_past = model(next_input_ids, attention_mask=next_attention_mask)["last_hidden_state"] output_from_past = model(next_tokens, attention_mask=next_attention_mask, past_key_values=past_key_values)[ "last_hidden_state" ] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach() output_from_past_slice = output_from_past[:, :, random_slice_idx].detach() self.parent.assertTrue(output_from_past_slice.shape[1] == next_tokens.shape[1]) # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3)) def check_encoder_decoder_model_standalone(self, config, inputs_dict): model = MBartModel(config=config).to(torch_device).eval() outputs = model(**inputs_dict) encoder_last_hidden_state = outputs.encoder_last_hidden_state last_hidden_state = outputs.last_hidden_state with tempfile.TemporaryDirectory() as tmpdirname: encoder = model.get_encoder() encoder.save_pretrained(tmpdirname) encoder = MBartEncoder.from_pretrained(tmpdirname).to(torch_device) encoder_last_hidden_state_2 = encoder(inputs_dict["input_ids"], attention_mask=inputs_dict["attention_mask"])[ 0 ] self.parent.assertTrue((encoder_last_hidden_state_2 - encoder_last_hidden_state).abs().max().item() < 1e-3) with tempfile.TemporaryDirectory() as tmpdirname: decoder = model.get_decoder() decoder.save_pretrained(tmpdirname) decoder = MBartDecoder.from_pretrained(tmpdirname).to(torch_device) last_hidden_state_2 = decoder( input_ids=inputs_dict["decoder_input_ids"], attention_mask=inputs_dict["decoder_attention_mask"], encoder_hidden_states=encoder_last_hidden_state, encoder_attention_mask=inputs_dict["attention_mask"], )[0] self.parent.assertTrue((last_hidden_state_2 - last_hidden_state).abs().max().item() < 1e-3) @require_torch class MBartModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( (MBartModel, MBartForConditionalGeneration, MBartForSequenceClassification, MBartForQuestionAnswering) if is_torch_available() else () ) pipeline_model_mapping = ( { "feature-extraction": MBartModel, 
"fill-mask": MBartForConditionalGeneration, "question-answering": MBartForQuestionAnswering, "summarization": MBartForConditionalGeneration, "text-classification": MBartForSequenceClassification, "text-generation": MBartForCausalLM, "text2text-generation": MBartForConditionalGeneration, "translation": MBartForConditionalGeneration, "zero-shot": MBartForSequenceClassification, } if is_torch_available() else {} ) is_encoder_decoder = True fx_compatible = False # Fix me Michael test_pruning = False test_missing_keys = False # TODO: Fix the failed tests def is_pipeline_test_to_skip( self, pipeline_test_case_name, config_class, model_architecture, tokenizer_name, image_processor_name, feature_extractor_name, processor_name, ): if pipeline_test_case_name == "QAPipelineTests" and not tokenizer_name.endswith("Fast"): return True return False def setUp(self): self.model_tester = MBartModelTester(self) self.config_tester = ConfigTester(self, config_class=MBartConfig) def test_config(self): self.config_tester.run_common_tests() def test_save_load_strict(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True) self.assertEqual(info["missing_keys"], []) def test_decoder_model_past_with_large_inputs(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_decoder_model_past_large_inputs(*config_and_inputs) def test_encoder_decoder_model_standalone(self): config_and_inputs = self.model_tester.prepare_config_and_inputs_for_common() self.model_tester.check_encoder_decoder_model_standalone(*config_and_inputs) # MBartForSequenceClassification does not support inputs_embeds def test_inputs_embeds(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in (MBartModel, MBartForConditionalGeneration, MBartForQuestionAnswering): model = model_class(config) model.to(torch_device) model.eval() inputs = copy.deepcopy(self._prepare_for_class(inputs_dict, model_class)) if not self.is_encoder_decoder: input_ids = inputs["input_ids"] del inputs["input_ids"] else: encoder_input_ids = inputs["input_ids"] decoder_input_ids = inputs.get("decoder_input_ids", encoder_input_ids) del inputs["input_ids"] inputs.pop("decoder_input_ids", None) wte = model.get_input_embeddings() if not self.is_encoder_decoder: inputs["inputs_embeds"] = wte(input_ids) else: inputs["inputs_embeds"] = wte(encoder_input_ids) inputs["decoder_inputs_embeds"] = wte(decoder_input_ids) with torch.no_grad(): model(**inputs)[0] @require_torch_fp16 def test_generate_fp16(self): config, input_dict = self.model_tester.prepare_config_and_inputs() input_ids = input_dict["input_ids"] attention_mask = input_ids.ne(1).to(torch_device) model = MBartForConditionalGeneration(config).eval().to(torch_device) model.half() model.generate(input_ids, attention_mask=attention_mask) model.generate(num_beams=4, do_sample=True, early_stopping=False, num_return_sequences=3) def test_ensure_weights_are_shared(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() config.tie_word_embeddings = True model = MBartForConditionalGeneration(config) # MBart shares four weights. # Not an issue to not have these correctly tied for torch.load, but it is an issue for safetensors. 
self.assertEqual( len( { model.get_output_embeddings().weight.data_ptr(), model.get_input_embeddings().weight.data_ptr(), model.base_model.decoder.embed_tokens.weight.data_ptr(), model.base_model.encoder.embed_tokens.weight.data_ptr(), } ), 1, ) config.tie_word_embeddings = False model = MBartForConditionalGeneration(config) # MBart shares four weights. # Not an issue to not have these correctly tied for torch.load, but it is an issue for safetensors. self.assertEqual( len( { model.get_output_embeddings().weight.data_ptr(), model.get_input_embeddings().weight.data_ptr(), model.base_model.decoder.embed_tokens.weight.data_ptr(), model.base_model.encoder.embed_tokens.weight.data_ptr(), } ), 2, ) @unittest.skip( reason="This architecture has tied weights by default and there is no way to remove it, check: https://github.com/huggingface/transformers/pull/31771#issuecomment-2210915245" ) def test_load_save_without_tied_weights(self): pass def test_resize_embeddings_persists_embeddings_type(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() config.scale_embedding = True model = MBartForConditionalGeneration(config) old_type = type(model.model.decoder.embed_tokens) model.resize_token_embeddings(new_num_tokens=config.vocab_size) new_type = type(model.model.decoder.embed_tokens) self.assertIs(old_type, new_type) def assert_tensors_close(a, b, atol=1e-12, prefix=""): """If tensors have different shapes, different values or a and b are not both tensors, raise a nice Assertion error.""" if a is None and b is None: return True try: if torch.allclose(a, b, atol=atol): return True raise except Exception: pct_different = (torch.gt((a - b).abs(), atol)).float().mean().item() if a.numel() > 100: msg = f"tensor values are {pct_different:.1%} percent different." 
else: msg = f"{a} != {b}" if prefix: msg = prefix + ": " + msg raise AssertionError(msg) def _long_tensor(tok_lst): return torch.tensor(tok_lst, dtype=torch.long, device=torch_device) @require_torch @require_sentencepiece @require_tokenizers class AbstractSeq2SeqIntegrationTest(unittest.TestCase): maxDiff = 1000 # longer string compare tracebacks checkpoint_name = None @classmethod def setUpClass(cls): cls.tokenizer = AutoTokenizer.from_pretrained(cls.checkpoint_name, use_fast=False) return cls @cached_property def model(self): """Only load the model if needed.""" model = MBartForConditionalGeneration.from_pretrained(self.checkpoint_name).to(torch_device) if "cuda" in torch_device: model = model.half() return model @require_torch @require_sentencepiece @require_tokenizers class MBartEnroIntegrationTest(AbstractSeq2SeqIntegrationTest): checkpoint_name = "facebook/mbart-large-en-ro" src_text = [ " UN Chief Says There Is No Military Solution in Syria", """ Secretary-General Ban Ki-moon says his response to Russia's stepped up military support for Syria is that "there is no military solution" to the nearly five-year conflict and more weapons will only worsen the violence and misery for millions of people.""", ] tgt_text = [ "Şeful ONU declară că nu există o soluţie militară în Siria", "Secretarul General Ban Ki-moon declară că răspunsul său la intensificarea sprijinului militar al Rusiei" ' pentru Siria este că "nu există o soluţie militară" la conflictul de aproape cinci ani şi că noi arme nu vor' " face decât să înrăutăţească violenţa şi mizeria pentru milioane de oameni.", ] expected_src_tokens = [8274, 127873, 25916, 7, 8622, 2071, 438, 67485, 53, 187895, 23, 51712, 2, 250004] @slow def test_enro_generate_one(self): batch: BatchEncoding = self.tokenizer( ["UN Chief Says There Is No Military Solution in Syria"], return_tensors="pt" ).to(torch_device) translated_tokens = self.model.generate(**batch) decoded = self.tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) self.assertEqual(self.tgt_text[0], decoded[0]) # self.assertEqual(self.tgt_text[1], decoded[1]) @slow def test_enro_generate_batch(self): batch: BatchEncoding = self.tokenizer(self.src_text, return_tensors="pt", padding=True, truncation=True).to( torch_device ) translated_tokens = self.model.generate(**batch) decoded = self.tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) assert self.tgt_text == decoded def test_mbart_enro_config(self): mbart_models = ["facebook/mbart-large-en-ro"] expected = {"scale_embedding": True, "output_past": True} for name in mbart_models: config = MBartConfig.from_pretrained(name) for k, v in expected.items(): try: self.assertEqual(v, getattr(config, k)) except AssertionError as e: e.args += (name, k) raise def test_mbart_fast_forward(self): config = MBartConfig( vocab_size=99, d_model=24, encoder_layers=2, decoder_layers=2, encoder_attention_heads=2, decoder_attention_heads=2, encoder_ffn_dim=32, decoder_ffn_dim=32, max_position_embeddings=48, add_final_layer_norm=True, ) lm_model = MBartForConditionalGeneration(config).to(torch_device) context = torch.tensor( [[71, 82, 18, 33, 46, 91, 2], [68, 34, 26, 58, 30, 2, 1]], device=torch_device, dtype=torch.long ) summary = torch.tensor([[82, 71, 82, 18, 2], [58, 68, 2, 1, 1]], device=torch_device, dtype=torch.long) result = lm_model(input_ids=context, decoder_input_ids=summary, labels=summary) expected_shape = (*summary.shape, config.vocab_size) self.assertEqual(result.logits.shape, expected_shape) @require_torch 
@require_sentencepiece @require_tokenizers class MBartCC25IntegrationTest(AbstractSeq2SeqIntegrationTest): checkpoint_name = "facebook/mbart-large-cc25" src_text = [ " UN Chief Says There Is No Military Solution in Syria", " I ate lunch twice yesterday", ] tgt_text = ["Şeful ONU declară că nu există o soluţie militară în Siria", "to be padded"] @unittest.skip(reason="This test is broken, still generates english") def test_cc25_generate(self): inputs = self.tokenizer([self.src_text[0]], return_tensors="pt").to(torch_device) translated_tokens = self.model.generate( input_ids=inputs["input_ids"].to(torch_device), decoder_start_token_id=self.tokenizer.lang_code_to_id["ro_RO"], ) decoded = self.tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) self.assertEqual(self.tgt_text[0], decoded[0]) @slow def test_fill_mask(self): inputs = self.tokenizer(["One of the best <mask> I ever read!"], return_tensors="pt").to(torch_device) outputs = self.model.generate( inputs["input_ids"], decoder_start_token_id=self.tokenizer.lang_code_to_id["en_XX"], num_beams=1 ) prediction: str = self.tokenizer.batch_decode( outputs, clean_up_tokenization_spaces=True, skip_special_tokens=True )[0] self.assertEqual(prediction, "of the best books I ever read!") class MBartStandaloneDecoderModelTester: def __init__( self, parent, vocab_size=99, batch_size=13, d_model=16, decoder_seq_length=7, is_training=True, is_decoder=True, use_attention_mask=True, use_cache=False, use_labels=True, decoder_start_token_id=2, decoder_ffn_dim=32, decoder_layers=2, encoder_attention_heads=4, decoder_attention_heads=4, max_position_embeddings=50, is_encoder_decoder=False, pad_token_id=0, bos_token_id=1, eos_token_id=2, scope=None, ): self.parent = parent self.batch_size = batch_size self.decoder_seq_length = decoder_seq_length # For common tests self.seq_length = self.decoder_seq_length self.is_training = is_training self.use_attention_mask = use_attention_mask self.use_labels = use_labels self.vocab_size = vocab_size self.d_model = d_model self.hidden_size = d_model self.num_hidden_layers = decoder_layers self.decoder_layers = decoder_layers self.decoder_ffn_dim = decoder_ffn_dim self.encoder_attention_heads = encoder_attention_heads self.decoder_attention_heads = decoder_attention_heads self.num_attention_heads = decoder_attention_heads self.eos_token_id = eos_token_id self.bos_token_id = bos_token_id self.pad_token_id = pad_token_id self.decoder_start_token_id = decoder_start_token_id self.use_cache = use_cache self.max_position_embeddings = max_position_embeddings self.is_encoder_decoder = is_encoder_decoder self.scope = None self.decoder_key_length = decoder_seq_length self.base_model_out_len = 2 self.decoder_attention_idx = 1 def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.decoder_seq_length], self.vocab_size) attention_mask = None if self.use_attention_mask: attention_mask = ids_tensor([self.batch_size, self.decoder_seq_length], vocab_size=2) lm_labels = None if self.use_labels: lm_labels = ids_tensor([self.batch_size, self.decoder_seq_length], self.vocab_size) config = MBartConfig( vocab_size=self.vocab_size, d_model=self.d_model, decoder_layers=self.decoder_layers, num_hidden_layers=self.decoder_layers, decoder_ffn_dim=self.decoder_ffn_dim, encoder_attention_heads=self.encoder_attention_heads, decoder_attention_heads=self.decoder_attention_heads, eos_token_id=self.eos_token_id, bos_token_id=self.bos_token_id, use_cache=self.use_cache, pad_token_id=self.pad_token_id, 
decoder_start_token_id=self.decoder_start_token_id, max_position_embeddings=self.max_position_embeddings, is_encoder_decoder=self.is_encoder_decoder, ) return ( config, input_ids, attention_mask, lm_labels, ) def create_and_check_decoder_model_past( self, config, input_ids, attention_mask, lm_labels, ): config.use_cache = True model = MBartDecoder(config=config).to(torch_device).eval() # first forward pass outputs = model(input_ids, use_cache=True) outputs_use_cache_conf = model(input_ids) outputs_no_past = model(input_ids, use_cache=False) self.parent.assertTrue(len(outputs) == len(outputs_use_cache_conf)) self.parent.assertTrue(len(outputs) == len(outputs_no_past) + 1) past_key_values = outputs["past_key_values"] # create hypothetical next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size) # append to next input_ids and next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) output_from_no_past = model(next_input_ids)["last_hidden_state"] output_from_past = model(next_tokens, past_key_values=past_key_values)["last_hidden_state"] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, next_input_ids.shape[-1] - 1, random_slice_idx].detach() output_from_past_slice = output_from_past[:, 0, random_slice_idx].detach() # test that outputs are equal for slice assert torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3) def create_and_check_decoder_model_attention_mask_past( self, config, input_ids, attention_mask, lm_labels, ): model = MBartDecoder(config=config).to(torch_device).eval() # create attention mask attn_mask = torch.ones(input_ids.shape, dtype=torch.long, device=torch_device) half_seq_length = input_ids.shape[-1] // 2 attn_mask[:, half_seq_length:] = 0 # first forward pass past_key_values = model(input_ids, attention_mask=attn_mask, use_cache=True)["past_key_values"] # create hypothetical next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size) # change a random masked slice from input_ids random_seq_idx_to_change = ids_tensor((1,), half_seq_length).item() + 1 random_other_next_tokens = ids_tensor((self.batch_size, 1), config.vocab_size).squeeze(-1) input_ids[:, -random_seq_idx_to_change] = random_other_next_tokens # append to next input_ids and attn_mask next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) attn_mask = torch.cat( [attn_mask, torch.ones((attn_mask.shape[0], 1), dtype=torch.long, device=torch_device)], dim=1, ) # get two different outputs output_from_no_past = model(next_input_ids, attention_mask=attn_mask)["last_hidden_state"] output_from_past = model( next_tokens, attention_mask=attn_mask, past_key_values=past_key_values, use_cache=True )["last_hidden_state"] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, next_input_ids.shape[-1] - 1, random_slice_idx].detach() output_from_past_slice = output_from_past[:, 0, random_slice_idx].detach() # test that outputs are equal for slice assert torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-3) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, attention_mask, lm_labels, ) = config_and_inputs inputs_dict = { "input_ids": input_ids, "attention_mask": attention_mask, } return config, inputs_dict @require_torch class 
MBartStandaloneDecoderModelTest(ModelTesterMixin, GenerationTesterMixin, unittest.TestCase): all_model_classes = (MBartDecoder, MBartForCausalLM) if is_torch_available() else () test_pruning = False is_encoder_decoder = False def setUp( self, ): self.model_tester = MBartStandaloneDecoderModelTester(self, is_training=False) self.config_tester = ConfigTester(self, config_class=MBartConfig) def test_config(self): self.config_tester.run_common_tests() def test_decoder_model_past(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_decoder_model_past(*config_and_inputs) def test_decoder_model_attn_mask_past(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_decoder_model_attention_mask_past(*config_and_inputs) @unittest.skip(reason="Decoder cannot retain gradients") def test_retain_grad_hidden_states_attentions(self): return @unittest.skip(reason="Decoder cannot retain gradients") def test_flex_attention_with_grads(self): return
transformers/tests/models/mbart/test_modeling_mbart.py/0
{ "file_path": "transformers/tests/models/mbart/test_modeling_mbart.py", "repo_id": "transformers", "token_count": 13132 }
592
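# A compact sketch of the EN->RO translation flow the MBart integration tests above
# exercise: tokenize the source sentence and call generate() directly. The en-ro
# checkpoint already carries the Romanian language code as its decoder start token,
# which is why the test passes no extra generation arguments; the expected output in the
# final comment is the reference string from the test, not a guaranteed result.
from transformers import AutoTokenizer, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")

batch = tokenizer(["UN Chief Says There Is No Military Solution in Syria"], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
# expected: "Şeful ONU declară că nu există o soluţie militară în Siria"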
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from functools import lru_cache from transformers.models.mluke.tokenization_mluke import MLukeTokenizer from transformers.testing_utils import get_tests_dir, require_torch, slow from ...test_tokenization_common import TokenizerTesterMixin, use_cache_if_possible SAMPLE_VOCAB = get_tests_dir("fixtures/test_sentencepiece.model") SAMPLE_ENTITY_VOCAB = get_tests_dir("fixtures/test_entity_vocab.json") class MLukeTokenizerTest(TokenizerTesterMixin, unittest.TestCase): from_pretrained_id = "studio-ousia/mluke-base" tokenizer_class = MLukeTokenizer test_rust_tokenizer = False from_pretrained_kwargs = {"cls_token": "<s>"} @classmethod def setUpClass(cls): super().setUpClass() cls.special_tokens_map = {"entity_token_1": "<ent>", "entity_token_2": "<ent2>"} @classmethod @use_cache_if_possible @lru_cache(maxsize=64) def get_tokenizer(cls, task=None, **kwargs): kwargs.update(cls.special_tokens_map) kwargs.update({"task": task}) tokenizer = MLukeTokenizer(vocab_file=SAMPLE_VOCAB, entity_vocab_file=SAMPLE_ENTITY_VOCAB, **kwargs) return tokenizer def get_input_output_texts(self, tokenizer): input_text = "lower newer" output_text = "lower newer" return input_text, output_text def test_full_tokenizer(self): tokenizer = self.get_tokenizer() text = "lower newer" spm_tokens = ["▁l", "ow", "er", "▁new", "er"] tokens = tokenizer.tokenize(text) self.assertListEqual(tokens, spm_tokens) input_tokens = tokens + [tokenizer.unk_token] input_spm_tokens = [149, 116, 40, 410, 40] + [3] self.assertListEqual(tokenizer.convert_tokens_to_ids(input_tokens), input_spm_tokens) def mluke_dict_integration_testing(self): tokenizer = self.get_tokenizer() self.assertListEqual(tokenizer.encode("Hello world!", add_special_tokens=False), [35378, 8999, 38]) self.assertListEqual( tokenizer.encode("Hello world! 
cécé herlolip 418", add_special_tokens=False), [35378, 8999, 38, 33273, 11676, 604, 365, 21392, 201, 1819], ) def test_sequence_builders(self): tokenizer = self.tokenizer_class.from_pretrained("hf-internal-testing/tiny-random-mluke") text = tokenizer.encode("sequence builders", add_special_tokens=False) text_2 = tokenizer.encode("multi-sequence build", add_special_tokens=False) encoded_text_from_decode = tokenizer.encode( "sequence builders", add_special_tokens=True, add_prefix_space=False ) encoded_pair_from_decode = tokenizer.encode( "sequence builders", "multi-sequence build", add_special_tokens=True, add_prefix_space=False ) encoded_sentence = tokenizer.build_inputs_with_special_tokens(text) encoded_pair = tokenizer.build_inputs_with_special_tokens(text, text_2) self.assertEqual(encoded_sentence, encoded_text_from_decode) self.assertEqual(encoded_pair, encoded_pair_from_decode) def get_clean_sequence(self, tokenizer, max_length=20) -> tuple[str, list]: txt = "Beyonce lives in Los Angeles" ids = tokenizer.encode(txt, add_special_tokens=False) return txt, ids @unittest.skip def test_pretokenized_inputs(self): pass def test_embeded_special_tokens(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest(f"{tokenizer.__class__.__name__} ({pretrained_name})"): tokenizer_r = self.get_rust_tokenizer(pretrained_name, **kwargs) tokenizer_p = self.get_tokenizer(pretrained_name, **kwargs) sentence = "A, <mask> AllenNLP sentence." tokens_r = tokenizer_r.encode_plus(sentence, add_special_tokens=True, return_token_type_ids=True) tokens_p = tokenizer_p.encode_plus(sentence, add_special_tokens=True, return_token_type_ids=True) # token_type_ids should put 0 everywhere self.assertEqual(sum(tokens_r["token_type_ids"]), sum(tokens_p["token_type_ids"])) # attention_mask should put 1 everywhere, so sum over length should be 1 self.assertEqual( sum(tokens_r["attention_mask"]) / len(tokens_r["attention_mask"]), sum(tokens_p["attention_mask"]) / len(tokens_p["attention_mask"]), ) tokens_p_str = tokenizer_p.convert_ids_to_tokens(tokens_p["input_ids"]) # Rust correctly handles the space before the mask while python doesn't self.assertSequenceEqual(tokens_p["input_ids"], [0, 250, 6, 50264, 3823, 487, 21992, 3645, 4, 2]) self.assertSequenceEqual( tokens_p_str, ["<s>", "A", ",", "<mask>", "ĠAllen", "N", "LP", "Ġsentence", ".", "</s>"] ) def test_padding_entity_inputs(self): tokenizer = self.get_tokenizer() sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." span = (15, 34) pad_id = tokenizer.entity_vocab["[PAD]"] mask_id = tokenizer.entity_vocab["[MASK]"] encoding = tokenizer([sentence, sentence], entity_spans=[[span], [span, span]], padding=True) self.assertEqual(encoding["entity_ids"], [[mask_id, pad_id], [mask_id, mask_id]]) # test with a sentence with no entity encoding = tokenizer([sentence, sentence], entity_spans=[[], [span, span]], padding=True) self.assertEqual(encoding["entity_ids"], [[pad_id, pad_id], [mask_id, mask_id]]) def test_if_tokenize_single_text_raise_error_with_invalid_inputs(self): tokenizer = self.get_tokenizer() sentence = "ISO 639-3 uses the code fas for the dialects spoken across Iran and Afghanistan." 
entities = ["DUMMY"] spans = [(0, 9)] with self.assertRaises(ValueError): tokenizer(sentence, entities=tuple(entities), entity_spans=spans) with self.assertRaises(TypeError): tokenizer(sentence, entities=entities, entity_spans=tuple(spans)) with self.assertRaises(ValueError): tokenizer(sentence, entities=[0], entity_spans=spans) with self.assertRaises(ValueError): tokenizer(sentence, entities=entities, entity_spans=[0]) with self.assertRaises(ValueError): tokenizer(sentence, entities=entities, entity_spans=spans + [(0, 9)]) def test_if_tokenize_entity_classification_raise_error_with_invalid_inputs(self): tokenizer = self.get_tokenizer(task="entity_classification") sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." span = (15, 34) with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[]) with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[span, span]) with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[0]) def test_if_tokenize_entity_pair_classification_raise_error_with_invalid_inputs(self): tokenizer = self.get_tokenizer(task="entity_pair_classification") sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." # head and tail information with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[]) with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[0, 0]) def test_if_tokenize_entity_span_classification_raise_error_with_invalid_inputs(self): tokenizer = self.get_tokenizer(task="entity_span_classification") sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[]) with self.assertRaises(ValueError): tokenizer(sentence, entity_spans=[0, 0, 0]) @slow @require_torch class MLukeTokenizerIntegrationTests(unittest.TestCase): tokenizer_class = MLukeTokenizer from_pretrained_kwargs = {"cls_token": "<s>"} @classmethod def setUpClass(cls): cls.tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base", return_token_type_ids=True) cls.entity_classification_tokenizer = MLukeTokenizer.from_pretrained( "studio-ousia/mluke-base", return_token_type_ids=True, task="entity_classification" ) cls.entity_pair_tokenizer = MLukeTokenizer.from_pretrained( "studio-ousia/mluke-base", return_token_type_ids=True, task="entity_pair_classification" ) cls.entity_span_tokenizer = MLukeTokenizer.from_pretrained( "studio-ousia/mluke-base", return_token_type_ids=True, task="entity_span_classification" ) def test_single_text_no_padding_or_truncation(self): tokenizer = self.tokenizer sentence = "ISO 639-3 uses the code fas for the dialects spoken across Iran and アフガニスタン (Afghanistan)." 
entities = ["en:ISO 639-3", "DUMMY_ENTITY", "ja:アフガニスタン", "en:Afghanistan"] spans = [(0, 9), (59, 63), (68, 75), (77, 88)] encoding = tokenizer(sentence, entities=entities, entity_spans=spans, return_token_type_ids=True) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s> ISO 639-3 uses the code fas for the dialects spoken across Iran and アフガニスタン ( Afghanistan ).</s>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][1:5], spaces_between_special_tokens=False), "ISO 639-3" ) self.assertEqual(tokenizer.decode(encoding["input_ids"][17], spaces_between_special_tokens=False), "Iran") self.assertEqual( tokenizer.decode(encoding["input_ids"][19:25], spaces_between_special_tokens=False), "アフガニスタン" ) self.assertEqual( tokenizer.decode(encoding["input_ids"][26], spaces_between_special_tokens=False), "Afghanistan" ) self.assertEqual( encoding["entity_ids"], [ tokenizer.entity_vocab["en:ISO 639-3"], tokenizer.entity_vocab["[UNK]"], tokenizer.entity_vocab["ja:アフガニスタン"], tokenizer.entity_vocab["en:Afghanistan"], ], ) self.assertEqual(encoding["entity_attention_mask"], [1, 1, 1, 1]) self.assertEqual(encoding["entity_token_type_ids"], [0, 0, 0, 0]) # fmt: off self.assertEqual( encoding["entity_position_ids"], [ [1, 2, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [17, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [19, 20, 21, 22, 23, 24, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [26, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] ] ) # fmt: on def test_single_text_only_entity_spans_no_padding_or_truncation(self): tokenizer = self.tokenizer sentence = "ISO 639-3 uses the code fas for the dialects spoken across Iran and アフガニスタン (Afghanistan)." 
entities = ["en:ISO 639-3", "DUMMY_ENTITY", "ja:アフガニスタン", "en:Afghanistan"] spans = [(0, 9), (59, 63), (68, 75), (77, 88)] encoding = tokenizer(sentence, entities=entities, entity_spans=spans, return_token_type_ids=True) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s> ISO 639-3 uses the code fas for the dialects spoken across Iran and アフガニスタン ( Afghanistan ).</s>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][1:5], spaces_between_special_tokens=False), "ISO 639-3" ) self.assertEqual(tokenizer.decode(encoding["input_ids"][17], spaces_between_special_tokens=False), "Iran") self.assertEqual( tokenizer.decode(encoding["input_ids"][20:25], spaces_between_special_tokens=False), "アフガニスタン" ) self.assertEqual( tokenizer.decode(encoding["input_ids"][26], spaces_between_special_tokens=False), "Afghanistan" ) self.assertEqual( encoding["entity_ids"], [ tokenizer.entity_vocab["en:ISO 639-3"], tokenizer.entity_vocab["[UNK]"], tokenizer.entity_vocab["ja:アフガニスタン"], tokenizer.entity_vocab["en:Afghanistan"], ], ) self.assertEqual(encoding["entity_attention_mask"], [1, 1, 1, 1]) self.assertEqual(encoding["entity_token_type_ids"], [0, 0, 0, 0]) # fmt: off self.assertEqual( encoding["entity_position_ids"], [ [1, 2, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [17, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [19, 20, 21, 22, 23, 24, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [26, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] ] ) # fmt: on def test_single_text_padding_pytorch_tensors(self): tokenizer = self.tokenizer sentence = "ISO 639-3 uses the code fas for the dialects spoken across Iran and アフガニスタン (Afghanistan)." entities = ["en:ISO 639-3", "DUMMY_ENTITY", "ja:アフガニスタン", "en:Afghanistan"] spans = [(0, 9), (59, 63), (68, 75), (77, 88)] encoding = tokenizer( sentence, entities=entities, entity_spans=spans, return_token_type_ids=True, padding="max_length", max_length=30, max_entity_length=16, return_tensors="pt", ) # test words self.assertEqual(encoding["input_ids"].shape, (1, 30)) self.assertEqual(encoding["attention_mask"].shape, (1, 30)) self.assertEqual(encoding["token_type_ids"].shape, (1, 30)) # test entities self.assertEqual(encoding["entity_ids"].shape, (1, 16)) self.assertEqual(encoding["entity_attention_mask"].shape, (1, 16)) self.assertEqual(encoding["entity_token_type_ids"].shape, (1, 16)) self.assertEqual(encoding["entity_position_ids"].shape, (1, 16, tokenizer.max_mention_length)) def test_text_pair_no_padding_or_truncation(self): tokenizer = self.tokenizer sentence = "ISO 639-3 uses the code fas" sentence_pair = "for the dialects spoken across Iran and アフガニスタン (Afghanistan)." 
entities = ["en:ISO 639-3"] entities_pair = ["DUMMY_ENTITY", "ja:アフガニスタン", "en:Afghanistan"] spans = [(0, 9)] spans_pair = [(31, 35), (40, 47), (49, 60)] encoding = tokenizer( sentence, sentence_pair, entities=entities, entities_pair=entities_pair, entity_spans=spans, entity_spans_pair=spans_pair, return_token_type_ids=True, ) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s> ISO 639-3 uses the code fas</s></s> for the dialects spoken across Iran and アフガニスタン ( Afghanistan" " ).</s>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][1:5], spaces_between_special_tokens=False), "ISO 639-3" ) self.assertEqual(tokenizer.decode(encoding["input_ids"][19], spaces_between_special_tokens=False), "Iran") self.assertEqual( tokenizer.decode(encoding["input_ids"][21:27], spaces_between_special_tokens=False), "アフガニスタン" ) self.assertEqual( tokenizer.decode(encoding["input_ids"][28], spaces_between_special_tokens=False), "Afghanistan" ) self.assertEqual( encoding["entity_ids"], [ tokenizer.entity_vocab["en:ISO 639-3"], tokenizer.entity_vocab["[UNK]"], tokenizer.entity_vocab["ja:アフガニスタン"], tokenizer.entity_vocab["en:Afghanistan"], ], ) self.assertEqual(encoding["entity_attention_mask"], [1, 1, 1, 1]) self.assertEqual(encoding["entity_token_type_ids"], [0, 0, 0, 0]) # fmt: off self.assertEqual( encoding["entity_position_ids"], [ [1, 2, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [19, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [21, 22, 23, 24, 25, 26, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [28, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] ] ) # fmt: on def test_text_pair_only_entity_spans_no_padding_or_truncation(self): tokenizer = self.tokenizer sentence = "ISO 639-3 uses the code fas" sentence_pair = "for the dialects spoken across Iran and アフガニスタン (Afghanistan)." 
entities = ["en:ISO 639-3"] entities_pair = ["DUMMY_ENTITY", "ja:アフガニスタン", "en:Afghanistan"] spans = [(0, 9)] spans_pair = [(31, 35), (40, 47), (49, 60)] encoding = tokenizer( sentence, sentence_pair, entities=entities, entities_pair=entities_pair, entity_spans=spans, entity_spans_pair=spans_pair, return_token_type_ids=True, ) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s> ISO 639-3 uses the code fas</s></s> for the dialects spoken across Iran and アフガニスタン ( Afghanistan" " ).</s>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][1:5], spaces_between_special_tokens=False), "ISO 639-3" ) self.assertEqual(tokenizer.decode(encoding["input_ids"][19], spaces_between_special_tokens=False), "Iran") self.assertEqual( tokenizer.decode(encoding["input_ids"][21:27], spaces_between_special_tokens=False), "アフガニスタン" ) self.assertEqual( tokenizer.decode(encoding["input_ids"][28], spaces_between_special_tokens=False), "Afghanistan" ) self.assertEqual( encoding["entity_ids"], [ tokenizer.entity_vocab["en:ISO 639-3"], tokenizer.entity_vocab["[UNK]"], tokenizer.entity_vocab["ja:アフガニスタン"], tokenizer.entity_vocab["en:Afghanistan"], ], ) # fmt: off self.assertEqual( encoding["entity_position_ids"], [ [1, 2, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [19, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [21, 22, 23, 24, 25, 26, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [28, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] ] ) # fmt: on def test_text_pair_padding_pytorch_tensors(self): tokenizer = self.tokenizer sentence = "ISO 639-3 uses the code fas" sentence_pair = "for the dialects spoken across Iran and アフガニスタン (Afghanistan)." entities = ["en:ISO 639-3"] entities_pair = ["DUMMY_ENTITY", "ja:アフガニスタン", "en:Afghanistan"] spans = [(0, 9)] spans_pair = [(31, 35), (40, 47), (49, 60)] encoding = tokenizer( sentence, sentence_pair, entities=entities, entities_pair=entities_pair, entity_spans=spans, entity_spans_pair=spans_pair, return_token_type_ids=True, padding="max_length", max_length=40, max_entity_length=16, return_tensors="pt", ) # test words self.assertEqual(encoding["input_ids"].shape, (1, 40)) self.assertEqual(encoding["attention_mask"].shape, (1, 40)) self.assertEqual(encoding["token_type_ids"].shape, (1, 40)) # test entities self.assertEqual(encoding["entity_ids"].shape, (1, 16)) self.assertEqual(encoding["entity_attention_mask"].shape, (1, 16)) self.assertEqual(encoding["entity_token_type_ids"].shape, (1, 16)) self.assertEqual(encoding["entity_position_ids"].shape, (1, 16, tokenizer.max_mention_length)) def test_entity_classification_no_padding_or_truncation(self): tokenizer = self.entity_classification_tokenizer sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." 
span = (15, 34) encoding = tokenizer(sentence, entity_spans=[span], return_token_type_ids=True) # test words self.assertEqual(len(encoding["input_ids"]), 23) self.assertEqual(len(encoding["attention_mask"]), 23) self.assertEqual(len(encoding["token_type_ids"]), 23) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s> Japanese is an<ent>East Asian language<ent>spoken by about 128 million people, primarily in" " Japan.</s>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][4:9], spaces_between_special_tokens=False), "<ent>East Asian language<ent>", ) # test entities mask_id = tokenizer.entity_vocab["[MASK]"] self.assertEqual(encoding["entity_ids"], [mask_id]) self.assertEqual(encoding["entity_attention_mask"], [1]) self.assertEqual(encoding["entity_token_type_ids"], [0]) # fmt: off self.assertEqual( encoding["entity_position_ids"], [[4, 5, 6, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]] ) # fmt: on def test_entity_classification_padding_pytorch_tensors(self): tokenizer = self.entity_classification_tokenizer sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." span = (15, 34) encoding = tokenizer( sentence, entity_spans=[span], return_token_type_ids=True, padding="max_length", return_tensors="pt" ) # test words self.assertEqual(encoding["input_ids"].shape, (1, 512)) self.assertEqual(encoding["attention_mask"].shape, (1, 512)) self.assertEqual(encoding["token_type_ids"].shape, (1, 512)) # test entities self.assertEqual(encoding["entity_ids"].shape, (1, 1)) self.assertEqual(encoding["entity_attention_mask"].shape, (1, 1)) self.assertEqual(encoding["entity_token_type_ids"].shape, (1, 1)) self.assertEqual( encoding["entity_position_ids"].shape, (1, tokenizer.max_entity_length, tokenizer.max_mention_length) ) def test_entity_pair_classification_no_padding_or_truncation(self): tokenizer = self.entity_pair_tokenizer sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." # head and tail information spans = [(0, 8), (84, 89)] encoding = tokenizer(sentence, entity_spans=spans, return_token_type_ids=True) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s><ent>Japanese<ent>is an East Asian language spoken by about 128 million people, primarily" " in<ent2>Japan<ent2>.</s>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][1:4], spaces_between_special_tokens=False), "<ent>Japanese<ent>", ) self.assertEqual( tokenizer.decode(encoding["input_ids"][20:23], spaces_between_special_tokens=False), "<ent2>Japan<ent2>" ) mask_id = tokenizer.entity_vocab["[MASK]"] mask2_id = tokenizer.entity_vocab["[MASK2]"] self.assertEqual(encoding["entity_ids"], [mask_id, mask2_id]) self.assertEqual(encoding["entity_attention_mask"], [1, 1]) self.assertEqual(encoding["entity_token_type_ids"], [0, 0]) # fmt: off self.assertEqual( encoding["entity_position_ids"], [ [1, 2, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [20, 21, 22, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] ] ) # fmt: on def test_entity_pair_classification_padding_pytorch_tensors(self): tokenizer = self.entity_pair_tokenizer sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." 
# head and tail information spans = [(0, 8), (84, 89)] encoding = tokenizer( sentence, entity_spans=spans, return_token_type_ids=True, padding="max_length", max_length=30, return_tensors="pt", ) # test words self.assertEqual(encoding["input_ids"].shape, (1, 30)) self.assertEqual(encoding["attention_mask"].shape, (1, 30)) self.assertEqual(encoding["token_type_ids"].shape, (1, 30)) # test entities self.assertEqual(encoding["entity_ids"].shape, (1, 2)) self.assertEqual(encoding["entity_attention_mask"].shape, (1, 2)) self.assertEqual(encoding["entity_token_type_ids"].shape, (1, 2)) self.assertEqual( encoding["entity_position_ids"].shape, (1, tokenizer.max_entity_length, tokenizer.max_mention_length) ) def test_entity_span_classification_no_padding_or_truncation(self): tokenizer = self.entity_span_tokenizer sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." spans = [(0, 8), (15, 34), (84, 89)] encoding = tokenizer(sentence, entity_spans=spans, return_token_type_ids=True) self.assertEqual( tokenizer.decode(encoding["input_ids"], spaces_between_special_tokens=False), "<s> Japanese is an East Asian language spoken by about 128 million people, primarily in Japan.</s>", ) mask_id = tokenizer.entity_vocab["[MASK]"] self.assertEqual(encoding["entity_ids"], [mask_id, mask_id, mask_id]) self.assertEqual(encoding["entity_attention_mask"], [1, 1, 1]) self.assertEqual(encoding["entity_token_type_ids"], [0, 0, 0]) # fmt: off self.assertEqual( encoding["entity_position_ids"], [ [1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [4, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1], [18, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]] ) # fmt: on self.assertEqual(encoding["entity_start_positions"], [1, 4, 18]) self.assertEqual(encoding["entity_end_positions"], [1, 6, 18]) def test_entity_span_classification_padding_pytorch_tensors(self): tokenizer = self.entity_span_tokenizer sentence = "Japanese is an East Asian language spoken by about 128 million people, primarily in Japan." spans = [(0, 8), (15, 34), (84, 89)] encoding = tokenizer( sentence, entity_spans=spans, return_token_type_ids=True, padding="max_length", max_length=30, max_entity_length=16, return_tensors="pt", ) # test words self.assertEqual(encoding["input_ids"].shape, (1, 30)) self.assertEqual(encoding["attention_mask"].shape, (1, 30)) self.assertEqual(encoding["token_type_ids"].shape, (1, 30)) # test entities self.assertEqual(encoding["entity_ids"].shape, (1, 16)) self.assertEqual(encoding["entity_attention_mask"].shape, (1, 16)) self.assertEqual(encoding["entity_token_type_ids"].shape, (1, 16)) self.assertEqual(encoding["entity_position_ids"].shape, (1, 16, tokenizer.max_mention_length)) self.assertEqual(encoding["entity_start_positions"].shape, (1, 16)) self.assertEqual(encoding["entity_end_positions"].shape, (1, 16))
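# ----------------------------------------------------------------------------
# Illustrative usage sketch (editor addition, not part of the test file above).
# A minimal, hedged example of the entity-aware encoding these tests exercise:
# character-level entity_spans are turned into entity_ids and entity_position_ids.
# Assumes the "studio-ousia/mluke-base" checkpoint used by the integration tests
# is reachable; without an `entities` list every span is encoded as the [MASK]
# entity (compare test_padding_entity_inputs above).
from transformers import MLukeTokenizer

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base", return_token_type_ids=True)
sentence = "ISO 639-3 uses the code fas for the dialects spoken across Iran and Afghanistan."
spans = [(0, 9), (59, 63)]  # character offsets of "ISO 639-3" and "Iran"
encoding = tokenizer(sentence, entity_spans=spans, return_token_type_ids=True)
print(encoding["entity_ids"])           # one id per span
print(encoding["entity_position_ids"])  # token indices covered by each span, padded with -1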
transformers/tests/models/mluke/test_tokenization_mluke.py/0
{ "file_path": "transformers/tests/models/mluke/test_tokenization_mluke.py", "repo_id": "transformers", "token_count": 14734 }
593
# Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch MobileViTV2 model.""" import unittest from transformers import MobileViTV2Config from transformers.testing_utils import ( Expectations, require_torch, require_torch_multi_gpu, require_vision, slow, torch_device, ) from transformers.utils import cached_property, is_torch_available, is_vision_available from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import MobileViTV2ForImageClassification, MobileViTV2ForSemanticSegmentation, MobileViTV2Model from transformers.models.mobilevitv2.modeling_mobilevitv2 import ( make_divisible, ) if is_vision_available(): from PIL import Image from transformers import MobileViTImageProcessor class MobileViTV2ConfigTester(ConfigTester): def create_and_test_config_common_properties(self): config = self.config_class(**self.inputs_dict) self.parent.assertTrue(hasattr(config, "width_multiplier")) class MobileViTV2ModelTester: def __init__( self, parent, batch_size=13, image_size=64, patch_size=2, num_channels=3, hidden_act="swish", conv_kernel_size=3, output_stride=32, classifier_dropout_prob=0.1, initializer_range=0.02, is_training=True, use_labels=True, num_labels=10, scope=None, width_multiplier=0.25, ffn_dropout=0.0, attn_dropout=0.0, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.patch_size = patch_size self.num_channels = num_channels self.last_hidden_size = make_divisible(512 * width_multiplier, divisor=8) self.hidden_act = hidden_act self.conv_kernel_size = conv_kernel_size self.output_stride = output_stride self.classifier_dropout_prob = classifier_dropout_prob self.use_labels = use_labels self.is_training = is_training self.num_labels = num_labels self.initializer_range = initializer_range self.scope = scope self.width_multiplier = width_multiplier self.ffn_dropout_prob = ffn_dropout self.attn_dropout_prob = attn_dropout def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) labels = None pixel_labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.num_labels) pixel_labels = ids_tensor([self.batch_size, self.image_size, self.image_size], self.num_labels) config = self.get_config() return config, pixel_values, labels, pixel_labels def get_config(self): return MobileViTV2Config( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, hidden_act=self.hidden_act, conv_kernel_size=self.conv_kernel_size, output_stride=self.output_stride, classifier_dropout_prob=self.classifier_dropout_prob, initializer_range=self.initializer_range, width_multiplier=self.width_multiplier, ffn_dropout=self.ffn_dropout_prob, attn_dropout=self.attn_dropout_prob, base_attn_unit_dims=[16, 
24, 32], n_attn_blocks=[1, 1, 2], aspp_out_channels=32, ) def create_and_check_model(self, config, pixel_values, labels, pixel_labels): model = MobileViTV2Model(config=config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual( result.last_hidden_state.shape, ( self.batch_size, self.last_hidden_size, self.image_size // self.output_stride, self.image_size // self.output_stride, ), ) def create_and_check_for_image_classification(self, config, pixel_values, labels, pixel_labels): config.num_labels = self.num_labels model = MobileViTV2ForImageClassification(config) model.to(torch_device) model.eval() result = model(pixel_values, labels=labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_labels)) def create_and_check_for_semantic_segmentation(self, config, pixel_values, labels, pixel_labels): config.num_labels = self.num_labels model = MobileViTV2ForSemanticSegmentation(config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual( result.logits.shape, ( self.batch_size, self.num_labels, self.image_size // self.output_stride, self.image_size // self.output_stride, ), ) result = model(pixel_values, labels=pixel_labels) self.parent.assertEqual( result.logits.shape, ( self.batch_size, self.num_labels, self.image_size // self.output_stride, self.image_size // self.output_stride, ), ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels, pixel_labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class MobileViTV2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as MobileViTV2 does not use input_ids, inputs_embeds, attention_mask and seq_length. 
""" all_model_classes = ( (MobileViTV2Model, MobileViTV2ForImageClassification, MobileViTV2ForSemanticSegmentation) if is_torch_available() else () ) pipeline_model_mapping = ( { "image-feature-extraction": MobileViTV2Model, "image-classification": MobileViTV2ForImageClassification, "image-segmentation": MobileViTV2ForSemanticSegmentation, } if is_torch_available() else {} ) test_pruning = False test_resize_embeddings = False test_head_masking = False has_attentions = False test_torch_exportable = True def setUp(self): self.model_tester = MobileViTV2ModelTester(self) self.config_tester = MobileViTV2ConfigTester(self, config_class=MobileViTV2Config, has_text_modality=False) def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="MobileViTV2 does not use inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="MobileViTV2 does not support input and output embeddings") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="MobileViTV2 does not output attentions") def test_attention_outputs(self): pass @require_torch_multi_gpu @unittest.skip(reason="Got `CUDA error: misaligned address` for tests after this one being run.") def test_multi_gpu_data_parallel_forward(self): pass def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_stages = 5 self.assertEqual(len(hidden_states), expected_num_stages) # MobileViTV2's feature maps are of shape (batch_size, num_channels, height, width) # with the width and height being successively divided by 2. 
divisor = 2 for i in range(len(hidden_states)): self.assertListEqual( list(hidden_states[i].shape[-2:]), [self.model_tester.image_size // divisor, self.model_tester.image_size // divisor], ) divisor *= 2 self.assertEqual(self.model_tester.output_stride, divisor // 2) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) def test_for_image_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_image_classification(*config_and_inputs) def test_for_semantic_segmentation(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_semantic_segmentation(*config_and_inputs) @slow def test_model_from_pretrained(self): model_name = "apple/mobilevitv2-1.0-imagenet1k-256" model = MobileViTV2Model.from_pretrained(model_name) self.assertIsNotNone(model) # We will verify our results on an image of cute cats def prepare_img(): image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png") return image @require_torch @require_vision class MobileViTV2ModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): return ( MobileViTImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256") if is_vision_available() else None ) @slow def test_inference_image_classification_head(self): model = MobileViTV2ForImageClassification.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256").to( torch_device ) image_processor = self.default_image_processor image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) # verify the logits expected_shape = torch.Size((1, 1000)) self.assertEqual(outputs.logits.shape, expected_shape) expectations = Expectations( { (None, None): [-1.6336e00, -7.3204e-02, -5.1883e-01], ("cuda", 8): [-1.6336, -0.0732, -0.5188], } ) expected_slice = torch.tensor(expectations.get_expectation()).to(torch_device) torch.testing.assert_close(outputs.logits[0, :3], expected_slice, rtol=2e-4, atol=2e-4) @slow def test_inference_semantic_segmentation(self): model = MobileViTV2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3") model = model.to(torch_device) image_processor = MobileViTImageProcessor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3") image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits # verify the logits expected_shape = torch.Size((1, 21, 32, 32)) self.assertEqual(logits.shape, expected_shape) expectations = Expectations( { (None, None): [ [[7.0863, 7.1525, 6.8201], [6.6931, 6.8770, 6.8933], [6.2978, 7.0366, 6.9636]], [[-3.7134, -3.6712, -3.6675], [-3.5825, -3.3549, -3.4777], [-3.3435, -3.3979, -3.2857]], [[-2.9329, -2.8003, -2.7369], [-3.0564, -2.4780, -2.0207], [-2.6889, -1.9298, -1.7640]], ], ("cuda", 8): [ [[7.0863, 7.1525, 6.8201], [6.6931, 6.8770, 6.8933], [6.2978, 7.0366, 6.9636]], [[-3.7134, -3.6712, -3.6675], [-3.5825, -3.3549, -3.4777], [-3.3435, -3.3979, -3.2857]], [[-2.9329, 
-2.8003, -2.7369], [-3.0564, -2.4780, -2.0207], [-2.6889, -1.9298, -1.7640]], ], } ) expected_slice = torch.tensor(expectations.get_expectation()).to(torch_device) torch.testing.assert_close(logits[0, :3, :3, :3], expected_slice, rtol=2e-4, atol=2e-4) @slow def test_post_processing_semantic_segmentation(self): model = MobileViTV2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3") model = model.to(torch_device) image_processor = MobileViTImageProcessor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3") image = prepare_img() inputs = image_processor(images=image, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) outputs.logits = outputs.logits.detach().cpu() segmentation = image_processor.post_process_semantic_segmentation(outputs=outputs, target_sizes=[(50, 60)]) expected_shape = torch.Size((50, 60)) self.assertEqual(segmentation[0].shape, expected_shape) segmentation = image_processor.post_process_semantic_segmentation(outputs=outputs) expected_shape = torch.Size((32, 32)) self.assertEqual(segmentation[0].shape, expected_shape)
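# ----------------------------------------------------------------------------
# Illustrative usage sketch (editor addition, not part of the test file above).
# A minimal, hedged example mirroring test_inference_image_classification_head:
# classify one image with the "apple/mobilevitv2-1.0-imagenet1k-256" checkpoint.
# Assumes torch, PIL and the checkpoint are available; the image path is the
# COCO fixture used by prepare_img() above.
import torch
from PIL import Image
from transformers import MobileViTImageProcessor, MobileViTV2ForImageClassification

checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"
image_processor = MobileViTImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)

image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # ImageNet logits of shape (1, 1000)
print(model.config.id2label[logits.argmax(-1).item()])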
transformers/tests/models/mobilevitv2/test_modeling_mobilevitv2.py/0
{ "file_path": "transformers/tests/models/mobilevitv2/test_modeling_mobilevitv2.py", "repo_id": "transformers", "token_count": 6682 }
594
# Copyright 2024 HuggingFace Inc. team. All rights reserved. # Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Nemotron model.""" import unittest from transformers import NemotronConfig, is_torch_available from transformers.testing_utils import ( Expectations, require_read_token, require_torch, require_torch_accelerator, slow, torch_device, ) from ...causal_lm_tester import CausalLMModelTest, CausalLMModelTester from ...test_configuration_common import ConfigTester if is_torch_available(): import torch from transformers import ( AutoTokenizer, NemotronForCausalLM, NemotronForQuestionAnswering, NemotronForSequenceClassification, NemotronForTokenClassification, NemotronModel, ) class NemotronModelTester(CausalLMModelTester): if is_torch_available(): config_class = NemotronConfig base_model_class = NemotronModel causal_lm_class = NemotronForCausalLM sequence_class = NemotronForSequenceClassification token_class = NemotronForTokenClassification @require_torch class NemotronModelTest(CausalLMModelTest, unittest.TestCase): model_tester_class = NemotronModelTester # Need to use `0.8` instead of `0.9` for `test_cpu_offload` # This is because we are hitting edge cases with the causal_mask buffer model_split_percents = [0.5, 0.7, 0.8] all_model_classes = ( ( NemotronModel, NemotronForCausalLM, NemotronForSequenceClassification, NemotronForQuestionAnswering, NemotronForTokenClassification, ) if is_torch_available() else () ) pipeline_model_mapping = ( { "feature-extraction": NemotronModel, "text-classification": NemotronForSequenceClassification, "text-generation": NemotronForCausalLM, "zero-shot": NemotronForSequenceClassification, "question-answering": NemotronForQuestionAnswering, "token-classification": NemotronForTokenClassification, } if is_torch_available() else {} ) test_headmasking = False test_pruning = False fx_compatible = False # used in `test_torch_compile_for_training` _torch_compile_train_cls = NemotronForCausalLM if is_torch_available() else None def setUp(self): self.model_tester = NemotronModelTester(self) self.config_tester = ConfigTester(self, config_class=NemotronConfig, hidden_size=37) @unittest.skip("Eager and SDPA do not produce the same outputs, thus this test fails") def test_model_outputs_equivalence(self, **kwargs): pass @require_torch_accelerator class NemotronIntegrationTest(unittest.TestCase): @slow @require_read_token def test_nemotron_8b_generation_sdpa(self): text = ["What is the largest planet in solar system?"] EXPECTED_TEXT = [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer", ] model_id = "thhaus/nemotron3-8b" model = NemotronForCausalLM.from_pretrained( model_id, dtype=torch.float16, device_map="auto", attn_implementation="sdpa" ) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(text, return_tensors="pt").to(torch_device) output = model.generate(**inputs, do_sample=False, max_new_tokens=10) output_text = 
tokenizer.batch_decode(output, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, output_text) @slow @require_read_token def test_nemotron_8b_generation_eager(self): text = ["What is the largest planet in solar system?"] EXPECTED_TEXTS = Expectations( { ("xpu", 3): [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer: What is the name of the 19", ], ("cuda", 7): [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer", ], } ) EXPECTED_TEXT = EXPECTED_TEXTS.get_expectation() model_id = "thhaus/nemotron3-8b" model = NemotronForCausalLM.from_pretrained( model_id, dtype=torch.float16, device_map="auto", attn_implementation="eager" ) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(text, return_tensors="pt").to(torch_device) output = model.generate(**inputs, do_sample=False) output_text = tokenizer.batch_decode(output, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, output_text) @slow @require_read_token def test_nemotron_8b_generation_fa2(self): text = ["What is the largest planet in solar system?"] EXPECTED_TEXT = [ "What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer", ] model_id = "thhaus/nemotron3-8b" model = NemotronForCausalLM.from_pretrained( model_id, dtype=torch.float16, device_map="auto", attn_implementation="flash_attention_2" ) tokenizer = AutoTokenizer.from_pretrained(model_id) inputs = tokenizer(text, return_tensors="pt").to(torch_device) output = model.generate(**inputs, do_sample=False) output_text = tokenizer.batch_decode(output, skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT, output_text)
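# ----------------------------------------------------------------------------
# Illustrative usage sketch (editor addition, not part of the test file above).
# A minimal, hedged sketch of the greedy generation path checked by
# test_nemotron_8b_generation_sdpa. Assumes an accelerator and access to the
# "thhaus/nemotron3-8b" checkpoint referenced in the tests.
import torch
from transformers import AutoTokenizer, NemotronForCausalLM

model_id = "thhaus/nemotron3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = NemotronForCausalLM.from_pretrained(
    model_id, dtype=torch.float16, device_map="auto", attn_implementation="sdpa"
)

inputs = tokenizer(["What is the largest planet in solar system?"], return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=False, max_new_tokens=10)
print(tokenizer.batch_decode(output, skip_special_tokens=True))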
transformers/tests/models/nemotron/test_modeling_nemotron.py/0
{ "file_path": "transformers/tests/models/nemotron/test_modeling_nemotron.py", "repo_id": "transformers", "token_count": 2559 }
595
# Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch PatchTST model.""" import inspect import random import tempfile import unittest from huggingface_hub import hf_hub_download from transformers import is_torch_available from transformers.models.auto import get_values from transformers.testing_utils import is_flaky, require_torch, slow, torch_device from transformers.utils import check_torch_load_is_safe from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin TOLERANCE = 1e-4 if is_torch_available(): import torch from transformers import ( MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING, MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING, PatchTSTConfig, PatchTSTForClassification, PatchTSTForPrediction, PatchTSTForPretraining, PatchTSTForRegression, PatchTSTModel, ) @require_torch class PatchTSTModelTester: def __init__( self, parent, batch_size=13, prediction_length=7, context_length=14, patch_length=5, patch_stride=5, num_input_channels=1, num_time_features=1, is_training=True, hidden_size=16, num_hidden_layers=2, num_attention_heads=4, intermediate_size=4, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, distil=False, seed=42, num_targets=2, mask_type="random", random_mask_ratio=0, ): self.parent = parent self.batch_size = batch_size self.prediction_length = prediction_length self.context_length = context_length self.patch_length = patch_length self.patch_stride = patch_stride self.num_input_channels = num_input_channels self.num_time_features = num_time_features self.is_training = is_training self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.mask_type = mask_type self.random_mask_ratio = random_mask_ratio self.seed = seed self.num_targets = num_targets self.distil = distil self.num_patches = (max(self.context_length, self.patch_length) - self.patch_length) // self.patch_stride + 1 # define seq_length so that it can pass the test_attention_outputs self.seq_length = self.num_patches def get_config(self): return PatchTSTConfig( prediction_length=self.prediction_length, patch_length=self.patch_length, patch_stride=self.patch_stride, num_input_channels=self.num_input_channels, d_model=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, ffn_dim=self.intermediate_size, dropout=self.hidden_dropout_prob, attention_dropout=self.attention_probs_dropout_prob, context_length=self.context_length, activation_function=self.hidden_act, seed=self.seed, num_targets=self.num_targets, mask_type=self.mask_type, random_mask_ratio=self.random_mask_ratio, ) def 
prepare_patchtst_inputs_dict(self, config): _past_length = config.context_length # bs, num_input_channels, num_patch, patch_len # [bs x seq_len x num_input_channels] past_values = floats_tensor([self.batch_size, _past_length, self.num_input_channels]) future_values = floats_tensor([self.batch_size, config.prediction_length, self.num_input_channels]) inputs_dict = { "past_values": past_values, "future_values": future_values, } return inputs_dict def prepare_config_and_inputs(self): config = self.get_config() inputs_dict = self.prepare_patchtst_inputs_dict(config) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict @require_torch class PatchTSTModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = ( ( PatchTSTModel, PatchTSTForPrediction, PatchTSTForPretraining, PatchTSTForClassification, PatchTSTForRegression, ) if is_torch_available() else () ) pipeline_model_mapping = {"feature-extraction": PatchTSTModel} if is_torch_available() else {} is_encoder_decoder = False test_pruning = False test_head_masking = False test_missing_keys = True test_torchscript = False test_inputs_embeds = False test_resize_embeddings = True test_resize_position_embeddings = False test_mismatched_shapes = True test_model_parallel = False has_attentions = True def setUp(self): self.model_tester = PatchTSTModelTester(self) self.config_tester = ConfigTester( self, config_class=PatchTSTConfig, has_text_modality=False, prediction_length=self.model_tester.prediction_length, ) def test_config(self): self.config_tester.run_common_tests() def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels) # if PatchTSTForPretraining if model_class == PatchTSTForPretraining: inputs_dict.pop("future_values") # else if classification model: elif model_class in get_values(MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING): rng = random.Random(self.model_tester.seed) labels = ids_tensor([self.model_tester.batch_size], self.model_tester.num_targets, rng=rng) inputs_dict["target_values"] = labels inputs_dict.pop("future_values") elif model_class in get_values(MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING): rng = random.Random(self.model_tester.seed) target_values = floats_tensor([self.model_tester.batch_size, self.model_tester.num_targets], rng=rng) inputs_dict["target_values"] = target_values inputs_dict.pop("future_values") return inputs_dict def test_save_load_strict(self): config, _ = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True) self.assertEqual(info["missing_keys"], []) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers ) self.assertEqual(len(hidden_states), expected_num_layers) num_patch = self.model_tester.num_patches self.assertListEqual( list(hidden_states[0].shape[-2:]), [num_patch, 
self.model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) @unittest.skip(reason="we have no tokens embeddings") def test_resize_tokens_embeddings(self): pass def test_model_main_input_name(self): model_signature = inspect.signature(getattr(PatchTSTModel, "forward")) # The main input is the name of the argument after `self` observed_main_input_name = list(model_signature.parameters.keys())[1] self.assertEqual(PatchTSTModel.main_input_name, observed_main_input_name) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] if model_class == PatchTSTForPretraining: expected_arg_names = [ "past_values", "past_observed_mask", ] elif model_class in get_values(MODEL_FOR_TIME_SERIES_CLASSIFICATION_MAPPING) or model_class in get_values( MODEL_FOR_TIME_SERIES_REGRESSION_MAPPING ): expected_arg_names = ["past_values", "target_values", "past_observed_mask"] else: expected_arg_names = [ "past_values", "past_observed_mask", "future_values", ] expected_arg_names.extend( [ "output_hidden_states", "output_attentions", "return_dict", ] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) @is_flaky() def test_retain_grad_hidden_states_attentions(self): super().test_retain_grad_hidden_states_attentions() @unittest.skip(reason="Model does not have input embeddings") def test_model_get_set_embeddings(self): pass def prepare_batch(repo_id="hf-internal-testing/etth1-hourly-batch", file="train-batch.pt"): file = hf_hub_download(repo_id=repo_id, filename=file, repo_type="dataset") check_torch_load_is_safe() batch = torch.load(file, map_location=torch_device, weights_only=True) return batch # Note: Pretrained model is not yet downloadable. @require_torch @slow class PatchTSTModelIntegrationTests(unittest.TestCase): # Publishing of pretrained weights are under internal review. Pretrained model is not yet downloadable. def test_pretrain_head(self): model = PatchTSTForPretraining.from_pretrained("namctin/patchtst_etth1_pretrain").to(torch_device) batch = prepare_batch() torch.manual_seed(0) with torch.no_grad(): output = model(past_values=batch["past_values"].to(torch_device)).prediction_output num_patch = ( max(model.config.context_length, model.config.patch_length) - model.config.patch_length ) // model.config.patch_stride + 1 expected_shape = torch.Size([64, model.config.num_input_channels, num_patch, model.config.patch_length]) self.assertEqual(output.shape, expected_shape) expected_slice = torch.tensor( [[[-0.0173]], [[-1.0379]], [[-0.1030]], [[0.3642]], [[0.1601]], [[-1.3136]], [[0.8780]]], device=torch_device, ) torch.testing.assert_close(output[0, :7, :1, :1], expected_slice, rtol=TOLERANCE, atol=TOLERANCE) # Publishing of pretrained weights are under internal review. Pretrained model is not yet downloadable. 
def test_prediction_head(self): model = PatchTSTForPrediction.from_pretrained("namctin/patchtst_etth1_forecast").to(torch_device) batch = prepare_batch(file="test-batch.pt") torch.manual_seed(0) with torch.no_grad(): output = model( past_values=batch["past_values"].to(torch_device), future_values=batch["future_values"].to(torch_device), ).prediction_outputs expected_shape = torch.Size([64, model.config.prediction_length, model.config.num_input_channels]) self.assertEqual(output.shape, expected_shape) expected_slice = torch.tensor( [[0.5142, 0.6928, 0.6118, 0.5724, -0.3735, -0.1336, -0.7124]], device=torch_device, ) torch.testing.assert_close(output[0, :1, :7], expected_slice, rtol=TOLERANCE, atol=TOLERANCE) def test_prediction_generation(self): model = PatchTSTForPrediction.from_pretrained("namctin/patchtst_etth1_forecast").to(torch_device) batch = prepare_batch(file="test-batch.pt") torch.manual_seed(0) with torch.no_grad(): outputs = model.generate(past_values=batch["past_values"].to(torch_device)) expected_shape = torch.Size((64, 1, model.config.prediction_length, model.config.num_input_channels)) self.assertEqual(outputs.sequences.shape, expected_shape) expected_slice = torch.tensor( [[0.4075, 0.3716, 0.4786, 0.2842, -0.3107, -0.0569, -0.7489]], device=torch_device, ) mean_prediction = outputs.sequences.mean(dim=1) torch.testing.assert_close(mean_prediction[0, -1:], expected_slice, rtol=TOLERANCE, atol=TOLERANCE) def test_regression_generation(self): model = PatchTSTForRegression.from_pretrained("ibm/patchtst-etth1-regression-distribution").to(torch_device) batch = prepare_batch(repo_id="ibm/patchtst-etth1-test-data", file="regression_distribution_batch.pt") torch.manual_seed(0) model.eval() with torch.no_grad(): outputs = model.generate(past_values=batch["past_values"].to(torch_device)) expected_shape = torch.Size((64, model.config.num_parallel_samples, model.config.num_targets)) self.assertEqual(outputs.sequences.shape, expected_shape) expected_slice = torch.tensor( [[-0.08046409], [-0.06570087], [-0.28218266], [-0.20636195], [-0.11787311]], device=torch_device, ) mean_prediction = outputs.sequences.mean(dim=1) torch.testing.assert_close(mean_prediction[-5:], expected_slice, rtol=TOLERANCE, atol=TOLERANCE)
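# ----------------------------------------------------------------------------
# Illustrative sketch (editor addition, not part of the test file above).
# The model tester and the pretraining integration test both derive the patch
# count from
#     num_patches = (max(context_length, patch_length) - patch_length) // patch_stride + 1
# A tiny worked example with the tester defaults (context_length=14, patch_length=5, patch_stride=5):
context_length, patch_length, patch_stride = 14, 5, 5
num_patches = (max(context_length, patch_length) - patch_length) // patch_stride + 1
assert num_patches == 2  # (14 - 5) // 5 + 1 = 1 + 1 = 2, which is also the expected sequence length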
transformers/tests/models/patchtst/test_modeling_patchtst.py/0
{ "file_path": "transformers/tests/models/patchtst/test_modeling_patchtst.py", "repo_id": "transformers", "token_count": 6898 }
596
# Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Persimmon model.""" import gc import unittest from transformers import PersimmonConfig, is_torch_available from transformers.testing_utils import ( backend_empty_cache, require_bitsandbytes, require_torch, require_torch_accelerator, require_torch_fp16, slow, torch_device, ) if is_torch_available(): import torch from transformers import ( AutoTokenizer, PersimmonForCausalLM, PersimmonForSequenceClassification, PersimmonForTokenClassification, PersimmonModel, ) from ...causal_lm_tester import CausalLMModelTest, CausalLMModelTester class PersimmonModelTester(CausalLMModelTester): if is_torch_available(): config_class = PersimmonConfig base_model_class = PersimmonModel causal_lm_class = PersimmonForCausalLM sequence_class = PersimmonForSequenceClassification token_class = PersimmonForTokenClassification @require_torch class PersimmonModelTest(CausalLMModelTest, unittest.TestCase): model_tester_class = PersimmonModelTester all_model_classes = ( (PersimmonModel, PersimmonForCausalLM, PersimmonForSequenceClassification, PersimmonForTokenClassification) if is_torch_available() else () ) pipeline_model_mapping = ( { "feature-extraction": PersimmonModel, "text-classification": PersimmonForSequenceClassification, "token-classification": PersimmonForTokenClassification, # TODO (ydshieh): check why these two fail. Fix them or skip them in a better way. 
# "text-generation": PersimmonForCausalLM, # "zero-shot": PersimmonForSequenceClassification, } if is_torch_available() else {} ) model_tester_class = PersimmonModelTester test_headmasking = False test_pruning = False @unittest.skip("Persimmon applies key/query norm which doesn't work with packing") def test_eager_padding_matches_padding_free_with_position_ids(self): pass @unittest.skip("Persimmon applies key/query norm which doesn't work with packing") def test_sdpa_padding_matches_padding_free_with_position_ids(self): pass @require_torch class PersimmonIntegrationTest(unittest.TestCase): @slow @require_torch_accelerator @require_bitsandbytes def test_model_8b_chat_logits(self): input_ids = [1, 306, 4658, 278, 6593, 310, 2834, 338] model = PersimmonForCausalLM.from_pretrained( "adept/persimmon-8b-chat", load_in_8bit=True, device_map={"": 0}, dtype=torch.float16 ) out = model(torch.tensor([input_ids], device=torch_device)).logits.float() EXPECTED_MEAN = torch.tensor( [[-11.4726, -11.1495, -11.2694, -11.2223, -10.9452, -11.0663, -11.0031, -11.1028]] ) # change dtype to `torch.float32` before calling `mean` to avoid `nan` values torch.testing.assert_close(out.cpu().to(torch.float32).mean(-1), EXPECTED_MEAN, rtol=1e-4, atol=1e-4) # fmt: off EXPECTED_SLICE = torch.tensor( [-16.9062, -16.9062, -16.9062, -16.9062, -16.8906, -16.9062, -16.9531, -16.9062, -16.9062, -16.9062, -16.9531, -16.9062, -16.9531, -16.9062, -16.9062, -16.9062, -16.9062, -16.9062, -16.9531, -16.9062, -16.9062, -16.9062, -16.9062, -16.9062, -16.9062, -16.9531, -16.9062, -16.9531, -16.9062, -16.9062], dtype=torch.float16 ) # fmt: on torch.testing.assert_close(out.cpu()[0, 0, :30], EXPECTED_SLICE, rtol=1e-5, atol=1e-5) backend_empty_cache(torch_device) del model gc.collect() @slow @require_torch_accelerator @require_torch_fp16 @require_bitsandbytes def test_model_8b_chat_greedy_generation(self): EXPECTED_TEXT_COMPLETION = """human: Simply put, the theory of relativity states that?\n\nadept: The theory of relativity states that the laws of physics are the same for all observers, regardless of their relative motion.""" prompt = "human: Simply put, the theory of relativity states that?\n\nadept:" tokenizer = AutoTokenizer.from_pretrained("adept/persimmon-8b-chat", use_fast=False) input_ids = tokenizer.encode(prompt, return_tensors="pt").to(torch_device) model = PersimmonForCausalLM.from_pretrained( "adept/persimmon-8b-chat", load_in_8bit=True, device_map={"": 0}, dtype=torch.float16 ) # greedy generation outputs generated_ids = model.generate(input_ids, max_new_tokens=64) text = tokenizer.decode(generated_ids[0], skip_special_tokens=True) self.assertEqual(EXPECTED_TEXT_COMPLETION, text) backend_empty_cache(torch_device) del model gc.collect()
transformers/tests/models/persimmon/test_modeling_persimmon.py/0
{ "file_path": "transformers/tests/models/persimmon/test_modeling_persimmon.py", "repo_id": "transformers", "token_count": 2278 }
597
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import shutil import tempfile import unittest import pytest from transformers.testing_utils import require_torch, require_vision from transformers.utils import is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_vision_available(): from transformers import ( AutoProcessor, Pix2StructImageProcessor, Pix2StructProcessor, PreTrainedTokenizerFast, T5Tokenizer, ) @require_vision @require_torch class Pix2StructProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = Pix2StructProcessor text_input_name = "decoder_input_ids" images_input_name = "flattened_patches" @classmethod def setUpClass(cls): cls.tmpdirname = tempfile.mkdtemp() image_processor = Pix2StructImageProcessor() tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small") processor = Pix2StructProcessor(image_processor, tokenizer) processor.save_pretrained(cls.tmpdirname) def get_tokenizer(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).tokenizer def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor @classmethod def tearDownClass(cls): shutil.rmtree(cls.tmpdirname, ignore_errors=True) def test_save_load_pretrained_additional_features(self): with tempfile.TemporaryDirectory() as tmpdir: processor = Pix2StructProcessor(tokenizer=self.get_tokenizer(), image_processor=self.get_image_processor()) processor.save_pretrained(tmpdir) tokenizer_add_kwargs = self.get_tokenizer(bos_token="(BOS)", eos_token="(EOS)") image_processor_add_kwargs = self.get_image_processor(do_normalize=False, padding_value=1.0) processor = Pix2StructProcessor.from_pretrained( tmpdir, bos_token="(BOS)", eos_token="(EOS)", do_normalize=False, padding_value=1.0 ) self.assertEqual(processor.tokenizer.get_vocab(), tokenizer_add_kwargs.get_vocab()) self.assertIsInstance(processor.tokenizer, PreTrainedTokenizerFast) self.assertEqual(processor.image_processor.to_json_string(), image_processor_add_kwargs.to_json_string()) self.assertIsInstance(processor.image_processor, Pix2StructImageProcessor) def test_image_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = Pix2StructProcessor(tokenizer=tokenizer, image_processor=image_processor) image_input = self.prepare_image_inputs() input_feat_extract = image_processor(image_input, return_tensors="np") input_processor = processor(images=image_input, return_tensors="np") for key in input_feat_extract: self.assertAlmostEqual(input_feat_extract[key].sum(), input_processor[key].sum(), delta=1e-2) def test_tokenizer(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = Pix2StructProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = self.prepare_text_inputs() encoded_processor = processor(text=input_str) encoded_tok = tokenizer(input_str, return_token_type_ids=False, 
add_special_tokens=True) for key in encoded_tok: self.assertListEqual(encoded_tok[key], encoded_processor[key]) def test_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = Pix2StructProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = self.prepare_text_inputs() image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) self.assertListEqual( list(inputs.keys()), ["flattened_patches", "attention_mask", "decoder_attention_mask", "decoder_input_ids"] ) # test if it raises when no input is passed with pytest.raises(ValueError): processor() def test_processor_max_patches(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = Pix2StructProcessor(tokenizer=tokenizer, image_processor=image_processor) input_str = self.prepare_text_inputs() image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) max_patches = [512, 1024, 2048, 4096] expected_hidden_size = [770, 770, 770, 770] # with text for i, max_patch in enumerate(max_patches): inputs = processor(text=input_str, images=image_input, max_patches=max_patch) self.assertEqual(inputs["flattened_patches"][0].shape[0], max_patch) self.assertEqual(inputs["flattened_patches"][0].shape[1], expected_hidden_size[i]) # without text input for i, max_patch in enumerate(max_patches): inputs = processor(images=image_input, max_patches=max_patch) self.assertEqual(inputs["flattened_patches"][0].shape[0], max_patch) self.assertEqual(inputs["flattened_patches"][0].shape[1], expected_hidden_size[i]) def test_tokenizer_decode(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() processor = Pix2StructProcessor(tokenizer=tokenizer, image_processor=image_processor) predicted_ids = [[1, 4, 5, 8, 1, 0, 8], [3, 4, 3, 1, 1, 8, 9]] decoded_processor = processor.batch_decode(predicted_ids) decoded_tok = tokenizer.batch_decode(predicted_ids) self.assertListEqual(decoded_tok, decoded_processor) @require_torch @require_vision def test_image_processor_defaults_preserved_by_image_kwargs(self): # Rewrite as pix2struct processor return "flattened_patches" and not "pixel_values" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor", max_patches=1024, patch_size={"height": 8, "width": 8}) tokenizer = self.get_component("tokenizer", max_length=117, padding="max_length") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) self.skip_processor_without_typed_kwargs(processor) input_str = self.prepare_text_inputs() image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input) self.assertEqual(len(inputs["flattened_patches"][0][0]), 194) @require_torch @require_vision def test_kwargs_overrides_default_image_processor_kwargs(self): # Rewrite as pix2struct processor return "flattened_patches" and not "pixel_values" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor", max_patches=4096) tokenizer = self.get_component("tokenizer", max_length=117, padding="max_length") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) self.skip_processor_without_typed_kwargs(processor) input_str = 
self.prepare_text_inputs() image_input = self.prepare_image_inputs() inputs = processor(text=input_str, images=image_input, max_patches=1024) self.assertEqual(len(inputs["flattened_patches"][0]), 1024) @require_torch @require_vision def test_unstructured_kwargs(self): # Rewrite as pix2struct processor return "decoder_input_ids" and not "input_ids" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") tokenizer = self.get_component("tokenizer") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) self.skip_processor_without_typed_kwargs(processor) input_str = self.prepare_text_inputs() image_input = self.prepare_image_inputs() inputs = processor( text=input_str, images=image_input, return_tensors="pt", max_patches=1024, padding="max_length", max_length=76, ) self.assertEqual(inputs["flattened_patches"].shape[1], 1024) self.assertEqual(len(inputs["decoder_input_ids"][0]), 76) @require_torch @require_vision def test_unstructured_kwargs_batched(self): # Rewrite as pix2struct processor return "decoder_input_ids" and not "input_ids" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") tokenizer = self.get_component("tokenizer") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) self.skip_processor_without_typed_kwargs(processor) input_str = self.prepare_text_inputs(batch_size=2) image_input = self.prepare_image_inputs(batch_size=2) inputs = processor( text=input_str, images=image_input, return_tensors="pt", max_patches=1024, padding="longest", max_length=76, ) self.assertEqual(inputs["flattened_patches"].shape[1], 1024) self.assertEqual(len(inputs["decoder_input_ids"][0]), 5) @require_torch @require_vision def test_structured_kwargs_nested(self): # Rewrite as pix2struct processor return "decoder_input_ids" and not "input_ids" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") tokenizer = self.get_component("tokenizer") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) self.skip_processor_without_typed_kwargs(processor) input_str = self.prepare_text_inputs() image_input = self.prepare_image_inputs() # Define the kwargs for each modality all_kwargs = { "common_kwargs": {"return_tensors": "pt"}, "images_kwargs": {"max_patches": 1024}, "text_kwargs": {"padding": "max_length", "max_length": 76}, } inputs = processor(text=input_str, images=image_input, **all_kwargs) self.skip_processor_without_typed_kwargs(processor) self.assertEqual(inputs["flattened_patches"].shape[1], 1024) self.assertEqual(len(inputs["decoder_input_ids"][0]), 76) @require_torch @require_vision def test_structured_kwargs_nested_from_dict(self): # Rewrite as pix2struct processor return "decoder_input_ids" and not "input_ids" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") tokenizer = self.get_component("tokenizer") processor = self.processor_class(tokenizer=tokenizer, image_processor=image_processor) 
self.skip_processor_without_typed_kwargs(processor) input_str = self.prepare_text_inputs() image_input = self.prepare_image_inputs() # Define the kwargs for each modality all_kwargs = { "common_kwargs": {"return_tensors": "pt"}, "images_kwargs": {"max_patches": 1024}, "text_kwargs": {"padding": "max_length", "max_length": 76}, } inputs = processor(text=input_str, images=image_input, **all_kwargs) self.assertEqual(inputs["flattened_patches"].shape[1], 1024) self.assertEqual(len(inputs["decoder_input_ids"][0]), 76)
transformers/tests/models/pix2struct/test_processing_pix2struct.py/0
{ "file_path": "transformers/tests/models/pix2struct/test_processing_pix2struct.py", "repo_id": "transformers", "token_count": 5152 }
598
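# Illustrative sketch (hypothetical; not part of the test file above): a minimal,
# self-contained use of the Pix2Struct processor pattern those tests exercise.
# Assumes sentencepiece is installed and the google-t5/t5-small tokenizer can be
# downloaded; the image is a synthetic placeholder rather than real data.
from PIL import Image

from transformers import Pix2StructImageProcessor, Pix2StructProcessor, T5Tokenizer

image_processor = Pix2StructImageProcessor()
tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
processor = Pix2StructProcessor(image_processor=image_processor, tokenizer=tokenizer)

image = Image.new("RGB", (256, 256), color=(127, 127, 127))  # synthetic grey image
inputs = processor(text="describe the figure", images=image, max_patches=1024, return_tensors="pt")

# As asserted in test_processor above, text + image inputs yield flattened_patches,
# attention_mask, decoder_attention_mask and decoder_input_ids.
print(sorted(inputs.keys()))
print(inputs["flattened_patches"].shape)  # (batch, max_patches, patch_dim)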
# coding=utf-8 # Copyright 2025 The Qwen team, Alibaba Group and the HuggingFace Inc. team. All rights reserved. # # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import shutil import tempfile import unittest import numpy as np import pytest from huggingface_hub import hf_hub_download from transformers import ( AutoProcessor, Qwen2_5OmniProcessor, Qwen2TokenizerFast, WhisperFeatureExtractor, ) from transformers.testing_utils import ( require_av, require_librosa, require_torch, require_torchaudio, require_torchvision, require_vision, ) from transformers.utils import is_torch_available, is_vision_available from ...test_processing_common import ProcessorTesterMixin if is_torch_available(): import torch if is_vision_available(): from transformers import Qwen2VLImageProcessorFast @require_vision @require_torch @require_torchaudio @require_torchvision class Qwen2_5OmniProcessorTest(ProcessorTesterMixin, unittest.TestCase): processor_class = Qwen2_5OmniProcessor # text + audio kwargs testing @require_torch def test_tokenizer_defaults_preserved_by_kwargs_audio(self): if "feature_extractor" not in self.processor_class.attributes: self.skipTest(f"feature_extractor attribute not present in {self.processor_class}") feature_extractor = self.get_component("feature_extractor") if hasattr(self, "get_tokenizer"): tokenizer = self.get_tokenizer(max_length=800, padding="max_length") elif hasattr(self, "get_component"): tokenizer = self.get_component("tokenizer", max_length=800, padding="max_length") else: self.assertTrue(False, "Processor doesn't have get_tokenizer or get_component defined") if not tokenizer.pad_token: tokenizer.pad_token = "[TEST_PAD]" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") video_processor = self.get_component("video_processor") processor = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) self.skip_processor_without_typed_kwargs(processor) input_str = "lower newer" raw_speech = self.prepare_audio_inputs() inputs = processor(text=input_str, audio=raw_speech, return_tensors="pt") if "input_ids" in inputs: self.assertEqual(len(inputs["input_ids"][0]), 800) elif "labels" in inputs: self.assertEqual(len(inputs["labels"][0]), 800) @require_torch @require_vision def test_structured_kwargs_audio_nested(self): if "feature_extractor" not in self.processor_class.attributes: self.skipTest(f"feature_extractor attribute not present in {self.processor_class}") feature_extractor = self.get_component("feature_extractor") if hasattr(self, "get_tokenizer"): tokenizer = self.get_tokenizer() elif hasattr(self, "get_component"): tokenizer = self.get_component("tokenizer") if not tokenizer.pad_token: tokenizer.pad_token = "[TEST_PAD]" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in 
{self.processor_class}") image_processor = self.get_component("image_processor") video_processor = self.get_component("video_processor") processor = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) self.skip_processor_without_typed_kwargs(processor) input_str = ["lower newer"] raw_speech = self.prepare_audio_inputs() # Define the kwargs for each modality all_kwargs = { "common_kwargs": {"return_tensors": "pt"}, "audio_kwargs": {"max_length": 800}, } inputs = processor(text=input_str, audio=raw_speech, **all_kwargs) if "input_ids" in inputs: self.assertEqual(len(inputs["input_ids"][0]), 2) elif "labels" in inputs: self.assertEqual(len(inputs["labels"][0]), 2) @require_torch def test_unstructured_kwargs_audio(self): if "feature_extractor" not in self.processor_class.attributes: self.skipTest(f"feature_extractor attribute not present in {self.processor_class}") feature_extractor = self.get_component("feature_extractor") if hasattr(self, "get_tokenizer"): tokenizer = self.get_tokenizer(max_length=117) elif hasattr(self, "get_component"): tokenizer = self.get_component("tokenizer", max_length=117) if not tokenizer.pad_token: tokenizer.pad_token = "[TEST_PAD]" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") video_processor = self.get_component("video_processor") processor = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) self.skip_processor_without_typed_kwargs(processor) input_str = "lower newer" raw_speech = self.prepare_audio_inputs() inputs = processor( text=input_str, audio=raw_speech, return_tensors="pt", padding="max_length", max_length=800, ) if "input_ids" in inputs: self.assertEqual(len(inputs["input_ids"][0]), 800) elif "labels" in inputs: self.assertEqual(len(inputs["labels"][0]), 800) @require_torch def test_doubly_passed_kwargs_audio(self): if "feature_extractor" not in self.processor_class.attributes: self.skipTest(f"feature_extractor attribute not present in {self.processor_class}") feature_extractor = self.get_component("feature_extractor") if hasattr(self, "get_tokenizer"): tokenizer = self.get_tokenizer() elif hasattr(self, "get_component"): tokenizer = self.get_component("tokenizer") if not tokenizer.pad_token: tokenizer.pad_token = "[TEST_PAD]" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") video_processor = self.get_component("video_processor") _ = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) # Why delete test? 
TODO: raushan double check tests after cleaning model @require_torch def test_kwargs_overrides_default_tokenizer_kwargs_audio(self): if "feature_extractor" not in self.processor_class.attributes: self.skipTest(f"feature_extractor attribute not present in {self.processor_class}") feature_extractor = self.get_component("feature_extractor") if hasattr(self, "get_tokenizer"): tokenizer = self.get_tokenizer(max_length=117) elif hasattr(self, "get_component"): tokenizer = self.get_component("tokenizer", max_length=117) if not tokenizer.pad_token: tokenizer.pad_token = "[TEST_PAD]" if "image_processor" not in self.processor_class.attributes: self.skipTest(f"image_processor attribute not present in {self.processor_class}") image_processor = self.get_component("image_processor") video_processor = self.get_component("video_processor") _ = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) @classmethod def setUpClass(cls): cls.tmpdirname = tempfile.mkdtemp() processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B") processor.save_pretrained(cls.tmpdirname) def get_tokenizer(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).tokenizer def get_image_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).image_processor def get_video_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).video_processor def get_feature_extractor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs).feature_extractor def get_processor(self, **kwargs): return AutoProcessor.from_pretrained(self.tmpdirname, **kwargs) @classmethod def tearDownClass(cls): shutil.rmtree(cls.tmpdirname, ignore_errors=True) def prepare_audio_inputs(self): """This function prepares a list of numpy audios.""" audio_inputs = [np.random.rand(160000) * 2 - 1] * 3 # batch-size=3 return audio_inputs def test_save_load_pretrained_default(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() feature_extractor = self.get_feature_extractor() video_processor = self.get_video_processor() processor = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) processor.save_pretrained(self.tmpdirname) processor = Qwen2_5OmniProcessor.from_pretrained(self.tmpdirname, use_fast=True) self.assertEqual(processor.tokenizer.get_vocab(), tokenizer.get_vocab()) self.assertEqual(processor.image_processor.to_json_string(), image_processor.to_json_string()) self.assertEqual(processor.feature_extractor.to_json_string(), feature_extractor.to_json_string()) self.assertIsInstance(processor.tokenizer, Qwen2TokenizerFast) self.assertIsInstance(processor.image_processor, Qwen2VLImageProcessorFast) self.assertIsInstance(processor.feature_extractor, WhisperFeatureExtractor) def test_image_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() feature_extractor = self.get_feature_extractor() video_processor = self.get_video_processor() processor = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) image_input = self.prepare_image_inputs() input_image_proc = image_processor(image_input, return_tensors="pt") input_processor = processor(images=image_input, text="dummy", return_tensors="pt") for 
key in input_image_proc: self.assertAlmostEqual(input_image_proc[key].sum(), input_processor[key].sum(), delta=1e-2) def test_processor(self): image_processor = self.get_image_processor() tokenizer = self.get_tokenizer() feature_extractor = self.get_feature_extractor() video_processor = self.get_video_processor() processor = self.processor_class( tokenizer=tokenizer, video_processor=video_processor, feature_extractor=feature_extractor, image_processor=image_processor, ) input_str = "lower newer" image_input = self.prepare_image_inputs() audio_input = self.prepare_audio_inputs() inputs = processor(text=input_str, images=image_input, audio=audio_input) keys = list(inputs.keys()) self.assertListEqual( keys, [ "input_ids", "attention_mask", "pixel_values", "image_grid_thw", "feature_attention_mask", "input_features", ], ) # test if it raises when no input is passed with pytest.raises(ValueError): processor() # test if it raises when no text is passed with pytest.raises(ValueError): processor(images=image_input) @require_torch def _test_apply_chat_template( self, modality: str, batch_size: int, return_tensors: str, input_name: str, processor_name: str, input_data: list[str], ): processor = self.get_processor() if processor.chat_template is None: self.skipTest("Processor has no chat template") if processor_name not in self.processor_class.attributes: self.skipTest(f"{processor_name} attribute not present in {self.processor_class}") batch_messages = [ [ { "role": "user", "content": [{"type": "text", "text": "Describe this."}], }, ] ] * batch_size # Test that jinja can be applied formatted_prompt = processor.apply_chat_template(batch_messages, add_generation_prompt=True, tokenize=False) self.assertEqual(len(formatted_prompt), batch_size) # Test that tokenizing with template and directly with `self.tokenizer` gives same output formatted_prompt_tokenized = processor.apply_chat_template( batch_messages, add_generation_prompt=True, tokenize=True, return_tensors=return_tensors ) add_special_tokens = True if processor.tokenizer.bos_token is not None and formatted_prompt[0].startswith(processor.tokenizer.bos_token): add_special_tokens = False tok_output = processor.tokenizer( formatted_prompt, return_tensors=return_tensors, add_special_tokens=add_special_tokens ) expected_output = tok_output.input_ids self.assertListEqual(expected_output.tolist(), formatted_prompt_tokenized.tolist()) # Test that kwargs passed to processor's `__call__` are actually used tokenized_prompt_100 = processor.apply_chat_template( batch_messages, add_generation_prompt=True, tokenize=True, padding="max_length", truncation=True, return_tensors=return_tensors, max_length=100, ) self.assertEqual(len(tokenized_prompt_100[0]), 100) # Test that `return_dict=True` returns text related inputs in the dict out_dict_text = processor.apply_chat_template( batch_messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors=return_tensors, ) self.assertTrue(all(key in out_dict_text for key in ["input_ids", "attention_mask"])) self.assertEqual(len(out_dict_text["input_ids"]), batch_size) self.assertEqual(len(out_dict_text["attention_mask"]), batch_size) # Test that with modality URLs and `return_dict=True`, we get modality inputs in the dict for idx, url in enumerate(input_data[:batch_size]): batch_messages[idx][0]["content"] = [batch_messages[idx][0]["content"][0], {"type": modality, "url": url}] out_dict = processor.apply_chat_template( batch_messages, add_generation_prompt=True, tokenize=True, return_dict=True, 
return_tensors=return_tensors, num_frames=2, # by default no more than 2 frames, otherwise too slow ) input_name = getattr(self, input_name) self.assertTrue(input_name in out_dict) self.assertEqual(len(out_dict["input_ids"]), batch_size) self.assertEqual(len(out_dict["attention_mask"]), batch_size) if modality == "video": # qwen pixels don't scale with bs same way as other models, calculate expected video token count based on video_grid_thw expected_video_token_count = 0 for thw in out_dict["video_grid_thw"]: expected_video_token_count += thw[0] * thw[1] * thw[2] mm_len = expected_video_token_count else: mm_len = batch_size * 1564 self.assertEqual(len(out_dict[input_name]), mm_len) return_tensor_to_type = {"pt": torch.Tensor, "np": np.ndarray, None: list} for k in out_dict: self.assertIsInstance(out_dict[k], return_tensor_to_type[return_tensors]) @require_av def test_apply_chat_template_video_frame_sampling(self): processor = self.get_processor() if processor.chat_template is None: self.skipTest("Processor has no chat template") signature = inspect.signature(processor.__call__) if "videos" not in {*signature.parameters.keys()} or ( signature.parameters.get("videos") is not None and signature.parameters["videos"].annotation == inspect._empty ): self.skipTest("Processor doesn't accept videos at input") messages = [ [ { "role": "user", "content": [ {"type": "text", "text": "What is shown in this video?"}, ], }, ] ] formatted_prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) self.assertEqual(len(formatted_prompt), 1) formatted_prompt_tokenized = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True) expected_output = processor.tokenizer(formatted_prompt, return_tensors=None).input_ids self.assertListEqual(expected_output, formatted_prompt_tokenized) out_dict = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True) self.assertListEqual(list(out_dict.keys()), ["input_ids", "attention_mask"]) # Add video URL for return dict and load with `num_frames` arg messages[0][0]["content"].append( { "type": "video", "url": "https://huggingface.co/datasets/raushan-testing-hf/videos-test/resolve/main/Big_Buck_Bunny_720_10s_10MB.mp4", } ) num_frames = 3 out_dict_with_video = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, num_frames=num_frames, ) self.assertTrue(self.videos_input_name in out_dict_with_video) self.assertEqual(len(out_dict_with_video[self.videos_input_name]), 5760) # Load with `fps` arg fps = 1 out_dict_with_video = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, fps=fps, ) self.assertTrue(self.videos_input_name in out_dict_with_video) self.assertEqual(len(out_dict_with_video[self.videos_input_name]), 14400) # Load with `fps` and `num_frames` args, should raise an error with self.assertRaises(ValueError): out_dict_with_video = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, fps=fps, num_frames=num_frames, ) # Load without any arg should load the whole video out_dict_with_video = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, ) self.assertTrue(self.videos_input_name in out_dict_with_video) self.assertEqual(len(out_dict_with_video[self.videos_input_name]), 432000) # Load video as a list of frames (i.e. images). 
NOTE: each frame should have same size # because we assume they come from one video messages[0][0]["content"][-1] = { "type": "video", "url": [ "https://www.ilankelman.org/stopsigns/australia.jpg", "https://www.ilankelman.org/stopsigns/australia.jpg", ], } out_dict_with_video = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, ) self.assertTrue(self.videos_input_name in out_dict_with_video) self.assertEqual(len(out_dict_with_video[self.videos_input_name]), 2904) @require_librosa @require_av @unittest.skip( "@raushan: librosa can'r decode this audio in CI runner, fix after adding moviepy or another decoder" ) def test_chat_template_audio_from_video(self): processor = self.get_processor() if processor.chat_template is None: self.skipTest("Processor has no chat template") signature = inspect.signature(processor.__call__) if "videos" not in {*signature.parameters.keys()} or ( signature.parameters.get("videos") is not None and signature.parameters["videos"].annotation == inspect._empty ): self.skipTest(f"{self.processor_class} does not support video inputs") if "feature_extractor" not in self.processor_class.attributes: self.skipTest(f"feature_extractor attribute not present in {self.processor_class}") video_file_path = hf_hub_download( repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset" ) messages = [ { "role": "user", "content": [ {"type": "video", "path": video_file_path}, {"type": "text", "text": "Which of these animals is making the sound?"}, ], }, { "role": "assistant", "content": [{"type": "text", "text": "It is a cow."}], }, { "role": "user", "content": [ {"type": "text", "text": "Tell me all about this animal."}, ], }, ] formatted_prompt = processor.apply_chat_template([messages], add_generation_prompt=True, tokenize=False) self.assertEqual(len(formatted_prompt), 1) # batch size=1 out_dict = processor.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="np", load_audio_from_video=True, ) self.assertTrue(self.audio_input_name in out_dict) self.assertTrue(self.videos_input_name in out_dict) # should always have input_ids and attention_mask self.assertEqual(len(out_dict["input_ids"]), 1) # batch-size=1 self.assertEqual(len(out_dict["attention_mask"]), 1) # batch-size=1 self.assertEqual(len(out_dict[self.audio_input_name]), 1) # 1 audio in the conversation self.assertEqual(len(out_dict[self.videos_input_name]), 145912) # 1 video in the conversation
transformers/tests/models/qwen2_5_omni/test_processing_qwen2_5_omni.py/0
{ "file_path": "transformers/tests/models/qwen2_5_omni/test_processing_qwen2_5_omni.py", "repo_id": "transformers", "token_count": 10696 }
599
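# Illustrative sketch (hypothetical; not part of the test file above): the
# text + audio call pattern exercised by the Qwen2.5-Omni processor tests.
# Assumes the "Qwen/Qwen2.5-Omni-7B" processor files can be downloaded; the audio
# is random noise shaped like prepare_audio_inputs() above, not a real recording.
import numpy as np

from transformers import Qwen2_5OmniProcessor

processor = Qwen2_5OmniProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Describe this."}],
    }
]
# Render the chat template to a prompt string, then tokenize it together with audio.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

raw_audio = [np.random.rand(160000) * 2 - 1]  # synthetic stand-in for a real clip
inputs = processor(text=prompt, audio=raw_audio, return_tensors="pt")

# Expected to include input_ids, attention_mask, input_features and
# feature_attention_mask, mirroring the key checks in test_processor above.
print(sorted(inputs.keys()))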
# Copyright 2023 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import RTDetrResNetConfig from transformers.testing_utils import require_torch from transformers.utils.import_utils import is_torch_available from ...test_backbone_common import BackboneTesterMixin from ...test_modeling_common import floats_tensor, ids_tensor if is_torch_available(): from transformers import RTDetrResNetBackbone class RTDetrResNetModelTester: def __init__( self, parent, batch_size=3, image_size=32, num_channels=3, embeddings_size=10, hidden_sizes=[10, 20, 30, 40], depths=[1, 1, 2, 1], is_training=True, use_labels=True, hidden_act="relu", num_labels=3, scope=None, out_features=["stage2", "stage3", "stage4"], out_indices=[2, 3, 4], ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.num_channels = num_channels self.embeddings_size = embeddings_size self.hidden_sizes = hidden_sizes self.depths = depths self.is_training = is_training self.use_labels = use_labels self.hidden_act = hidden_act self.num_labels = num_labels self.scope = scope self.num_stages = len(hidden_sizes) self.out_features = out_features self.out_indices = out_indices def prepare_config_and_inputs(self): pixel_values = floats_tensor([self.batch_size, self.num_channels, self.image_size, self.image_size]) labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.num_labels) config = self.get_config() return config, pixel_values, labels def get_config(self): return RTDetrResNetConfig( num_channels=self.num_channels, embeddings_size=self.embeddings_size, hidden_sizes=self.hidden_sizes, depths=self.depths, hidden_act=self.hidden_act, num_labels=self.num_labels, out_features=self.out_features, out_indices=self.out_indices, ) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class RTDetrResNetBackboneTest(BackboneTesterMixin, unittest.TestCase): all_model_classes = (RTDetrResNetBackbone,) if is_torch_available() else () has_attentions = False config_class = RTDetrResNetConfig def setUp(self): self.model_tester = RTDetrResNetModelTester(self)
transformers/tests/models/rt_detr/test_modeling_rt_detr_resnet.py/0
{ "file_path": "transformers/tests/models/rt_detr/test_modeling_rt_detr_resnet.py", "repo_id": "transformers", "token_count": 1387 }
600
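# Illustrative sketch (hypothetical; not part of the test file above): a forward
# pass through RTDetrResNetBackbone using the same toy sizes as
# RTDetrResNetModelTester, so no checkpoint download is needed. Requires torch.
import torch

from transformers import RTDetrResNetBackbone, RTDetrResNetConfig

config = RTDetrResNetConfig(
    num_channels=3,
    hidden_sizes=[10, 20, 30, 40],
    depths=[1, 1, 2, 1],
    hidden_act="relu",
    out_features=["stage2", "stage3", "stage4"],
    out_indices=[2, 3, 4],
)
backbone = RTDetrResNetBackbone(config).eval()

pixel_values = torch.rand(3, 3, 32, 32)  # (batch, channels, height, width), as in the tester
with torch.no_grad():
    outputs = backbone(pixel_values)

# One feature map per requested stage; channel counts follow the selected hidden_sizes.
for name, feature_map in zip(backbone.out_features, outputs.feature_maps):
    print(name, tuple(feature_map.shape))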
# Copyright 2024 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np from datasets import load_dataset from transformers.testing_utils import require_torch, require_vision, slow from transformers.utils import is_torch_available, is_vision_available from ...test_image_processing_common import ImageProcessingTestMixin, prepare_image_inputs if is_torch_available(): import torch from transformers.models.seggpt.modeling_seggpt import SegGptImageSegmentationOutput if is_vision_available(): from PIL import Image from transformers import SegGptImageProcessor class SegGptImageProcessingTester: def __init__( self, parent, batch_size=7, num_channels=3, image_size=18, min_resolution=30, max_resolution=400, do_resize=True, size=None, do_normalize=True, image_mean=[0.5, 0.5, 0.5], image_std=[0.5, 0.5, 0.5], ): size = size if size is not None else {"height": 18, "width": 18} self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.image_size = image_size self.min_resolution = min_resolution self.max_resolution = max_resolution self.do_resize = do_resize self.size = size self.do_normalize = do_normalize self.image_mean = image_mean self.image_std = image_std def prepare_image_processor_dict(self): return { "image_mean": self.image_mean, "image_std": self.image_std, "do_normalize": self.do_normalize, "do_resize": self.do_resize, "size": self.size, } def expected_output_image_shape(self, images): return self.num_channels, self.size["height"], self.size["width"] def expected_post_processed_shape(self): return self.size["height"] // 2, self.size["width"] def get_fake_image_segmentation_output(self): torch.manual_seed(42) return SegGptImageSegmentationOutput( pred_masks=torch.rand(self.batch_size, self.num_channels, self.size["height"], self.size["width"]) ) def prepare_image_inputs(self, equal_resolution=False, numpify=False, torchify=False): return prepare_image_inputs( batch_size=self.batch_size, num_channels=self.num_channels, min_resolution=self.min_resolution, max_resolution=self.max_resolution, equal_resolution=equal_resolution, numpify=numpify, torchify=torchify, ) def prepare_mask(): ds = load_dataset("EduardoPacheco/seggpt-example-data")["train"] return ds[0]["mask"].convert("L") def prepare_img(): ds = load_dataset("EduardoPacheco/seggpt-example-data")["train"] images = [image.convert("RGB") for image in ds["image"]] masks = [image.convert("RGB") for image in ds["mask"]] return images, masks @require_torch @require_vision class SegGptImageProcessingTest(ImageProcessingTestMixin, unittest.TestCase): image_processing_class = SegGptImageProcessor if is_vision_available() else None def setUp(self): super().setUp() self.image_processor_tester = SegGptImageProcessingTester(self) @property def image_processor_dict(self): return self.image_processor_tester.prepare_image_processor_dict() def test_image_processor_properties(self): image_processing = self.image_processing_class(**self.image_processor_dict) self.assertTrue(hasattr(image_processing, 
"image_mean")) self.assertTrue(hasattr(image_processing, "image_std")) self.assertTrue(hasattr(image_processing, "do_normalize")) self.assertTrue(hasattr(image_processing, "do_resize")) self.assertTrue(hasattr(image_processing, "size")) def test_image_processor_from_dict_with_kwargs(self): image_processor = self.image_processing_class.from_dict(self.image_processor_dict) self.assertEqual(image_processor.size, {"height": 18, "width": 18}) image_processor = self.image_processing_class.from_dict(self.image_processor_dict, size=42) self.assertEqual(image_processor.size, {"height": 42, "width": 42}) def test_image_processor_palette(self): num_labels = 3 image_processing = self.image_processing_class(**self.image_processor_dict) palette = image_processing.get_palette(num_labels) self.assertEqual(len(palette), num_labels + 1) self.assertEqual(palette[0], (0, 0, 0)) def test_mask_equivalence(self): image_processor = SegGptImageProcessor() mask_binary = prepare_mask() mask_rgb = mask_binary.convert("RGB") inputs_binary = image_processor(images=None, prompt_masks=mask_binary, return_tensors="pt") inputs_rgb = image_processor(images=None, prompt_masks=mask_rgb, return_tensors="pt", do_convert_rgb=False) self.assertTrue((inputs_binary["prompt_masks"] == inputs_rgb["prompt_masks"]).all().item()) def test_mask_to_rgb(self): image_processing = self.image_processing_class(**self.image_processor_dict) mask = prepare_mask() mask = np.array(mask) mask = (mask > 0).astype(np.uint8) def check_two_colors(image, color1=(0, 0, 0), color2=(255, 255, 255)): pixels = image.transpose(1, 2, 0).reshape(-1, 3) unique_colors = np.unique(pixels, axis=0) if len(unique_colors) == 2 and (color1 in unique_colors) and (color2 in unique_colors): return True else: return False num_labels = 1 palette = image_processing.get_palette(num_labels) # Should only duplicate repeat class indices map, hence only (0,0,0) and (1,1,1) mask_duplicated = image_processing.mask_to_rgb(mask) # Mask using palette, since only 1 class is present we have colors (0,0,0) and (255,255,255) mask_painted = image_processing.mask_to_rgb(mask, palette=palette) self.assertTrue(check_two_colors(mask_duplicated, color2=(1, 1, 1))) self.assertTrue(check_two_colors(mask_painted, color2=(255, 255, 255))) def test_post_processing_semantic_segmentation(self): image_processor = self.image_processing_class(**self.image_processor_dict) outputs = self.image_processor_tester.get_fake_image_segmentation_output() post_processed = image_processor.post_process_semantic_segmentation(outputs) self.assertEqual(len(post_processed), self.image_processor_tester.batch_size) expected_semantic_map_shape = self.image_processor_tester.expected_post_processed_shape() self.assertEqual(post_processed[0].shape, expected_semantic_map_shape) @slow def test_pixel_values(self): images, masks = prepare_img() input_image = images[1] prompt_image = images[0] prompt_mask = masks[0] image_processor = SegGptImageProcessor.from_pretrained("BAAI/seggpt-vit-large") inputs = image_processor( images=input_image, prompt_images=prompt_image, prompt_masks=prompt_mask, return_tensors="pt", do_convert_rgb=False, ) # Verify pixel values expected_prompt_pixel_values = torch.tensor( [ [[-0.6965, -0.6965, -0.6965], [-0.6965, -0.6965, -0.6965], [-0.6965, -0.6965, -0.6965]], [[1.6583, 1.6583, 1.6583], [1.6583, 1.6583, 1.6583], [1.6583, 1.6583, 1.6583]], [[2.3088, 2.3088, 2.3088], [2.3088, 2.3088, 2.3088], [2.3088, 2.3088, 2.3088]], ] ) expected_pixel_values = torch.tensor( [ [[1.6324, 1.6153, 1.5810], [1.6153, 
1.5982, 1.5810], [1.5810, 1.5639, 1.5639]], [[1.2731, 1.2556, 1.2206], [1.2556, 1.2381, 1.2031], [1.2206, 1.2031, 1.1681]], [[1.6465, 1.6465, 1.6465], [1.6465, 1.6465, 1.6465], [1.6291, 1.6291, 1.6291]], ] ) expected_prompt_masks = torch.tensor( [ [[-2.1179, -2.1179, -2.1179], [-2.1179, -2.1179, -2.1179], [-2.1179, -2.1179, -2.1179]], [[-2.0357, -2.0357, -2.0357], [-2.0357, -2.0357, -2.0357], [-2.0357, -2.0357, -2.0357]], [[-1.8044, -1.8044, -1.8044], [-1.8044, -1.8044, -1.8044], [-1.8044, -1.8044, -1.8044]], ] ) torch.testing.assert_close(inputs.pixel_values[0, :, :3, :3], expected_pixel_values, rtol=1e-4, atol=1e-4) torch.testing.assert_close( inputs.prompt_pixel_values[0, :, :3, :3], expected_prompt_pixel_values, rtol=1e-4, atol=1e-4 ) torch.testing.assert_close(inputs.prompt_masks[0, :, :3, :3], expected_prompt_masks, rtol=1e-4, atol=1e-4) def test_prompt_mask_equivalence(self): image_processor = self.image_processing_class(**self.image_processor_dict) image_size = self.image_processor_tester.image_size # Single Mask Examples expected_single_shape = [1, 3, image_size, image_size] # Single Semantic Map (2D) image_np_2d = np.ones((image_size, image_size)) image_pt_2d = torch.ones((image_size, image_size)) image_pil_2d = Image.fromarray(image_np_2d) inputs_np_2d = image_processor(images=None, prompt_masks=image_np_2d, return_tensors="pt") inputs_pt_2d = image_processor(images=None, prompt_masks=image_pt_2d, return_tensors="pt") inputs_pil_2d = image_processor(images=None, prompt_masks=image_pil_2d, return_tensors="pt") self.assertTrue((inputs_np_2d["prompt_masks"] == inputs_pt_2d["prompt_masks"]).all().item()) self.assertTrue((inputs_np_2d["prompt_masks"] == inputs_pil_2d["prompt_masks"]).all().item()) self.assertEqual(list(inputs_np_2d["prompt_masks"].shape), expected_single_shape) # Single RGB Images (3D) image_np_3d = np.ones((3, image_size, image_size)) image_pt_3d = torch.ones((3, image_size, image_size)) image_pil_3d = Image.fromarray(image_np_3d.transpose(1, 2, 0).astype(np.uint8)) inputs_np_3d = image_processor( images=None, prompt_masks=image_np_3d, return_tensors="pt", do_convert_rgb=False ) inputs_pt_3d = image_processor( images=None, prompt_masks=image_pt_3d, return_tensors="pt", do_convert_rgb=False ) inputs_pil_3d = image_processor( images=None, prompt_masks=image_pil_3d, return_tensors="pt", do_convert_rgb=False ) self.assertTrue((inputs_np_3d["prompt_masks"] == inputs_pt_3d["prompt_masks"]).all().item()) self.assertTrue((inputs_np_3d["prompt_masks"] == inputs_pil_3d["prompt_masks"]).all().item()) self.assertEqual(list(inputs_np_3d["prompt_masks"].shape), expected_single_shape) # Batched Examples expected_batched_shape = [2, 3, image_size, image_size] # Batched Semantic Maps (3D) image_np_2d_batched = np.ones((2, image_size, image_size)) image_pt_2d_batched = torch.ones((2, image_size, image_size)) inputs_np_2d_batched = image_processor(images=None, prompt_masks=image_np_2d_batched, return_tensors="pt") inputs_pt_2d_batched = image_processor(images=None, prompt_masks=image_pt_2d_batched, return_tensors="pt") self.assertTrue((inputs_np_2d_batched["prompt_masks"] == inputs_pt_2d_batched["prompt_masks"]).all().item()) self.assertEqual(list(inputs_np_2d_batched["prompt_masks"].shape), expected_batched_shape) # Batched RGB images image_np_4d = np.ones((2, 3, image_size, image_size)) image_pt_4d = torch.ones((2, 3, image_size, image_size)) inputs_np_4d = image_processor( images=None, prompt_masks=image_np_4d, return_tensors="pt", do_convert_rgb=False ) inputs_pt_4d = 
image_processor( images=None, prompt_masks=image_pt_4d, return_tensors="pt", do_convert_rgb=False ) self.assertTrue((inputs_np_4d["prompt_masks"] == inputs_pt_4d["prompt_masks"]).all().item()) self.assertEqual(list(inputs_np_4d["prompt_masks"].shape), expected_batched_shape) # Comparing Single and Batched Examples self.assertTrue((inputs_np_2d["prompt_masks"][0] == inputs_np_3d["prompt_masks"][0]).all().item()) self.assertTrue((inputs_np_2d_batched["prompt_masks"][0] == inputs_np_2d["prompt_masks"][0]).all().item()) self.assertTrue((inputs_np_2d_batched["prompt_masks"][0] == inputs_np_3d["prompt_masks"][0]).all().item()) self.assertTrue((inputs_np_2d_batched["prompt_masks"][0] == inputs_np_4d["prompt_masks"][0]).all().item()) self.assertTrue((inputs_np_2d_batched["prompt_masks"][0] == inputs_np_3d["prompt_masks"][0]).all().item())
transformers/tests/models/seggpt/test_image_processing_seggpt.py/0
{ "file_path": "transformers/tests/models/seggpt/test_image_processing_seggpt.py", "repo_id": "transformers", "token_count": 6058 }
601
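# Illustrative sketch (hypothetical; not part of the test file above): the
# get_palette / mask_to_rgb helpers checked in SegGptImageProcessingTest, applied
# to a tiny synthetic class-index map instead of the seggpt-example-data dataset.
import numpy as np

from transformers import SegGptImageProcessor

image_processor = SegGptImageProcessor()

num_labels = 3
palette = image_processor.get_palette(num_labels)
print(len(palette), palette[0])  # num_labels + 1 entries; background stays (0, 0, 0)

# A 4x4 "semantic map" holding class indices 0..3.
mask = np.array(
    [
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [2, 2, 3, 3],
        [2, 2, 3, 3],
    ],
    dtype=np.uint8,
)
rgb_mask = image_processor.mask_to_rgb(mask, palette=palette)
print(rgb_mask.shape)  # channels-first RGB rendering of the class map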
# Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch SpeechT5 model.""" import copy import inspect import tempfile import unittest from transformers import SpeechT5Config, SpeechT5HifiGanConfig from transformers.testing_utils import ( is_flaky, is_torch_available, require_deterministic_for_xpu, require_sentencepiece, require_tokenizers, require_torch, slow, torch_device, ) from transformers.trainer_utils import set_seed from transformers.utils import cached_property from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ( ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor, random_attention_mask, ) from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( SpeechT5ForSpeechToSpeech, SpeechT5ForSpeechToText, SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Model, SpeechT5Processor, ) def prepare_inputs_dict( config, input_ids=None, input_values=None, decoder_input_ids=None, decoder_input_values=None, attention_mask=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, cross_attn_head_mask=None, ): if input_ids is not None: encoder_dict = {"input_ids": input_ids} else: encoder_dict = {"input_values": input_values} if decoder_input_ids is not None: decoder_dict = {"decoder_input_ids": decoder_input_ids} else: decoder_dict = {"decoder_input_values": decoder_input_values} if head_mask is None: head_mask = torch.ones(config.encoder_layers, config.encoder_attention_heads, device=torch_device) if decoder_head_mask is None: decoder_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) if cross_attn_head_mask is None: cross_attn_head_mask = torch.ones(config.decoder_layers, config.decoder_attention_heads, device=torch_device) return { **encoder_dict, **decoder_dict, "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, "head_mask": head_mask, "decoder_head_mask": decoder_head_mask, "cross_attn_head_mask": cross_attn_head_mask, } @require_torch class SpeechT5ModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=False, vocab_size=81, hidden_size=24, num_hidden_layers=2, num_attention_heads=2, intermediate_size=4, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.vocab_size = vocab_size self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size def prepare_config_and_inputs(self): input_values = floats_tensor([self.batch_size, self.seq_length, self.hidden_size], scale=1.0) attention_mask = random_attention_mask([self.batch_size, self.seq_length]) decoder_input_values = floats_tensor([self.batch_size, self.seq_length, self.hidden_size], scale=1.0) decoder_attention_mask = 
random_attention_mask([self.batch_size, self.seq_length]) config = self.get_config() inputs_dict = prepare_inputs_dict( config, input_values=input_values, decoder_input_values=decoder_input_values, attention_mask=attention_mask, decoder_attention_mask=decoder_attention_mask, ) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def get_config(self): return SpeechT5Config( vocab_size=self.vocab_size, hidden_size=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, ) def create_and_check_model_forward(self, config, inputs_dict): model = SpeechT5Model(config=config).to(torch_device).eval() input_values = inputs_dict["input_values"] attention_mask = inputs_dict["attention_mask"] decoder_input_values = inputs_dict["decoder_input_values"] result = model(input_values, attention_mask=attention_mask, decoder_input_values=decoder_input_values) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) @require_torch class SpeechT5ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (SpeechT5Model,) if is_torch_available() else () pipeline_model_mapping = ( {"automatic-speech-recognition": SpeechT5ForSpeechToText, "feature-extraction": SpeechT5Model} if is_torch_available() else {} ) is_encoder_decoder = True test_pruning = False test_headmasking = False test_resize_embeddings = False def setUp(self): self.model_tester = SpeechT5ModelTester(self) self.config_tester = ConfigTester(self, config_class=SpeechT5Config, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model_forward(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_forward(*config_and_inputs) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = [ "input_values", "attention_mask", "decoder_input_values", "decoder_attention_mask", ] expected_arg_names.extend( ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names else ["encoder_outputs"] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) @unittest.skip(reason="Model has no input_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="Model has no input_embeds") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="Decoder cannot keep gradients") def test_retain_grad_hidden_states_attentions(self): pass @slow @unittest.skip(reason="Model does not have decoder_input_ids") def test_torchscript_output_attentions(self): pass @slow @unittest.skip(reason="Model does not have decoder_input_ids") def test_torchscript_output_hidden_state(self): pass @slow @unittest.skip(reason="Model does not have decoder_input_ids") def test_torchscript_simple(self): pass @require_torch class SpeechT5ForSpeechToTextTester: def __init__( self, parent, 
batch_size=13, encoder_seq_length=1024, # speech is longer decoder_seq_length=7, is_training=False, hidden_size=24, num_hidden_layers=2, num_attention_heads=2, intermediate_size=4, conv_dim=(32, 32, 32), conv_stride=(4, 4, 4), conv_kernel=(8, 8, 8), conv_bias=False, num_conv_pos_embeddings=16, num_conv_pos_embedding_groups=2, vocab_size=81, ): self.parent = parent self.batch_size = batch_size self.encoder_seq_length = encoder_seq_length self.decoder_seq_length = decoder_seq_length self.is_training = is_training self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.conv_dim = conv_dim self.conv_stride = conv_stride self.conv_kernel = conv_kernel self.conv_bias = conv_bias self.num_conv_pos_embeddings = num_conv_pos_embeddings self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups self.vocab_size = vocab_size def prepare_config_and_inputs(self): input_values = floats_tensor([self.batch_size, self.encoder_seq_length], scale=1.0) attention_mask = random_attention_mask([self.batch_size, self.encoder_seq_length]) decoder_input_ids = ids_tensor([self.batch_size, self.decoder_seq_length], self.vocab_size).clamp(2) decoder_attention_mask = random_attention_mask([self.batch_size, self.decoder_seq_length]) config = self.get_config() inputs_dict = prepare_inputs_dict( config, input_values=input_values, decoder_input_ids=decoder_input_ids, attention_mask=attention_mask, decoder_attention_mask=decoder_attention_mask, ) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def get_config(self): return SpeechT5Config( hidden_size=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, conv_dim=self.conv_dim, conv_stride=self.conv_stride, conv_kernel=self.conv_kernel, conv_bias=self.conv_bias, num_conv_pos_embeddings=self.num_conv_pos_embeddings, num_conv_pos_embedding_groups=self.num_conv_pos_embedding_groups, vocab_size=self.vocab_size, ) def get_subsampled_output_lengths(self, input_lengths): """ Computes the output length of the convolutional layers """ for stride in self.conv_stride: input_lengths = (input_lengths // stride) - 1 return input_lengths def create_and_check_model_forward(self, config, inputs_dict): model = SpeechT5ForSpeechToText(config=config).to(torch_device).eval() input_values = inputs_dict["input_values"] attention_mask = inputs_dict["attention_mask"] decoder_input_ids = inputs_dict["decoder_input_ids"] result = model(input_values, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.decoder_seq_length, self.vocab_size)) def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict): model = SpeechT5ForSpeechToText(config=config).get_decoder().to(torch_device).eval() input_ids = inputs_dict["decoder_input_ids"] attention_mask = inputs_dict["decoder_attention_mask"] # first forward pass outputs = model(input_ids, attention_mask=attention_mask, use_cache=True) output, past_key_values = outputs.to_tuple() # create hypothetical multiple next token and extent to next_input_ids next_tokens = ids_tensor((self.batch_size, 3), 
config.vocab_size).clamp(2) next_attn_mask = ids_tensor((self.batch_size, 3), 2) # append to next input_ids and next_input_ids = torch.cat([input_ids, next_tokens], dim=-1) next_attention_mask = torch.cat([attention_mask, next_attn_mask], dim=-1) output_from_no_past = model(next_input_ids, attention_mask=next_attention_mask)["last_hidden_state"] output_from_past = model( next_tokens, attention_mask=next_attention_mask, past_key_values=past_key_values, use_cache=True )["last_hidden_state"] # select random slice random_slice_idx = ids_tensor((1,), output_from_past.shape[-1]).item() output_from_no_past_slice = output_from_no_past[:, -3:, random_slice_idx].detach() output_from_past_slice = output_from_past[:, :, random_slice_idx].detach() self.parent.assertTrue(output_from_past_slice.shape[1] == next_tokens.shape[1]) # test that outputs are equal for slice self.parent.assertTrue(torch.allclose(output_from_past_slice, output_from_no_past_slice, atol=1e-2)) @require_torch class SpeechT5ForSpeechToTextTest(ModelTesterMixin, unittest.TestCase, GenerationTesterMixin): all_model_classes = (SpeechT5ForSpeechToText,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False test_headmasking = False def setUp(self): self.model_tester = SpeechT5ForSpeechToTextTester(self) self.config_tester = ConfigTester(self, config_class=SpeechT5Config, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_save_load_strict(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True) self.assertEqual(info["missing_keys"], []) def test_model_forward(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_forward(*config_and_inputs) def test_decoder_model_past_with_large_inputs(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_decoder_model_past_large_inputs(*config_and_inputs) @unittest.skip(reason="skipped because of dropout") def test_batching_equivalence(self): pass def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", seq_len) decoder_key_length = getattr(self.model_tester, "decoder_key_length", decoder_seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class._from_config(config, attn_implementation="eager") config = model.config model.to(torch_device) model.eval() subsampled_encoder_seq_length = model.speecht5.encoder.prenet._get_feat_extract_output_lengths( encoder_seq_length ) subsampled_encoder_key_length = model.speecht5.encoder.prenet._get_feat_extract_output_lengths( encoder_key_length ) with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(attentions), 
self.model_tester.num_hidden_layers) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, subsampled_encoder_seq_length, subsampled_encoder_key_length], ) out_len = len(outputs) correct_outlen = 5 # loss is at first position if "labels" in inputs_dict: correct_outlen += 1 # loss is added to beginning if "past_key_values" in outputs: correct_outlen += 1 # past_key_values have been returned self.assertEqual(out_len, correct_outlen) # decoder attentions decoder_attentions = outputs.decoder_attentions self.assertIsInstance(decoder_attentions, (list, tuple)) self.assertEqual(len(decoder_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(decoder_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, decoder_seq_length, decoder_key_length], ) # cross attentions cross_attentions = outputs.cross_attentions self.assertIsInstance(cross_attentions, (list, tuple)) self.assertEqual(len(cross_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(cross_attentions[0].shape[-3:]), [ self.model_tester.num_attention_heads, decoder_seq_length, subsampled_encoder_key_length, ], ) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) added_hidden_states = 2 self.assertEqual(out_len + added_hidden_states, len(outputs)) self_attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(self_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, subsampled_encoder_seq_length, subsampled_encoder_key_length], ) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = [ "input_values", "attention_mask", "decoder_input_ids", "decoder_attention_mask", ] expected_arg_names.extend( ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names else ["encoder_outputs"] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.encoder_hidden_states if config.is_encoder_decoder else outputs.hidden_states expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers + 1 ) 
self.assertEqual(len(hidden_states), expected_num_layers) if hasattr(self.model_tester, "encoder_seq_length"): seq_length = self.model_tester.encoder_seq_length else: seq_length = self.model_tester.seq_length subsampled_seq_length = model.speecht5.encoder.prenet._get_feat_extract_output_lengths(seq_length) self.assertListEqual( list(hidden_states[0].shape[-2:]), [subsampled_seq_length, self.model_tester.hidden_size], ) if config.is_encoder_decoder: hidden_states = outputs.decoder_hidden_states self.assertIsInstance(hidden_states, (list, tuple)) self.assertEqual(len(hidden_states), expected_num_layers) seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) self.assertListEqual( list(hidden_states[0].shape[-2:]), [decoder_seq_length, self.model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): uniform_init_parms = [ "conv.weight", "conv.parametrizations.weight", "masked_spec_embed", "feature_projection.projection.weight", "feature_projection.projection.bias", ] if param.requires_grad: if any(x in name for x in uniform_init_parms): self.assertTrue( -1.0 <= ((param.data.mean() * 1e9).round() / 1e9).item() <= 1.0, msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) else: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) # this model has no inputs_embeds @unittest.skip(reason="Model has no input_embeds") def test_inputs_embeds(self): pass def test_resize_embeddings_untied(self): original_config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() if not self.test_resize_embeddings: self.skipTest(reason="test_resize_embeddings is set to False") original_config.tie_word_embeddings = False # if model cannot untied embeddings -> leave test if original_config.tie_word_embeddings: self.skipTest(reason="Model cannot untie embeddings") for model_class in self.all_model_classes: config = copy.deepcopy(original_config) model = model_class(config).to(torch_device) # if no output embeddings -> leave test if model.get_output_embeddings() is None: continue # Check that resizing the token embeddings with a larger vocab size increases the model's vocab size model_vocab_size = config.vocab_size model.resize_token_embeddings(model_vocab_size + 10) self.assertEqual(model.config.vocab_size, model_vocab_size + 10) output_embeds = model.get_output_embeddings() self.assertEqual(output_embeds.weight.shape[0], model_vocab_size + 10) # Check bias if present if output_embeds.bias is not None: self.assertEqual(output_embeds.bias.shape[0], model_vocab_size + 10) # Check that the model can still do a forward pass successfully (every parameter should be resized) model(**self._prepare_for_class(inputs_dict, model_class)) # Check 
that resizing the token embeddings with a smaller vocab size decreases the model's vocab size model.resize_token_embeddings(model_vocab_size - 15) self.assertEqual(model.config.vocab_size, model_vocab_size - 15) # Check that it actually resizes the embeddings matrix output_embeds = model.get_output_embeddings() self.assertEqual(output_embeds.weight.shape[0], model_vocab_size - 15) # Check bias if present if output_embeds.bias is not None: self.assertEqual(output_embeds.bias.shape[0], model_vocab_size - 15) # Check that the model can still do a forward pass successfully (every parameter should be resized) if "decoder_input_ids" in inputs_dict: inputs_dict["decoder_input_ids"].clamp_(max=model_vocab_size - 15 - 1) # Check that the model can still do a forward pass successfully (every parameter should be resized) model(**self._prepare_for_class(inputs_dict, model_class)) def test_resize_tokens_embeddings(self): original_config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() if not self.test_resize_embeddings: self.skipTest(reason="test_resize_embeddings is set to False") for model_class in self.all_model_classes: config = copy.deepcopy(original_config) model = model_class(config) model.to(torch_device) if self.model_tester.is_training is False: model.eval() model_vocab_size = config.vocab_size # Retrieve the embeddings and clone theme model_embed = model.resize_token_embeddings(model_vocab_size) cloned_embeddings = model_embed.weight.clone() # Check that resizing the token embeddings with a larger vocab size increases the model's vocab size model_embed = model.resize_token_embeddings(model_vocab_size + 10) self.assertEqual(model.config.vocab_size, model_vocab_size + 10) # Check that it actually resizes the embeddings matrix self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10) # Check that the model can still do a forward pass successfully (every parameter should be resized) model(**self._prepare_for_class(inputs_dict, model_class)) # Check that resizing the token embeddings with a smaller vocab size decreases the model's vocab size model_embed = model.resize_token_embeddings(model_vocab_size - 15) self.assertEqual(model.config.vocab_size, model_vocab_size - 15) # Check that it actually resizes the embeddings matrix self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] - 15) # make sure that decoder_input_ids are resized if "decoder_input_ids" in inputs_dict: inputs_dict["decoder_input_ids"].clamp_(max=model_vocab_size - 15 - 1) model(**self._prepare_for_class(inputs_dict, model_class)) # Check that adding and removing tokens has not modified the first part of the embedding matrix. 
models_equal = True for p1, p2 in zip(cloned_embeddings, model_embed.weight): if p1.data.ne(p2.data).sum() > 0: models_equal = False self.assertTrue(models_equal) @unittest.skip(reason="Decoder cannot keep gradients") def test_retain_grad_hidden_states_attentions(self): # decoder cannot keep gradients pass @unittest.skip(reason="Training is not supported yet") def test_training(self): pass @unittest.skip(reason="Training is not supported yet") def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass @is_flaky(max_attempts=5, description="Flaky for some input configurations.") def test_past_key_values_format(self): super().test_past_key_values_format() # overwrite from test_modeling_common def _mock_init_weights(self, module): if hasattr(module, "weight") and module.weight is not None: module.weight.data.fill_(3) if hasattr(module, "weight_g") and module.weight_g is not None: module.weight_g.data.fill_(3) if hasattr(module, "weight_v") and module.weight_v is not None: module.weight_v.data.fill_(3) if hasattr(module, "bias") and module.bias is not None: module.bias.data.fill_(3) if hasattr(module, "masked_spec_embed") and module.masked_spec_embed is not None: module.masked_spec_embed.data.fill_(3) @unittest.skip(reason="Temporarily broken") # TODO (joao, eustache): have a look at this test def test_generate_without_input_ids(self): pass @unittest.skip(reason="Very flaky") # TODO (joao, eustache): have a look at this test def test_generate_continue_from_past_key_values(self): pass @require_torch @require_sentencepiece @require_tokenizers @slow class SpeechT5ForSpeechToTextIntegrationTests(unittest.TestCase): @cached_property def default_processor(self): return SpeechT5Processor.from_pretrained("microsoft/speecht5_asr") def _load_datasamples(self, num_samples): from datasets import load_dataset ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # automatic decoding with librispeech speech_samples = ds.sort("id")[:num_samples]["audio"] return [x["array"] for x in speech_samples] def test_generation_librispeech(self): model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr") model.to(torch_device) processor = self.default_processor input_speech = self._load_datasamples(1) input_values = processor(audio=input_speech, return_tensors="pt").input_values.to(torch_device) generated_ids = model.generate(input_values) generated_transcript = processor.batch_decode(generated_ids, skip_special_tokens=True) EXPECTED_TRANSCRIPTIONS = [ "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel" ] self.assertListEqual(generated_transcript, EXPECTED_TRANSCRIPTIONS) def test_generation_librispeech_batched(self): model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr") model.to(torch_device) processor = self.default_processor input_speech = self._load_datasamples(4) inputs = processor(audio=input_speech, return_tensors="pt", padding=True) input_values = inputs.input_values.to(torch_device) attention_mask = inputs.attention_mask.to(torch_device) generated_ids 
= model.generate(input_values, attention_mask=attention_mask) generated_transcripts = processor.batch_decode(generated_ids, skip_special_tokens=True) EXPECTED_TRANSCRIPTIONS = [ "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel", "nor is mister quilter's manner less interesting than his matter", "he tells us that at this festive season of the year with christmas and rosebeaf looming before us" " similars drawn from eating and its results occur most readily to the mind", "he has grave doubts whether sir frederick latin's work is really greek after all and can discover in it" " but little of rocky ithica", ] self.assertListEqual(generated_transcripts, EXPECTED_TRANSCRIPTIONS) @require_torch class SpeechT5ForTextToSpeechTester: def __init__( self, parent, batch_size=13, encoder_seq_length=7, decoder_seq_length=1024, # speech is longer is_training=False, hidden_size=24, num_hidden_layers=2, num_attention_heads=2, intermediate_size=4, vocab_size=81, num_mel_bins=20, reduction_factor=2, speech_decoder_postnet_layers=2, speech_decoder_postnet_units=32, speech_decoder_prenet_units=32, ): self.parent = parent self.batch_size = batch_size self.encoder_seq_length = encoder_seq_length self.decoder_seq_length = decoder_seq_length self.is_training = is_training self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.vocab_size = vocab_size self.num_mel_bins = num_mel_bins self.reduction_factor = reduction_factor self.speech_decoder_postnet_layers = speech_decoder_postnet_layers self.speech_decoder_postnet_units = speech_decoder_postnet_units self.speech_decoder_prenet_units = speech_decoder_prenet_units def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.encoder_seq_length], self.vocab_size).clamp(2) attention_mask = random_attention_mask([self.batch_size, self.encoder_seq_length]) decoder_input_values = floats_tensor([self.batch_size, self.decoder_seq_length, self.num_mel_bins], scale=1.0) decoder_attention_mask = random_attention_mask([self.batch_size, self.decoder_seq_length]) config = self.get_config() inputs_dict = prepare_inputs_dict( config, input_ids=input_ids, decoder_input_values=decoder_input_values, attention_mask=attention_mask, decoder_attention_mask=decoder_attention_mask, ) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def get_config(self): return SpeechT5Config( hidden_size=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, vocab_size=self.vocab_size, num_mel_bins=self.num_mel_bins, reduction_factor=self.reduction_factor, speech_decoder_postnet_layers=self.speech_decoder_postnet_layers, speech_decoder_postnet_units=self.speech_decoder_postnet_units, speech_decoder_prenet_units=self.speech_decoder_prenet_units, ) def create_and_check_model_forward(self, config, inputs_dict): model = SpeechT5ForTextToSpeech(config=config).to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] decoder_input_values = inputs_dict["decoder_input_values"] result = model(input_ids, attention_mask=attention_mask, 
decoder_input_values=decoder_input_values) self.parent.assertEqual( result.spectrogram.shape, (self.batch_size, self.decoder_seq_length * self.reduction_factor, self.num_mel_bins), ) @require_torch class SpeechT5ForTextToSpeechTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (SpeechT5ForTextToSpeech,) if is_torch_available() else () all_generative_model_classes = () is_encoder_decoder = True test_pruning = False test_headmasking = False def setUp(self): self.model_tester = SpeechT5ForTextToSpeechTester(self) self.config_tester = ConfigTester(self, config_class=SpeechT5Config, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model_can_generate(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) self.assertTrue(model.can_generate()) def test_save_load_strict(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True) self.assertEqual(info["missing_keys"], []) def test_model_forward(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_forward(*config_and_inputs) def test_model_forward_with_labels(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() model = SpeechT5ForTextToSpeech(config=config).to(torch_device).eval() input_ids = inputs_dict["input_ids"] attention_mask = inputs_dict["attention_mask"] decoder_attention_mask = inputs_dict["decoder_attention_mask"] labels = inputs_dict["decoder_input_values"] result = model( input_ids, attention_mask=attention_mask, labels=labels, decoder_attention_mask=decoder_attention_mask ) self.assertEqual( result.spectrogram.shape, (self.model_tester.batch_size, self.model_tester.decoder_seq_length, self.model_tester.num_mel_bins), ) @unittest.skip(reason="Dropout is always present in SpeechT5SpeechDecoderPrenet") def test_decoder_model_past_with_large_inputs(self): pass @unittest.skip(reason="Dropout is always present in SpeechT5SpeechDecoderPrenet") def test_determinism(self): pass @unittest.skip(reason="skipped because there is always dropout in SpeechT5SpeechDecoderPrenet") def test_batching_equivalence(self): pass def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = [ "input_ids", "attention_mask", "decoder_input_values", "decoder_attention_mask", ] expected_arg_names.extend( ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names else ["encoder_outputs"] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): uniform_init_parms = [ "conv.weight", ] if param.requires_grad: 
if any(x in name for x in uniform_init_parms): self.assertTrue( -1.0 <= ((param.data.mean() * 1e9).round() / 1e9).item() <= 1.0, msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) else: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) @unittest.skip(reason="Model has no inputs_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="Dropout is always present in SpeechT5SpeechDecoderPrenet") def test_model_outputs_equivalence(self): pass @unittest.skip(reason="Dropout is always present in SpeechT5SpeechDecoderPrenet") def test_save_load(self): pass @unittest.skip(reason="Decoder cannot keep gradients") def test_retain_grad_hidden_states_attentions(self): pass @slow @unittest.skip(reason="Model doesn't have decoder_input_ids") def test_torchscript_output_attentions(self): pass @slow @unittest.skip(reason="Model doesn't have decoder_input_ids") def test_torchscript_output_hidden_state(self): pass @slow @unittest.skip(reason="Model doesn't have decoder_input_ids") def test_torchscript_simple(self): # disabled because this model doesn't have decoder_input_ids pass @unittest.skip(reason="training is not supported yet") def test_training(self): pass @unittest.skip(reason="training is not supported yet") def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass # overwrite from test_modeling_common def _mock_init_weights(self, module): if hasattr(module, "weight") and module.weight is not None: module.weight.data.fill_(3) if hasattr(module, "weight_g") and module.weight_g is not None: module.weight_g.data.fill_(3) if hasattr(module, "weight_v") and module.weight_v is not None: module.weight_v.data.fill_(3) if hasattr(module, "bias") and module.bias is not None: module.bias.data.fill_(3) @require_torch @require_sentencepiece @require_tokenizers class SpeechT5ForTextToSpeechIntegrationTests(unittest.TestCase): @cached_property def default_model(self): return SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts").to(torch_device) @cached_property def default_processor(self): return SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") @cached_property def default_vocoder(self): return SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan").to(torch_device) def test_generation(self): model = self.default_model processor = self.default_processor input_text = "Mister Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." input_ids = processor(text=input_text, return_tensors="pt").input_ids.to(torch_device) speaker_embeddings = torch.zeros((1, 512), device=torch_device) # Generate speech and validate output dimensions set_seed(555) # Ensure deterministic behavior generated_speech = model.generate_speech(input_ids, speaker_embeddings=speaker_embeddings) num_mel_bins = model.config.num_mel_bins self.assertEqual( generated_speech.shape[1], num_mel_bins, "Generated speech output has an unexpected number of mel bins." 
) # Validate generation with additional kwargs using model.generate; # same method than generate_speech set_seed(555) # Reset seed for consistent results generated_speech_with_generate = model.generate( input_ids, attention_mask=None, speaker_embeddings=speaker_embeddings ) self.assertEqual( generated_speech_with_generate.shape, generated_speech.shape, "Shape mismatch between generate_speech and generate methods.", ) @require_deterministic_for_xpu def test_one_to_many_generation(self): model = self.default_model processor = self.default_processor vocoder = self.default_vocoder input_text = [ "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel", "nor is mister quilter's manner less interesting than his matter", "he tells us that at this festive season of the year with christmas and rosebeaf looming before us", ] inputs = processor(text=input_text, padding="max_length", max_length=128, return_tensors="pt").to(torch_device) speaker_embeddings = torch.zeros((1, 512), device=torch_device) # Generate spectrograms set_seed(555) # Ensure deterministic behavior spectrograms, spectrogram_lengths = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, attention_mask=inputs["attention_mask"], return_output_lengths=True, ) # Validate generated spectrogram dimensions expected_batch_size = len(input_text) num_mel_bins = model.config.num_mel_bins actual_batch_size, _, actual_num_mel_bins = spectrograms.shape self.assertEqual(actual_batch_size, expected_batch_size, "Batch size of generated spectrograms is incorrect.") self.assertEqual( actual_num_mel_bins, num_mel_bins, "Number of mel bins in batch generated spectrograms is incorrect." ) # Generate waveforms using the vocoder waveforms = vocoder(spectrograms) waveform_lengths = [int(waveforms.size(1) / max(spectrogram_lengths)) * i for i in spectrogram_lengths] # Validate generation with integrated vocoder set_seed(555) # Reset seed for consistent results waveforms_with_vocoder, waveform_lengths_with_vocoder = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, attention_mask=inputs["attention_mask"], vocoder=vocoder, return_output_lengths=True, ) # Check consistency between waveforms generated with and without standalone vocoder self.assertTrue( torch.allclose(waveforms, waveforms_with_vocoder, atol=1e-8), "Mismatch in waveforms generated with and without the standalone vocoder.", ) self.assertEqual( waveform_lengths, waveform_lengths_with_vocoder, "Waveform lengths differ between standalone and integrated vocoder generation.", ) # Test generation consistency without returning lengths set_seed(555) # Reset seed for consistent results waveforms_with_vocoder_no_lengths = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, attention_mask=inputs["attention_mask"], vocoder=vocoder, return_output_lengths=False, ) # Validate waveform consistency without length information self.assertTrue( torch.allclose(waveforms_with_vocoder_no_lengths, waveforms_with_vocoder, atol=1e-8), "Waveforms differ when generated with and without length information.", ) # Validate batch vs. 
single instance generation consistency for i, text in enumerate(input_text): inputs = processor(text=text, padding="max_length", max_length=128, return_tensors="pt").to(torch_device) set_seed(555) # Reset seed for consistent results spectrogram = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, ) # Check spectrogram shape consistency self.assertEqual( spectrogram.shape, spectrograms[i][: spectrogram_lengths[i]].shape, "Mismatch in spectrogram shape between batch and single instance generation.", ) # Generate and validate waveform for single instance waveform = vocoder(spectrogram) self.assertEqual( waveform.shape, waveforms[i][: waveform_lengths[i]].shape, "Mismatch in waveform shape between batch and single instance generation.", ) # Check waveform consistency with integrated vocoder set_seed(555) # Reset seed for consistent results waveform_with_integrated_vocoder = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, vocoder=vocoder, ) self.assertTrue( torch.allclose(waveform, waveform_with_integrated_vocoder, atol=1e-8), "Mismatch in waveform between standalone and integrated vocoder for single instance generation.", ) @require_deterministic_for_xpu def test_batch_generation(self): model = self.default_model processor = self.default_processor vocoder = self.default_vocoder input_text = [ "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel", "nor is mister quilter's manner less interesting than his matter", "he tells us that at this festive season of the year with christmas and rosebeaf looming before us", ] inputs = processor(text=input_text, padding="max_length", max_length=128, return_tensors="pt").to(torch_device) set_seed(555) # Ensure deterministic behavior speaker_embeddings = torch.randn((len(input_text), 512), device=torch_device) # Generate spectrograms set_seed(555) # Reset seed for consistent results spectrograms, spectrogram_lengths = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, attention_mask=inputs["attention_mask"], return_output_lengths=True, ) # Validate generated spectrogram dimensions expected_batch_size = len(input_text) num_mel_bins = model.config.num_mel_bins actual_batch_size, _, actual_num_mel_bins = spectrograms.shape self.assertEqual( actual_batch_size, expected_batch_size, "Batch size of generated spectrograms is incorrect.", ) self.assertEqual( actual_num_mel_bins, num_mel_bins, "Number of mel bins in batch generated spectrograms is incorrect.", ) # Generate waveforms using the vocoder waveforms = vocoder(spectrograms) waveform_lengths = [int(waveforms.size(1) / max(spectrogram_lengths)) * i for i in spectrogram_lengths] # Validate generation with integrated vocoder set_seed(555) # Reset seed for consistent results waveforms_with_vocoder, waveform_lengths_with_vocoder = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, attention_mask=inputs["attention_mask"], vocoder=vocoder, return_output_lengths=True, ) # Check consistency between waveforms generated with and without standalone vocoder self.assertTrue( torch.allclose(waveforms, waveforms_with_vocoder, atol=1e-8), "Mismatch in waveforms generated with and without the standalone vocoder.", ) self.assertEqual( waveform_lengths, waveform_lengths_with_vocoder, "Waveform lengths differ between standalone and integrated vocoder generation.", ) # Test generation consistency without returning lengths 
set_seed(555) # Reset seed for consistent results waveforms_with_vocoder_no_lengths = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, attention_mask=inputs["attention_mask"], vocoder=vocoder, return_output_lengths=False, ) # Validate waveform consistency without length information self.assertTrue( torch.allclose(waveforms_with_vocoder_no_lengths, waveforms_with_vocoder, atol=1e-8), "Waveforms differ when generated with and without length information.", ) # Validate batch vs. single instance generation consistency for i, text in enumerate(input_text): inputs = processor(text=text, padding="max_length", max_length=128, return_tensors="pt").to(torch_device) current_speaker_embedding = speaker_embeddings[i].unsqueeze(0) set_seed(555) # Reset seed for consistent results spectrogram = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=current_speaker_embedding, ) # Check spectrogram shape consistency self.assertEqual( spectrogram.shape, spectrograms[i][: spectrogram_lengths[i]].shape, "Mismatch in spectrogram shape between batch and single instance generation.", ) # Generate and validate waveform for single instance waveform = vocoder(spectrogram) self.assertEqual( waveform.shape, waveforms[i][: waveform_lengths[i]].shape, "Mismatch in waveform shape between batch and single instance generation.", ) # Check waveform consistency with integrated vocoder set_seed(555) # Reset seed for consistent results waveform_with_integrated_vocoder = model.generate_speech( input_ids=inputs["input_ids"], speaker_embeddings=current_speaker_embedding, vocoder=vocoder, ) self.assertTrue( torch.allclose(waveform, waveform_with_integrated_vocoder, atol=1e-8), "Mismatch in waveform between standalone and integrated vocoder for single instance generation.", ) @require_torch class SpeechT5ForSpeechToSpeechTester: def __init__( self, parent, batch_size=13, encoder_seq_length=1024, # speech is longer decoder_seq_length=1024, is_training=False, hidden_size=24, num_hidden_layers=2, num_attention_heads=2, intermediate_size=4, conv_dim=(32, 32, 32), conv_stride=(4, 4, 4), conv_kernel=(8, 8, 8), conv_bias=False, num_conv_pos_embeddings=16, num_conv_pos_embedding_groups=2, vocab_size=81, num_mel_bins=20, reduction_factor=2, speech_decoder_postnet_layers=2, speech_decoder_postnet_units=32, speech_decoder_prenet_units=32, ): self.parent = parent self.batch_size = batch_size self.encoder_seq_length = encoder_seq_length self.decoder_seq_length = decoder_seq_length self.is_training = is_training self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.conv_dim = conv_dim self.conv_stride = conv_stride self.conv_kernel = conv_kernel self.conv_bias = conv_bias self.num_conv_pos_embeddings = num_conv_pos_embeddings self.num_conv_pos_embedding_groups = num_conv_pos_embedding_groups self.vocab_size = vocab_size self.num_mel_bins = num_mel_bins self.reduction_factor = reduction_factor self.speech_decoder_postnet_layers = speech_decoder_postnet_layers self.speech_decoder_postnet_units = speech_decoder_postnet_units self.speech_decoder_prenet_units = speech_decoder_prenet_units def prepare_config_and_inputs(self): input_values = floats_tensor([self.batch_size, self.encoder_seq_length], scale=1.0) attention_mask = random_attention_mask([self.batch_size, self.encoder_seq_length]) decoder_input_values = floats_tensor([self.batch_size, self.decoder_seq_length, 
self.num_mel_bins], scale=1.0) decoder_attention_mask = random_attention_mask([self.batch_size, self.decoder_seq_length]) config = self.get_config() inputs_dict = prepare_inputs_dict( config, input_values=input_values, decoder_input_values=decoder_input_values, attention_mask=attention_mask, decoder_attention_mask=decoder_attention_mask, ) return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def get_config(self): return SpeechT5Config( hidden_size=self.hidden_size, encoder_layers=self.num_hidden_layers, decoder_layers=self.num_hidden_layers, encoder_attention_heads=self.num_attention_heads, decoder_attention_heads=self.num_attention_heads, encoder_ffn_dim=self.intermediate_size, decoder_ffn_dim=self.intermediate_size, conv_dim=self.conv_dim, conv_stride=self.conv_stride, conv_kernel=self.conv_kernel, conv_bias=self.conv_bias, num_conv_pos_embeddings=self.num_conv_pos_embeddings, num_conv_pos_embedding_groups=self.num_conv_pos_embedding_groups, vocab_size=self.vocab_size, num_mel_bins=self.num_mel_bins, reduction_factor=self.reduction_factor, speech_decoder_postnet_layers=self.speech_decoder_postnet_layers, speech_decoder_postnet_units=self.speech_decoder_postnet_units, speech_decoder_prenet_units=self.speech_decoder_prenet_units, ) def create_and_check_model_forward(self, config, inputs_dict): model = SpeechT5ForSpeechToSpeech(config=config).to(torch_device).eval() input_values = inputs_dict["input_values"] attention_mask = inputs_dict["attention_mask"] decoder_input_values = inputs_dict["decoder_input_values"] result = model(input_values, attention_mask=attention_mask, decoder_input_values=decoder_input_values) self.parent.assertEqual( result.spectrogram.shape, (self.batch_size, self.decoder_seq_length * self.reduction_factor, self.num_mel_bins), ) @require_torch class SpeechT5ForSpeechToSpeechTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (SpeechT5ForSpeechToSpeech,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False test_headmasking = False test_resize_embeddings = False def setUp(self): self.model_tester = SpeechT5ForSpeechToSpeechTester(self) self.config_tester = ConfigTester(self, config_class=SpeechT5Config, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_save_load_strict(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() for model_class in self.all_model_classes: model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model2, info = model_class.from_pretrained(tmpdirname, output_loading_info=True) self.assertEqual(info["missing_keys"], []) def test_model_forward(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_forward(*config_and_inputs) def test_model_forward_with_labels(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs() model = SpeechT5ForSpeechToSpeech(config=config).to(torch_device).eval() input_values = inputs_dict["input_values"] attention_mask = inputs_dict["attention_mask"] decoder_attention_mask = inputs_dict["decoder_attention_mask"] labels = inputs_dict["decoder_input_values"] result = model( input_values, attention_mask=attention_mask, labels=labels, decoder_attention_mask=decoder_attention_mask ) self.assertEqual( result.spectrogram.shape, (self.model_tester.batch_size, self.model_tester.decoder_seq_length, 
self.model_tester.num_mel_bins), ) @unittest.skip(reason="There is always dropout in SpeechT5SpeechDecoderPrenet") def test_decoder_model_past_with_large_inputs(self): pass @unittest.skip(reason="There is always dropout in SpeechT5SpeechDecoderPrenet") def test_determinism(self): pass @unittest.skip(reason="skipped because there is always dropout in SpeechT5SpeechDecoderPrenet") def test_batching_equivalence(self): pass def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", seq_len) decoder_key_length = getattr(self.model_tester, "decoder_key_length", decoder_seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class._from_config(config, attn_implementation="eager") config = model.config model.to(torch_device) model.eval() subsampled_encoder_seq_length = model.speecht5.encoder.prenet._get_feat_extract_output_lengths( encoder_seq_length ) subsampled_encoder_key_length = model.speecht5.encoder.prenet._get_feat_extract_output_lengths( encoder_key_length ) with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, subsampled_encoder_seq_length, subsampled_encoder_key_length], ) out_len = len(outputs) correct_outlen = 5 # loss is at first position if "labels" in inputs_dict: correct_outlen += 1 # loss is added to beginning if "past_key_values" in outputs: correct_outlen += 1 # past_key_values have been returned self.assertEqual(out_len, correct_outlen) # decoder attentions decoder_attentions = outputs.decoder_attentions self.assertIsInstance(decoder_attentions, (list, tuple)) self.assertEqual(len(decoder_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(decoder_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, decoder_seq_length, decoder_key_length], ) # cross attentions cross_attentions = outputs.cross_attentions self.assertIsInstance(cross_attentions, (list, tuple)) self.assertEqual(len(cross_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(cross_attentions[0].shape[-3:]), [ self.model_tester.num_attention_heads, decoder_seq_length, subsampled_encoder_key_length, ], ) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = 
model(**self._prepare_for_class(inputs_dict, model_class)) added_hidden_states = 2 self.assertEqual(out_len + added_hidden_states, len(outputs)) self_attentions = outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(self_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, subsampled_encoder_seq_length, subsampled_encoder_key_length], ) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = [ "input_values", "attention_mask", "decoder_input_values", "decoder_attention_mask", ] expected_arg_names.extend( ["head_mask", "decoder_head_mask", "cross_attn_head_mask", "encoder_outputs"] if "head_mask" and "decoder_head_mask" and "cross_attn_head_mask" in arg_names else ["encoder_outputs"] ) self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.encoder_hidden_states if config.is_encoder_decoder else outputs.hidden_states expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers + 1 ) self.assertEqual(len(hidden_states), expected_num_layers) if hasattr(self.model_tester, "encoder_seq_length"): seq_length = self.model_tester.encoder_seq_length else: seq_length = self.model_tester.seq_length subsampled_seq_length = model.speecht5.encoder.prenet._get_feat_extract_output_lengths(seq_length) self.assertListEqual( list(hidden_states[0].shape[-2:]), [subsampled_seq_length, self.model_tester.hidden_size], ) if config.is_encoder_decoder: hidden_states = outputs.decoder_hidden_states self.assertIsInstance(hidden_states, (list, tuple)) self.assertEqual(len(hidden_states), expected_num_layers) seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) self.assertListEqual( list(hidden_states[0].shape[-2:]), [decoder_seq_length, self.model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): uniform_init_parms = [ "conv.weight", "conv.parametrizations.weight", "masked_spec_embed", "feature_projection.projection.weight", "feature_projection.projection.bias", ] if param.requires_grad: if any(x in name for x in uniform_init_parms): self.assertTrue( -1.0 <= ((param.data.mean() 
* 1e9).round() / 1e9).item() <= 1.0, msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) else: self.assertIn( ((param.data.mean() * 1e9).round() / 1e9).item(), [0.0, 1.0], msg=f"Parameter {name} of model {model_class} seems not properly initialized", ) @unittest.skip(reason="Model has no input_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="Model has no input_embeds") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="Dropout is always present in SpeechT5SpeechDecoderPrenet") def test_model_outputs_equivalence(self): pass @unittest.skip(reason="Decoder cannot keep gradients") def test_retain_grad_hidden_states_attentions(self): pass @unittest.skip(reason="Dropout is always present in SpeechT5SpeechDecoderPrenet") def test_save_load(self): pass @slow @unittest.skip(reason="Model doesn't have decoder_input_ids") def test_torchscript_output_attentions(self): pass @slow @unittest.skip(reason="Model doesn't have decoder_input_ids") def test_torchscript_output_hidden_state(self): pass @slow @unittest.skip(reason="Model doesn't have decoder_input_ids") def test_torchscript_simple(self): pass @unittest.skip(reason="Training is not supported yet") def test_training(self): pass @unittest.skip(reason="Training is not supported yet") def test_training_gradient_checkpointing(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant(self): pass @unittest.skip( reason="This architecture seem to not compute gradients properly when using GC, check: https://github.com/huggingface/transformers/pull/27124" ) def test_training_gradient_checkpointing_use_reentrant_false(self): pass # overwrite from test_modeling_common def _mock_init_weights(self, module): if hasattr(module, "weight") and module.weight is not None: module.weight.data.fill_(3) if hasattr(module, "weight_g") and module.weight_g is not None: module.weight_g.data.fill_(3) if hasattr(module, "weight_v") and module.weight_v is not None: module.weight_v.data.fill_(3) if hasattr(module, "bias") and module.bias is not None: module.bias.data.fill_(3) if hasattr(module, "masked_spec_embed") and module.masked_spec_embed is not None: module.masked_spec_embed.data.fill_(3) @require_torch @require_sentencepiece @require_tokenizers @slow class SpeechT5ForSpeechToSpeechIntegrationTests(unittest.TestCase): @cached_property def default_processor(self): return SpeechT5Processor.from_pretrained("microsoft/speecht5_vc") def _load_datasamples(self, num_samples): from datasets import load_dataset ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") # automatic decoding with librispeech speech_samples = ds.sort("id")[:num_samples]["audio"] return [x["array"] for x in speech_samples] def test_generation_librispeech(self): model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc") model.to(torch_device) processor = self.default_processor input_speech = self._load_datasamples(1) input_values = processor(audio=input_speech, return_tensors="pt").input_values.to(torch_device) speaker_embeddings = torch.zeros((1, 512), device=torch_device) generated_speech = model.generate_speech(input_values, speaker_embeddings=speaker_embeddings) self.assertEqual(generated_speech.shape[1], model.config.num_mel_bins) self.assertGreaterEqual(generated_speech.shape[0], 300) 
self.assertLessEqual(generated_speech.shape[0], 310) class SpeechT5HifiGanTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=False, num_mel_bins=20, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.num_mel_bins = num_mel_bins def prepare_config_and_inputs(self): input_values = floats_tensor([self.seq_length, self.num_mel_bins], scale=1.0) config = self.get_config() return config, input_values def get_config(self): return SpeechT5HifiGanConfig( model_in_dim=self.num_mel_bins, upsample_initial_channel=32, ) def create_and_check_model(self, config, input_values): model = SpeechT5HifiGan(config=config).to(torch_device).eval() result = model(input_values) self.parent.assertEqual(result.shape, (self.seq_length * 256,)) def prepare_config_and_inputs_for_common(self): config, input_values = self.prepare_config_and_inputs() inputs_dict = {"spectrogram": input_values} return config, inputs_dict @require_torch class SpeechT5HifiGanTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (SpeechT5HifiGan,) if is_torch_available() else () test_torchscript = False test_pruning = False test_resize_embeddings = False test_resize_position_embeddings = False test_head_masking = False test_mismatched_shapes = False test_missing_keys = False test_model_parallel = False is_encoder_decoder = False has_attentions = False def setUp(self): self.model_tester = SpeechT5HifiGanTester(self) self.config_tester = ConfigTester(self, config_class=SpeechT5HifiGanConfig) def test_config(self): self.config_tester.create_and_test_config_to_json_string() self.config_tester.create_and_test_config_to_json_file() self.config_tester.create_and_test_config_from_and_save_pretrained() self.config_tester.create_and_test_config_from_and_save_pretrained_subfolder() self.config_tester.create_and_test_config_with_num_labels() self.config_tester.check_config_can_be_init_without_params() self.config_tester.check_config_arguments_init() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = [ "spectrogram", ] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) @unittest.skip(reason="Model does not output hidden states") def test_hidden_states_output(self): pass @unittest.skip def test_initialization(self): pass @unittest.skip(reason="Model has no input_embeds") def test_inputs_embeds(self): pass @unittest.skip(reason="Model has no input_embeds") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="Model does not support all arguments tested") def test_model_outputs_equivalence(self): pass @unittest.skip(reason="Model does not output hidden states") def test_retain_grad_hidden_states_attentions(self): pass def test_batched_inputs_outputs(self): config, inputs = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() batched_inputs = inputs["spectrogram"].unsqueeze(0).repeat(2, 1, 1) with torch.no_grad(): batched_outputs = 
model(batched_inputs.to(torch_device)) self.assertEqual( batched_inputs.shape[0], batched_outputs.shape[0], msg="Got different batch dims for input and output" ) def test_unbatched_inputs_outputs(self): config, inputs = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(inputs["spectrogram"].to(torch_device)) self.assertTrue(outputs.dim() == 1, msg="Got un-batched inputs but batched output")
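# A minimal, self-contained sketch of the text-to-speech path that the integration tests
# above exercise: SpeechT5ForTextToSpeech produces a mel spectrogram and SpeechT5HifiGan
# turns it into a waveform when passed as `vocoder`. The checkpoint names and the zero
# speaker embedding mirror the tests; the helper name below is illustrative only, and real
# usage would load a learned x-vector speaker embedding instead of zeros.
def _example_speecht5_text_to_speech():
    import torch

    from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

    processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
    model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

    inputs = processor(text="Mister Quilter is the apostle of the middle classes.", return_tensors="pt")
    speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker embedding, same shape as in the tests

    # With a vocoder, generate_speech returns a 1-D waveform; without one it returns a
    # (num_frames, num_mel_bins) spectrogram, which is what the shape assertions above check.
    waveform = model.generate_speech(
        inputs["input_ids"], speaker_embeddings=speaker_embeddings, vocoder=vocoder
    )
    return waveform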
transformers/tests/models/speecht5/test_modeling_speecht5.py/0
{ "file_path": "transformers/tests/models/speecht5/test_modeling_speecht5.py", "repo_id": "transformers", "token_count": 36550 }
602
# Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch TimeSformer model.""" import copy import unittest import numpy as np from huggingface_hub import hf_hub_download from transformers import TimesformerConfig from transformers.models.auto import get_values from transformers.testing_utils import Expectations, require_torch, require_vision, slow, torch_device from transformers.utils import cached_property, is_torch_available, is_vision_available from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from torch import nn from transformers import ( MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING, TimesformerForVideoClassification, TimesformerModel, ) if is_vision_available(): from transformers import VideoMAEImageProcessor class TimesformerModelTester: def __init__( self, parent, batch_size=13, image_size=10, num_channels=3, patch_size=2, num_frames=2, is_training=True, use_labels=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, num_labels=10, initializer_range=0.02, attention_type="divided_space_time", scope=None, ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.num_channels = num_channels self.patch_size = patch_size self.num_frames = num_frames self.is_training = is_training self.use_labels = use_labels self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.attention_type = attention_type self.initializer_range = initializer_range self.scope = scope self.num_labels = num_labels # in TimeSformer, the number of spatial tokens equals num_frames * num_patches per frame + 1 CLS token self.num_patches_per_frame = (image_size // patch_size) ** 2 self.seq_length = (num_frames) * self.num_patches_per_frame + 1 def prepare_config_and_inputs(self): pixel_values = floats_tensor( [self.batch_size, self.num_frames, self.num_channels, self.image_size, self.image_size] ) labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.num_labels) config = self.get_config() return config, pixel_values, labels def get_config(self): config = TimesformerConfig( image_size=self.image_size, patch_size=self.patch_size, num_channels=self.num_channels, num_frames=self.num_frames, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, 
attention_probs_dropout_prob=self.attention_probs_dropout_prob, initializer_range=self.initializer_range, attention_type=self.attention_type, ) config.num_labels = self.num_labels return config def create_and_check_model(self, config, pixel_values, labels): model = TimesformerModel(config=config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_for_video_classification(self, config, pixel_values, labels): model = TimesformerForVideoClassification(config) model.to(torch_device) model.eval() result = model(pixel_values) # verify the logits shape expected_shape = torch.Size((self.batch_size, self.num_labels)) self.parent.assertEqual(result.logits.shape, expected_shape) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class TimesformerModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as TimeSformer does not use input_ids, inputs_embeds, attention_mask and seq_length. """ all_model_classes = (TimesformerModel, TimesformerForVideoClassification) if is_torch_available() else () pipeline_model_mapping = ( {"feature-extraction": TimesformerModel, "video-classification": TimesformerForVideoClassification} if is_torch_available() else {} ) test_pruning = False test_torchscript = False test_resize_embeddings = False test_head_masking = False test_torch_exportable = True def setUp(self): self.model_tester = TimesformerModelTester(self) self.config_tester = ConfigTester( self, config_class=TimesformerConfig, has_text_modality=False, hidden_size=37 ) def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = copy.deepcopy(inputs_dict) if return_labels: if model_class in get_values(MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING): inputs_dict["labels"] = torch.zeros( self.model_tester.batch_size, dtype=torch.long, device=torch_device ) return inputs_dict def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="TimeSformer does not use inputs_embeds") def test_inputs_embeds(self): pass def test_model_get_set_embeddings(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_video_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_video_classification(*config_and_inputs) @slow def test_model_from_pretrained(self): model_name = "facebook/timesformer-base-finetuned-k400" model = TimesformerModel.from_pretrained(model_name) self.assertIsNotNone(model) def test_attention_outputs(self): if not self.has_attentions: self.skipTest(reason="Model has no attentions") else: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True for model_class in self.all_model_classes: seq_len = self.model_tester.seq_length num_frames = 
self.model_tester.num_frames inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) # attentions has shape (batch_size x num_frames) x num_heads x (num_patches per frame + 1) x (num_patches per frame + 1) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, seq_len // num_frames + 1, seq_len // num_frames + 1], ) out_len = len(outputs) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) self.assertEqual(out_len + 1, len(outputs)) self_attentions = outputs.attentions self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers) # attentions has shape (batch_size x num_frames) x num_heads x (num_patches per frame + 1) x (num_patches per frame + 1) self.assertListEqual( list(self_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, seq_len // num_frames + 1, seq_len // num_frames + 1], ) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_layers = self.model_tester.num_hidden_layers + 1 self.assertEqual(len(hidden_states), expected_num_layers) seq_length = self.model_tester.seq_length self.assertListEqual( list(hidden_states[0].shape[-2:]), [seq_length, self.model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) # We will verify our results on a video of eating spaghetti # Frame indices used: [164 168 172 176 181 185 189 193 198 202 206 210 215 219 223 227] def prepare_video(): file = hf_hub_download( repo_id="hf-internal-testing/spaghetti-video", filename="eating_spaghetti.npy", repo_type="dataset" ) video = np.load(file) return list(video) @require_torch @require_vision class TimesformerModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): # logits were tested with a different mean and std, so we use the same here return ( VideoMAEImageProcessor(image_mean=[0.5, 0.5, 0.5], image_std=[0.5, 0.5, 0.5]) if is_vision_available() else None ) @slow def test_inference_for_video_classification(self): model = 
TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400").to( torch_device ) image_processor = self.default_image_processor video = prepare_video() inputs = image_processor(video[:8], return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) # verify the logits expected_shape = torch.Size((1, 400)) self.assertEqual(outputs.logits.shape, expected_shape) expectations = Expectations( { (None, None): [-0.3016, -0.7713, -0.4205], ("cuda", 8): [-0.3016, -0.7713, -0.4205], } ) expected_slice = torch.tensor(expectations.get_expectation()).to(torch_device) torch.testing.assert_close(outputs.logits[0, :3], expected_slice, rtol=2e-4, atol=2e-4)
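# A minimal inference sketch of the path the integration test above checks: the spaghetti
# video frames are normalized with VideoMAEImageProcessor (same 0.5 mean/std as the test)
# and classified with the Kinetics-400 TimeSformer checkpoint. Reading the top-1 class via
# `model.config.id2label` is an illustrative assumption about how one would interpret the
# (1, 400) logits that the test asserts on.
def _example_timesformer_video_classification():
    import numpy as np
    import torch
    from huggingface_hub import hf_hub_download

    from transformers import TimesformerForVideoClassification, VideoMAEImageProcessor

    file = hf_hub_download(
        repo_id="hf-internal-testing/spaghetti-video", filename="eating_spaghetti.npy", repo_type="dataset"
    )
    video = list(np.load(file))

    image_processor = VideoMAEImageProcessor(image_mean=[0.5, 0.5, 0.5], image_std=[0.5, 0.5, 0.5])
    model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")

    inputs = image_processor(video[:8], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 400), matching the expected_shape in the test

    return model.config.id2label[int(logits.argmax(-1))]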
transformers/tests/models/timesformer/test_modeling_timesformer.py/0
{ "file_path": "transformers/tests/models/timesformer/test_modeling_timesformer.py", "repo_id": "transformers", "token_count": 6095 }
603
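The TimeSformer integration test in the record above downloads the spaghetti clip from the Hub, normalizes it with mean/std 0.5 (to match the reference logits), and checks facebook/timesformer-base-finetuned-k400 on an 8-frame slice. A minimal standalone sketch of that inference path, derived directly from the test (not part of the dataset record):

# Illustrative sketch: video classification with the same checkpoint and clip as the test above.
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import TimesformerForVideoClassification, VideoMAEImageProcessor

file = hf_hub_download(
    repo_id="hf-internal-testing/spaghetti-video", filename="eating_spaghetti.npy", repo_type="dataset"
)
video = list(np.load(file))

# The reference logits were produced with mean/std of 0.5, hence the non-default normalization.
processor = VideoMAEImageProcessor(image_mean=[0.5, 0.5, 0.5], image_std=[0.5, 0.5, 0.5])
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")
model.eval()

inputs = processor(video[:8], return_tensors="pt")  # 8 frames, as in the test
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 400): Kinetics-400 classes

print(model.config.id2label[logits.argmax(-1).item()])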
# Copyright 2022 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch VideoMAE model.""" import copy import tempfile import unittest import numpy as np from huggingface_hub import hf_hub_download from pytest import mark from transformers import VideoMAEConfig from transformers.models.auto import get_values from transformers.testing_utils import ( Expectations, is_flaky, require_flash_attn, require_torch, require_torch_gpu, require_vision, slow, torch_device, ) from transformers.utils import cached_property, check_torch_load_is_safe, is_torch_available, is_vision_available from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, floats_tensor, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from torch import nn from transformers import ( MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING, VideoMAEForPreTraining, VideoMAEForVideoClassification, VideoMAEModel, ) if is_vision_available(): from transformers import VideoMAEImageProcessor class VideoMAEModelTester: def __init__( self, parent, batch_size=13, image_size=10, num_channels=3, patch_size=2, tubelet_size=2, num_frames=2, is_training=True, use_labels=True, hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, type_sequence_label_size=10, initializer_range=0.02, mask_ratio=0.9, scope=None, attn_implementation="eager", ): self.parent = parent self.batch_size = batch_size self.image_size = image_size self.num_channels = num_channels self.patch_size = patch_size self.tubelet_size = tubelet_size self.num_frames = num_frames self.is_training = is_training self.use_labels = use_labels self.hidden_size = hidden_size self.num_hidden_layers = num_hidden_layers self.num_attention_heads = num_attention_heads self.intermediate_size = intermediate_size self.hidden_act = hidden_act self.hidden_dropout_prob = hidden_dropout_prob self.attention_probs_dropout_prob = attention_probs_dropout_prob self.type_sequence_label_size = type_sequence_label_size self.initializer_range = initializer_range self.mask_ratio = mask_ratio self.scope = scope self.attn_implementation = attn_implementation # in VideoMAE, the number of tokens equals num_frames/tubelet_size * num_patches per frame self.num_patches_per_frame = (image_size // patch_size) ** 2 self.seq_length = (num_frames // tubelet_size) * self.num_patches_per_frame # use this variable to define bool_masked_pos self.num_masks = int(mask_ratio * self.seq_length) def prepare_config_and_inputs(self): pixel_values = floats_tensor( [self.batch_size, self.num_frames, self.num_channels, self.image_size, self.image_size] ) labels = None if self.use_labels: labels = ids_tensor([self.batch_size], self.type_sequence_label_size) config = self.get_config() return config, pixel_values, labels def get_config(self): return VideoMAEConfig( image_size=self.image_size, 
patch_size=self.patch_size, num_channels=self.num_channels, num_frames=self.num_frames, tubelet_size=self.tubelet_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, is_decoder=False, initializer_range=self.initializer_range, decoder_hidden_size=self.hidden_size, decoder_intermediate_size=self.intermediate_size, decoder_num_attention_heads=self.num_attention_heads, decoder_num_hidden_layers=self.num_hidden_layers, attn_implementation=self.attn_implementation, ) def create_and_check_model(self, config, pixel_values, labels): model = VideoMAEModel(config=config) model.to(torch_device) model.eval() result = model(pixel_values) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_for_pretraining(self, config, pixel_values, labels): model = VideoMAEForPreTraining(config) model.to(torch_device) model.eval() # important: each video needs to have the same number of masked patches # hence we define a single mask, which we then repeat for each example in the batch mask = torch.ones((self.num_masks,)) mask = torch.cat([mask, torch.zeros(self.seq_length - mask.size(0))]) bool_masked_pos = mask.expand(self.batch_size, -1).bool() result = model(pixel_values, bool_masked_pos) # model only returns predictions for masked patches num_masked_patches = mask.sum().item() decoder_num_labels = 3 * self.tubelet_size * self.patch_size**2 self.parent.assertEqual(result.logits.shape, (self.batch_size, num_masked_patches, decoder_num_labels)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() config, pixel_values, labels = config_and_inputs inputs_dict = {"pixel_values": pixel_values} return config, inputs_dict @require_torch class VideoMAEModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase): """ Here we also overwrite some of the tests of test_modeling_common.py, as VideoMAE does not use input_ids, inputs_embeds, attention_mask and seq_length. 
""" all_model_classes = ( (VideoMAEModel, VideoMAEForPreTraining, VideoMAEForVideoClassification) if is_torch_available() else () ) pipeline_model_mapping = ( {"feature-extraction": VideoMAEModel, "video-classification": VideoMAEForVideoClassification} if is_torch_available() else {} ) # Addition keys that are required for forward, used in tests where we manipulate and create new input dict from scratch additional_model_inputs = ["bool_masked_pos"] test_pruning = False test_torchscript = False test_resize_embeddings = False test_head_masking = False test_torch_exportable = True def setUp(self): self.model_tester = VideoMAEModelTester(self) self.config_tester = ConfigTester(self, config_class=VideoMAEConfig, has_text_modality=False, hidden_size=37) def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = copy.deepcopy(inputs_dict) if model_class == VideoMAEForPreTraining: # important: each video needs to have the same number of masked patches # hence we define a single mask, which we then repeat for each example in the batch mask = torch.ones((self.model_tester.num_masks,)) mask = torch.cat([mask, torch.zeros(self.model_tester.seq_length - mask.size(0))]) batch_size = inputs_dict["pixel_values"].shape[0] bool_masked_pos = mask.expand(batch_size, -1).bool() inputs_dict["bool_masked_pos"] = bool_masked_pos.to(torch_device) if return_labels: if model_class in [ *get_values(MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING), ]: inputs_dict["labels"] = torch.zeros( self.model_tester.batch_size, dtype=torch.long, device=torch_device ) return inputs_dict def test_config(self): self.config_tester.run_common_tests() @unittest.skip(reason="VideoMAE does not use inputs_embeds") def test_inputs_embeds(self): pass def test_model_get_set_embeddings(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) self.assertIsInstance(model.get_input_embeddings(), (nn.Module)) x = model.get_output_embeddings() self.assertTrue(x is None or isinstance(x, nn.Linear)) def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_pretraining(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_pretraining(*config_and_inputs) @slow def test_model_from_pretrained(self): model_name = "MCG-NJU/videomae-base" model = VideoMAEModel.from_pretrained(model_name) self.assertIsNotNone(model) def test_attention_outputs(self): if not self.has_attentions: self.skipTest(reason="Model does not have attentions") else: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True for model_class in self.all_model_classes: num_visible_patches = self.model_tester.seq_length - self.model_tester.num_masks seq_len = ( num_visible_patches if model_class == VideoMAEForPreTraining else self.model_tester.seq_length ) inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False config.return_dict = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) # check that output_attentions also work using config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) 
model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) attentions = outputs.attentions self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, seq_len, seq_len], ) out_len = len(outputs) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = True model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) self.assertEqual(out_len + 1, len(outputs)) self_attentions = outputs.attentions self.assertEqual(len(self_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(self_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, seq_len, seq_len], ) def test_hidden_states_output(self): def check_hidden_states_output(inputs_dict, config, model_class): model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): outputs = model(**self._prepare_for_class(inputs_dict, model_class)) hidden_states = outputs.hidden_states expected_num_layers = self.model_tester.num_hidden_layers + 1 self.assertEqual(len(hidden_states), expected_num_layers) num_visible_patches = self.model_tester.seq_length - self.model_tester.num_masks seq_length = num_visible_patches if model_class == VideoMAEForPreTraining else self.model_tester.seq_length self.assertListEqual( list(hidden_states[0].shape[-2:]), [seq_length, self.model_tester.hidden_size], ) config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: inputs_dict["output_hidden_states"] = True check_hidden_states_output(inputs_dict, config, model_class) # check that output_hidden_states also work using config del inputs_dict["output_hidden_states"] config.output_hidden_states = True check_hidden_states_output(inputs_dict, config, model_class) @require_flash_attn @require_torch_gpu @mark.flash_attn_test @slow @is_flaky() def test_flash_attn_2_inference_equivalence(self): if not self.has_attentions: self.skipTest(reason="Model architecture does not support attentions") for model_class in self.all_model_classes: if not model_class._supports_flash_attn: self.skipTest(f"{model_class.__name__} does not support Flash Attention 2") config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() inputs_dict = self._prepare_for_class(inputs_dict, model_class) inputs_dict["pixel_values"] = inputs_dict["pixel_values"].to(torch.bfloat16) model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model_fa = model_class.from_pretrained( tmpdirname, dtype=torch.bfloat16, attn_implementation="flash_attention_2" ) model_fa.to(torch_device) model = model_class.from_pretrained(tmpdirname, dtype=torch.bfloat16) model.to(torch_device) outputs = model(**inputs_dict, output_hidden_states=True) outputs_fa = model_fa(**inputs_dict, output_hidden_states=True) logits = ( outputs.hidden_states[-1] if not model.config.is_encoder_decoder else outputs.decoder_hidden_states[-1] ) logits_fa = ( outputs_fa.hidden_states[-1] if not model.config.is_encoder_decoder else outputs_fa.decoder_hidden_states[-1] ) assert torch.allclose(logits_fa, logits, atol=4e-2, rtol=4e-2) # check with inference + dropout model.train() _ = model_fa(**inputs_dict) @unittest.skip("Not applicable 
for VideoMAE") def test_flash_attn_2_inference_equivalence_right_padding(self): pass # We will verify our results on a video of eating spaghetti # Frame indices used: [164 168 172 176 181 185 189 193 198 202 206 210 215 219 223 227] def prepare_video(): file = hf_hub_download( repo_id="hf-internal-testing/spaghetti-video", filename="eating_spaghetti.npy", repo_type="dataset" ) video = np.load(file) return list(video) @require_torch @require_vision class VideoMAEModelIntegrationTest(unittest.TestCase): @cached_property def default_image_processor(self): # logits were tested with a different mean and std, so we use the same here return ( VideoMAEImageProcessor(image_mean=[0.5, 0.5, 0.5], image_std=[0.5, 0.5, 0.5]) if is_vision_available() else None ) @slow def test_inference_for_video_classification(self): model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics").to( torch_device ) image_processor = self.default_image_processor video = prepare_video() inputs = image_processor(video, return_tensors="pt").to(torch_device) # forward pass with torch.no_grad(): outputs = model(**inputs) # verify the logits expected_shape = torch.Size((1, 400)) self.assertEqual(outputs.logits.shape, expected_shape) expectations = Expectations( { (None, None): [0.3669, -0.0688, -0.2421], ("cuda", 8): [0.3668, -0.0690, -0.2421], } ) expected_slice = torch.tensor(expectations.get_expectation()).to(torch_device) torch.testing.assert_close(outputs.logits[0, :3], expected_slice, rtol=2e-4, atol=2e-4) @slow def test_inference_for_pretraining(self): model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base-short").to(torch_device) image_processor = self.default_image_processor video = prepare_video() inputs = image_processor(video, return_tensors="pt").to(torch_device) # add boolean mask, indicating which patches to mask local_path = hf_hub_download(repo_id="hf-internal-testing/bool-masked-pos", filename="bool_masked_pos.pt") check_torch_load_is_safe() inputs["bool_masked_pos"] = torch.load(local_path, weights_only=True) # forward pass with torch.no_grad(): outputs = model(**inputs) # verify the logits expected_shape = torch.Size([1, 1408, 1536]) expected_slice = torch.tensor( [[0.7994, 0.9612, 0.8508], [0.7401, 0.8958, 0.8302], [0.5862, 0.7468, 0.7325]], device=torch_device ) self.assertEqual(outputs.logits.shape, expected_shape) torch.testing.assert_close(outputs.logits[0, :3, :3], expected_slice, rtol=1e-4, atol=1e-4) # verify the loss (`config.norm_pix_loss` = `True`) expected_loss = torch.tensor([0.5142], device=torch_device) torch.testing.assert_close(outputs.loss, expected_loss, rtol=1e-4, atol=1e-4) # verify the loss (`config.norm_pix_loss` = `False`) model = VideoMAEForPreTraining.from_pretrained("MCG-NJU/videomae-base-short", norm_pix_loss=False).to( torch_device ) with torch.no_grad(): outputs = model(**inputs) expected_loss = torch.tensor(torch.tensor([0.6469]), device=torch_device) torch.testing.assert_close(outputs.loss, expected_loss, rtol=1e-4, atol=1e-4)
transformers/tests/models/videomae/test_modeling_videomae.py/0
{ "file_path": "transformers/tests/models/videomae/test_modeling_videomae.py", "repo_id": "transformers", "token_count": 8886 }
604
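The VideoMAE tests above build `bool_masked_pos` by defining one mask with a fixed number of masked patches and repeating it across the batch, because `VideoMAEForPreTraining` only returns predictions for masked patches and needs the same count per video. A small sketch of that recipe with a deliberately tiny config; the sizes below are illustrative (mirroring the model tester), not a real checkpoint:

# Illustrative sketch: shared boolean mask for VideoMAE pre-training, tiny config for speed.
import torch
from transformers import VideoMAEConfig, VideoMAEForPreTraining

config = VideoMAEConfig(
    image_size=16, patch_size=2, num_frames=4, tubelet_size=2,
    hidden_size=32, num_hidden_layers=2, num_attention_heads=4, intermediate_size=64,
    decoder_hidden_size=32, decoder_num_hidden_layers=2,
    decoder_num_attention_heads=4, decoder_intermediate_size=64,
)
model = VideoMAEForPreTraining(config).eval()

# number of tokens = num_frames / tubelet_size * patches per frame
num_patches_per_frame = (config.image_size // config.patch_size) ** 2
seq_length = (config.num_frames // config.tubelet_size) * num_patches_per_frame
num_masks = int(0.9 * seq_length)  # mask_ratio = 0.9, as in the tester

# one mask, repeated for every example in the batch
mask = torch.cat([torch.ones(num_masks), torch.zeros(seq_length - num_masks)])
batch_size = 2
bool_masked_pos = mask.expand(batch_size, -1).bool()

pixel_values = torch.randn(
    batch_size, config.num_frames, config.num_channels, config.image_size, config.image_size
)
with torch.no_grad():
    outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)

# logits cover only the masked patches: (batch, num_masks, 3 * tubelet_size * patch_size**2)
print(outputs.logits.shape)  # torch.Size([2, 115, 24]) with the sizes above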
# Copyright 2025 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Testing suite for the PyTorch Xcodec model.""" import inspect import json import math import os import tempfile import unittest from pathlib import Path import numpy as np from datasets import Audio, load_dataset from parameterized import parameterized from pytest import mark from tests.test_configuration_common import ConfigTester from tests.test_modeling_common import ModelTesterMixin, _config_zero_init, floats_tensor, ids_tensor from transformers import AutoFeatureExtractor, XcodecConfig from transformers.testing_utils import ( is_flaky, is_torch_available, require_flash_attn, require_torch, require_torch_gpu, slow, torch_device, ) if is_torch_available(): import torch from transformers import XcodecModel @require_torch class XcodecModelTester: def __init__( self, parent, batch_size=4, num_channels=1, sample_rate=16000, codebook_size=1024, num_samples=400, is_training=False, ): self.parent = parent self.batch_size = batch_size self.num_channels = num_channels self.sample_rate = sample_rate self.codebook_size = codebook_size self.is_training = is_training self.num_samples = num_samples def prepare_config_and_inputs(self): config = self.get_config() inputs_dict = { "input_values": floats_tensor([self.batch_size, self.num_channels, self.num_samples], scale=1.0) } return config, inputs_dict def prepare_config_and_inputs_for_common(self): config, inputs_dict = self.prepare_config_and_inputs() return config, inputs_dict def prepare_config_and_inputs_for_model_class(self, model_class): config, inputs_dict = self.prepare_config_and_inputs() codes_length = math.ceil(self.num_samples / config.hop_length) inputs_dict["audio_codes"] = ids_tensor( [self.batch_size, config.num_quantizers, codes_length], config.codebook_size ) return config, inputs_dict def get_config(self): return XcodecConfig( sample_rate=self.sample_rate, audio_channels=self.num_channels, codebook_size=self.codebook_size, ) def create_and_check_model_forward(self, config, inputs_dict): model = XcodecModel(config=config).to(torch_device).eval() result = model(input_values=inputs_dict["input_values"]) self.parent.assertEqual(result.audio_values.shape, (self.batch_size, self.num_channels, self.num_samples)) @require_torch class XcodecModelTest(ModelTesterMixin, unittest.TestCase): all_model_classes = (XcodecModel,) if is_torch_available() else () is_encoder_decoder = True test_pruning = False test_headmasking = False test_resize_embeddings = False test_torchscript = False test_can_init_all_missing_weights = False def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): # model does not support returning hidden states inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels) if "output_attentions" in inputs_dict: inputs_dict.pop("output_attentions") if "output_hidden_states" in inputs_dict: inputs_dict.pop("output_hidden_states") return inputs_dict def setUp(self): 
self.model_tester = XcodecModelTester(self) self.config_tester = ConfigTester( self, config_class=XcodecConfig, common_properties=[], has_text_modality=False ) def test_config(self): self.config_tester.run_common_tests() def test_model_forward(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model_forward(*config_and_inputs) def test_forward_signature(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) signature = inspect.signature(model.forward) # signature.parameters is an OrderedDict => so arg_names order is deterministic arg_names = [*signature.parameters.keys()] expected_arg_names = ["input_values", "audio_codes", "bandwidth", "return_dict"] self.assertListEqual(arg_names[: len(expected_arg_names)], expected_arg_names) def test_gradient_checkpointing_backward_compatibility(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: if not model_class.supports_gradient_checkpointing: continue config.text_encoder.gradient_checkpointing = True config.audio_encoder.gradient_checkpointing = True config.decoder.gradient_checkpointing = True model = model_class(config) self.assertTrue(model.is_gradient_checkpointing) @unittest.skip("XcodecModel cannot be tested with meta device") def test_can_load_with_meta_device_context_manager(self): pass @unittest.skip(reason="We cannot configure to output a smaller model.") def test_model_is_small(self): pass @unittest.skip(reason="The XcodecModel does not have `inputs_embeds` logics") def test_inputs_embeds(self): pass @unittest.skip(reason="The XcodecModel does not have `inputs_embeds` logics") def test_model_get_set_embeddings(self): pass @unittest.skip(reason="The XcodecModel does not have the usual `attention` logic") def test_retain_grad_hidden_states_attentions(self): pass @unittest.skip(reason="The XcodecModel does not have the usual `attention` logic") def test_torchscript_output_attentions(self): pass @unittest.skip(reason="The XcodecModel does not have the usual `hidden_states` logic") def test_torchscript_output_hidden_state(self): pass # Copied from transformers.tests.encodec.test_modeling_encodec.XcodecModelTest._create_and_check_torchscript def _create_and_check_torchscript(self, config, inputs_dict): if not self.test_torchscript: self.skipTest(reason="test_torchscript is set to False") configs_no_init = _config_zero_init(config) # To be sure we have no Nan configs_no_init.torchscript = True configs_no_init.return_dict = False for model_class in self.all_model_classes: model = model_class(config=configs_no_init) model.to(torch_device) model.eval() inputs = self._prepare_for_class(inputs_dict, model_class) main_input_name = model_class.main_input_name try: main_input = inputs[main_input_name] model(main_input) traced_model = torch.jit.trace(model, main_input) except RuntimeError: self.fail("Couldn't trace module.") with tempfile.TemporaryDirectory() as tmp_dir_name: pt_file_name = os.path.join(tmp_dir_name, "traced_model.pt") try: torch.jit.save(traced_model, pt_file_name) except Exception: self.fail("Couldn't save module.") try: loaded_model = torch.jit.load(pt_file_name) except Exception: self.fail("Couldn't load module.") model.to(torch_device) model.eval() loaded_model.to(torch_device) loaded_model.eval() model_state_dict = model.state_dict() loaded_model_state_dict = loaded_model.state_dict() non_persistent_buffers 
= {} for key in loaded_model_state_dict.keys(): if key not in model_state_dict.keys(): non_persistent_buffers[key] = loaded_model_state_dict[key] loaded_model_state_dict = { key: value for key, value in loaded_model_state_dict.items() if key not in non_persistent_buffers } self.assertEqual(set(model_state_dict.keys()), set(loaded_model_state_dict.keys())) model_buffers = list(model.buffers()) for non_persistent_buffer in non_persistent_buffers.values(): found_buffer = False for i, model_buffer in enumerate(model_buffers): if torch.equal(non_persistent_buffer, model_buffer): found_buffer = True break self.assertTrue(found_buffer) model_buffers.pop(i) model_buffers = list(model.buffers()) for non_persistent_buffer in non_persistent_buffers.values(): found_buffer = False for i, model_buffer in enumerate(model_buffers): if torch.equal(non_persistent_buffer, model_buffer): found_buffer = True break self.assertTrue(found_buffer) model_buffers.pop(i) models_equal = True for layer_name, p1 in model_state_dict.items(): if layer_name in loaded_model_state_dict: p2 = loaded_model_state_dict[layer_name] if p1.data.ne(p2.data).sum() > 0: models_equal = False self.assertTrue(models_equal) # Avoid memory leak. Without this, each call increase RAM usage by ~20MB. # (Even with this call, there are still memory leak by ~0.04MB) self.clear_torch_jit_class_registry() @unittest.skip(reason="The XcodecModel does not have the usual `attention` logic") def test_attention_outputs(self): pass @unittest.skip(reason="The XcodecModel does not have the usual `hidden_states` logic") def test_hidden_states_output(self): pass # Copied from transformers.tests.encodec.test_modeling_encodecEncodecModelTest.test_determinism def test_determinism(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() def check_determinism(first, second): # outputs are not tensors but list (since each sequence don't have the same frame_length) out_1 = first.cpu().numpy() out_2 = second.cpu().numpy() out_1 = out_1[~np.isnan(out_1)] out_2 = out_2[~np.isnan(out_2)] max_diff = np.amax(np.abs(out_1 - out_2)) self.assertLessEqual(max_diff, 1e-5) for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): first = model(**self._prepare_for_class(inputs_dict, model_class))[0] second = model(**self._prepare_for_class(inputs_dict, model_class))[0] if isinstance(first, tuple) and isinstance(second, tuple): for tensor1, tensor2 in zip(first, second): check_determinism(tensor1, tensor2) else: check_determinism(first, second) # Copied from transformers.tests.encodec.test_modeling_encodecEncodecModelTest.test_model_outputs_equivalence def test_model_outputs_equivalence(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() def set_nan_tensor_to_zero(t): t[t != t] = 0 return t def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}): with torch.no_grad(): tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs) dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs) self.assertTrue(isinstance(tuple_output, tuple)) self.assertTrue(isinstance(dict_output, dict)) for tuple_value, dict_value in zip(tuple_output, dict_output.values()): self.assertTrue( torch.allclose( set_nan_tensor_to_zero(tuple_value), set_nan_tensor_to_zero(dict_value), atol=1e-5 ), msg=( "Tuple and dict output are not equal. Difference:" f" {torch.max(torch.abs(tuple_value - dict_value))}. 
Tuple has `nan`:" f" {torch.isnan(tuple_value).any()} and `inf`: {torch.isinf(tuple_value)}. Dict has" f" `nan`: {torch.isnan(dict_value).any()} and `inf`: {torch.isinf(dict_value)}." ), ) for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs) def test_initialization(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) for model_class in self.all_model_classes: model = model_class(config=configs_no_init) for name, param in model.named_parameters(): # skipping the parametrizations original0 tensor if name == "semantic_model.encoder.pos_conv_embed.conv.parametrizations.weight.original0": continue uniform_init_parms = ["conv"] if param.requires_grad: if any(x in name for x in uniform_init_parms): self.assertTrue( -1.0 <= ((param.data.mean() * 1e9).round() / 1e9).item() <= 1.0, msg=f"Parameter {name} of {model_class.__name__} seems not properly initialized", ) @require_flash_attn @require_torch_gpu @mark.flash_attn_test @slow @is_flaky() def test_flash_attn_2_inference_equivalence(self): for model_class in self.all_model_classes: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() model = model_class(config) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname) model_fa = model_class.from_pretrained( tmpdirname, dtype=torch.bfloat16, attn_implementation="flash_attention_2" ) model_fa.to(torch_device) model = model_class.from_pretrained(tmpdirname, dtype=torch.bfloat16) model.to(torch_device) dummy_input = inputs_dict[model.main_input_name][:1] if dummy_input.dtype in [torch.float32, torch.float16]: dummy_input = dummy_input.to(torch.bfloat16) outputs = model(dummy_input) outputs_fa = model_fa(dummy_input) logits = outputs[1] logits_fa = outputs_fa[1] assert torch.allclose(logits_fa, logits, atol=4e-2, rtol=4e-2) @unittest.skip(reason="The XcodecModel does not support right padding") def test_flash_attn_2_inference_equivalence_right_padding(self): pass @unittest.skip(reason="The XcodecModel does not have support dynamic compile yet") def test_sdpa_can_compile_dynamic(self): pass # Copied from transformers.tests.encodec.test_modeling_encodec.normalize def normalize(arr): norm = np.linalg.norm(arr) normalized_arr = arr / norm return normalized_arr # Copied from transformers.tests.encodec.test_modeling_encodec.compute_rmse def compute_rmse(arr1, arr2): arr1_np = arr1.cpu().numpy().squeeze() arr2_np = arr2.cpu().numpy().squeeze() max_length = min(arr1.shape[-1], arr2.shape[-1]) arr1_np = arr1_np[..., :max_length] arr2_np = arr2_np[..., :max_length] arr1_normalized = normalize(arr1_np) arr2_normalized = normalize(arr2_np) return np.sqrt(((arr1_normalized - arr2_normalized) ** 2).mean()) """ Integration tests for XCodec Code for reproducing expected outputs can be found here: https://gist.github.com/ebezzam/cdaf8c223e59e7677b2ea6bc2dc8230b One reason for higher tolerances is because of different implementation of `Snake1d` within Transformer version DAC See here: https://github.com/huggingface/transformers/pull/39793#issue-3277407384 """ RESULTS_PATH = Path(__file__).parent.parent.parent / "fixtures/xcodec/integration_tests.json" with open(RESULTS_PATH, "r") as f: raw_data = json.load(f) # convert dicts into tuples ordered to match test 
args EXPECTED_OUTPUTS_JSON = [ ( f"{d['repo_id']}_{d['bandwidth']}", d["repo_id"], d["bandwidth"], d["codes"], d["decoded"], d["codec_error"], d["codec_tol"], d["dec_tol"], ) for d in raw_data ] @slow @require_torch class XcodecIntegrationTest(unittest.TestCase): @parameterized.expand(EXPECTED_OUTPUTS_JSON) def test_integration( self, test_name, repo_id, bandwidth, exp_codes, exp_decoded, exp_codec_err, codec_tol, dec_tol ): # load model model = XcodecModel.from_pretrained(repo_id).to(torch_device).eval() feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id) # load audio example librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") librispeech_dummy = librispeech_dummy.cast_column( "audio", Audio(sampling_rate=feature_extractor.sampling_rate) ) audio_array = librispeech_dummy[0]["audio"]["array"] inputs = feature_extractor( raw_audio=audio_array, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt" ).to(torch_device) x = inputs["input_values"] with torch.no_grad(): ENC_TOL = 0 audio_codes = model.encode(x, bandwidth=bandwidth, return_dict=False) if exp_codes is not None: exp_codes = torch.tensor(exp_codes).to(torch_device) torch.testing.assert_close( audio_codes[..., : exp_codes.shape[-1]], exp_codes, rtol=ENC_TOL, atol=ENC_TOL, ) # dec_tol = 1e-5 # increased to 1e-4 for passing on 4 kbps input_values_dec = model.decode(audio_codes).audio_values if exp_decoded is not None: exp_decoded = torch.tensor(exp_decoded).to(torch_device) torch.testing.assert_close( input_values_dec[..., : exp_decoded.shape[-1]], exp_decoded, rtol=dec_tol, atol=dec_tol, ) # compute codec error codec_err = compute_rmse(input_values_dec, x) torch.testing.assert_close(codec_err, exp_codec_err, rtol=codec_tol, atol=codec_tol) # make sure forward and decode gives same result audio_values_enc_dec = model(x, bandwidth=bandwidth).audio_values torch.testing.assert_close(input_values_dec, audio_values_enc_dec, rtol=1e-6, atol=1e-6)
transformers/tests/models/xcodec/test_modeling_xcodec.py/0
{ "file_path": "transformers/tests/models/xcodec/test_modeling_xcodec.py", "repo_id": "transformers", "token_count": 9339 }
605
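The Xcodec integration test above round-trips audio through `encode`/`decode` and scores the reconstruction with an RMSE over L2-normalized waveforms (the `compute_rmse` helper). A hedged sketch of that round trip; the checkpoint id and the bandwidth value are placeholders and assumptions, since the real ones live in the JSON fixture the test reads:

# Illustrative sketch: Xcodec encode/decode round trip plus the codec-error metric used above.
# "<xcodec-checkpoint>" and bandwidth=4 are placeholders, not values taken from the fixture.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, XcodecModel

repo_id = "<xcodec-checkpoint>"  # hypothetical repo id
model = XcodecModel.from_pretrained(repo_id).eval()
feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)

# one second of dummy audio at the extractor's sampling rate
audio = np.random.randn(feature_extractor.sampling_rate).astype(np.float32)
inputs = feature_extractor(raw_audio=audio, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")
x = inputs["input_values"]

with torch.no_grad():
    audio_codes = model.encode(x, bandwidth=4, return_dict=False)  # discrete codes
    reconstruction = model.decode(audio_codes).audio_values

def codec_error(a, b):
    # RMSE between L2-normalized waveforms, truncated to the shorter length (as in compute_rmse)
    a, b = a.squeeze().cpu().numpy(), b.squeeze().cpu().numpy()
    n = min(a.shape[-1], b.shape[-1])
    a, b = a[..., :n], b[..., :n]
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return np.sqrt(((a - b) ** 2).mean())

print(codec_error(x, reconstruction))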
# Copyright 2025 NXAI GmbH. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from parameterized import parameterized from transformers import AutoTokenizer, is_torch_available, xLSTMConfig from transformers.testing_utils import require_read_token, require_torch, require_torch_gpu, slow, torch_device from ...generation.test_utils import GenerationTesterMixin from ...test_configuration_common import ConfigTester from ...test_modeling_common import ModelTesterMixin, ids_tensor from ...test_pipeline_mixin import PipelineTesterMixin if is_torch_available(): import torch from transformers import ( xLSTMForCausalLM, xLSTMModel, ) from transformers.models.xlstm.modeling_xlstm import xLSTMBlock, xLSTMCache class xLSTMModelTester: def __init__( self, parent, batch_size=13, num_heads=2, seq_length=7, is_training=True, use_labels=True, vocab_size=99, hidden_size=128, qk_dim_factor=0.5, v_dim_factor=1.0, num_hidden_layers=2, max_position_embeddings=512, type_vocab_size=16, type_sequence_label_size=2, num_labels=3, num_choices=4, scope=None, chunkwise_kernel="chunkwise--native_autograd", sequence_kernel="native_sequence__native", step_kernel="native", tie_word_embeddings=False, ): self.parent = parent self.num_heads = num_heads self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_labels = use_labels self.vocab_size = vocab_size self.num_hidden_layers = num_hidden_layers self.hidden_size = hidden_size self.qk_dim_factor = qk_dim_factor self.v_dim_factor = v_dim_factor self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.type_sequence_label_size = type_sequence_label_size self.num_labels = num_labels self.num_choices = num_choices self.scope = scope self.bos_token_id = vocab_size - 1 self.eos_token_id = vocab_size - 1 self.pad_token_id = vocab_size - 1 self.chunkwise_kernel = chunkwise_kernel self.sequence_kernel = sequence_kernel self.step_kernel = step_kernel self.tie_word_embeddings = tie_word_embeddings def get_large_model_config(self): cfg = xLSTMConfig.from_pretrained("NX-AI/xLSTM-7b") return cfg def prepare_config_and_inputs(self, scale_attn_by_inverse_layer_idx=False, reorder_and_upcast_attn=False): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = self.get_config() return ( config, input_ids, None, sequence_labels, token_labels, choice_labels, ) def get_config(self): cfg = xLSTMConfig( num_heads=self.num_heads, vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, qk_dim_factor=self.qk_dim_factor, v_dim_factor=self.v_dim_factor, n_positions=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, 
use_cache=True, bos_token_id=self.bos_token_id, eos_token_id=self.eos_token_id, pad_token_id=self.pad_token_id, chunkwise_kernel=self.chunkwise_kernel, sequence_kernel=self.sequence_kernel, step_kernel=self.step_kernel, tie_word_embeddings=self.tie_word_embeddings, ) # this is needed for compatibility with generic tests # cfg.hidden_size = cfg.embedding_dim # cfg.num_hidden_layers = cfg.num_blocks return cfg def prepare_config_and_inputs_for_common(self): ( config, input_ids, _, sequence_labels, token_labels, choice_labels, ) = self.prepare_config_and_inputs() inputs_dict = {"input_ids": input_ids} return config, inputs_dict @require_torch class xLSTMModelTest(ModelTesterMixin, GenerationTesterMixin, PipelineTesterMixin, unittest.TestCase): all_model_classes = (xLSTMModel, xLSTMForCausalLM) if is_torch_available() else () all_generative_model_classes = (xLSTMForCausalLM,) if is_torch_available() else () has_attentions = False # xLSTM does not support attentions fx_compatible = False test_torchscript = False test_model_parallel = False test_pruning = False test_head_masking = False # xLSTM does not have attention heads pipeline_model_mapping = ( {"feature-extraction": xLSTMModel, "text-generation": xLSTMForCausalLM} if is_torch_available() else {} ) def setUp(self): self.model_tester = xLSTMModelTester(self) self.config_tester = ConfigTester( self, config_class=xLSTMConfig, n_embd=37, common_properties=["hidden_size", "num_hidden_layers"] ) def test_initialization(self): config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config=config) for name, param in model.named_parameters(): if "D" in name: if param.requires_grad: # check if it's a ones like self.assertTrue(torch.allclose(param.data, torch.ones_like(param.data), atol=1e-5, rtol=1e-5)) @unittest.skip(reason="xLSTM cache slicing test case is an edge case") def test_generate_without_input_ids(self): pass @unittest.skip(reason="xLSTM cache slicing test case is an edge case") @parameterized.expand([("greedy", 1), ("beam search", 2)]) def test_generate_from_inputs_embeds(self, _, num_beams): pass @unittest.skip(reason="xLSTM cache slicing test case is an edge case") def test_greedy_generate_dict_outputs_use_cache(self): pass @unittest.skip(reason="xLSTM cache slicing is interacting with beam search") def test_beam_search_generate_dict_outputs_use_cache(self): pass @unittest.skip(reason="xLSTM cache is not iterable") def test_multi_gpu_data_parallel_forward(self): pass def test_model_outputs_equivalence(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}): with torch.no_grad(): tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs) dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple() def recursive_check(tuple_object, dict_object): if isinstance(tuple_object, xLSTMCache): recursive_check(tuple_object.rnn_state, dict_object.rnn_state) elif isinstance(tuple_object, (list, tuple)): for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object): recursive_check(tuple_iterable_value, dict_iterable_value) elif isinstance(tuple_object, dict): for tuple_iterable_value, dict_iterable_value in zip( tuple_object.values(), dict_object.values() ): recursive_check(tuple_iterable_value, dict_iterable_value) elif tuple_object is None: return else: self.assertTrue( torch.allclose(tuple_object, 
dict_object, atol=1e-5), msg=( "Tuple and dict output are not equal. Difference:" f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:" f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has" f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}." ), ) recursive_check(tuple_output, dict_output) for model_class in self.all_model_classes: model = model_class(config) model.to(torch_device) model.eval() tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs) tuple_inputs = self._prepare_for_class(inputs_dict, model_class) dict_inputs = self._prepare_for_class(inputs_dict, model_class) check_equivalence(model, tuple_inputs, dict_inputs, {"output_hidden_states": True}) tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) check_equivalence(model, tuple_inputs, dict_inputs, {"output_hidden_states": True}) @require_torch @slow @require_read_token @unittest.skip("Model is fully broken currently") class xLSTMIntegrationTest(unittest.TestCase): def setUp(self): self.model_id = "NX-AI/xLSTM-7b" self.tokenizer = AutoTokenizer.from_pretrained(self.model_id, legacy=False) self.prompt = ("[INST]Write a hello world program in C++.",) def test_simple_generate(self): """ Simple generate test to avoid regressions. Note: state-spaces (cuda) implementation and pure torch implementation have irreconciliable differences as of now, which will cause this test to fail in an environment with state-spaces installed. """ tokenizer = self.tokenizer tokenizer.pad_token_id = tokenizer.eos_token_id model = xLSTMForCausalLM.from_pretrained(self.model_id, dtype=torch.bfloat16, device_map=torch_device) input_ids = tokenizer("[INST]Write a hello world program in C++.[/INST]", return_tensors="pt")["input_ids"].to( torch_device ) out = model.generate(input_ids, do_sample=False, use_cache=True, max_new_tokens=30) output_sentence = tokenizer.decode(out[0]) ground_truth_sentence = """<s>[INST]Write a hello world program in C++.[/INST] Sure, here is a simple "Hello, World!" program in C++:\n\n```cpp\n#include <iostream>\n\n""" self.assertEqual(output_sentence, ground_truth_sentence) def test_batched_equivalence_with_cache(self): """ Verifies that batched generation matches individual generation. Important because of the specific caching mechanism + statefulness of the xLSTM model. Depending on precision and devices, differences can be observed from generation to generation. 
""" tokenizer = self.tokenizer prompt = [ "[INST]Write C#.[/INST]", "[INST]Write a hello world in C++.[/INST]", "[INST] Write a simple Fibonacci number computation function in Rust that does memoization, with comments, in safe Rust.[/INST]", ] model = xLSTMForCausalLM.from_pretrained(self.model_id, dtype=torch.bfloat16, device_map=torch_device) tokenizer.pad_token_id = tokenizer.eos_token_id # batched generation tokenized_prompts = tokenizer(prompt, return_tensors="pt", padding="longest").to(torch_device) batched_gen = model.generate(**tokenized_prompts, max_new_tokens=30, use_cache=True) batched_output = tokenizer.batch_decode(batched_gen, skip_special_tokens=True) # individual generation for index_gen, individual_prompt in enumerate(prompt): inputs = tokenizer(individual_prompt, return_tensors="pt", padding="longest").to(torch_device) individual_gen = model.generate(**inputs, max_new_tokens=30, use_cache=True) individual_output = tokenizer.batch_decode(individual_gen, skip_special_tokens=True)[0] self.assertEqual(individual_output[:100], batched_output[index_gen][:100]) def test_batched_equivalence_without_cache(self): """ Verifies that batched generation matches individual generation without cache. Important because of the specific caching mechanism + statefulness of the xLSTM model. Depending on precision and devices, differences can be observed from generation to generation. """ tokenizer = self.tokenizer prompt = [ "[INST]Write C#.[/INST]", "[INST]Write a hello world in C++.[/INST]", "[INST] Write a simple Fibonacci number computation function in Rust that does memoization, with comments, in safe Rust.[/INST]", ] model = xLSTMForCausalLM.from_pretrained(self.model_id, dtype=torch.bfloat16, device_map=torch_device) tokenizer.pad_token_id = tokenizer.eos_token_id # batched generation tokenized_prompts = tokenizer(prompt, return_tensors="pt", padding="longest").to(torch_device) batched_gen = model.generate(**tokenized_prompts, max_new_tokens=30, use_cache=True) batched_output = tokenizer.batch_decode(batched_gen, skip_special_tokens=True) # individual generation for index_gen, individual_prompt in enumerate(prompt): inputs = tokenizer(individual_prompt, return_tensors="pt", padding="longest").to(torch_device) individual_gen = model.generate(**inputs, max_new_tokens=30, use_cache=True) individual_output = tokenizer.batch_decode(individual_gen, skip_special_tokens=True)[0] self.assertEqual(individual_output[:100], batched_output[index_gen][:100]) @require_torch_gpu def test_xlstm_block_train_vs_eval_equivalence(self): # Based on https://github.com/sustcsonglin/flash-linear-attention/issues/63 # Credit to zhixuan-lin B, T, D = 4, 512, 768 dtype = torch.bfloat16 config = xLSTMConfig(num_heads=24, head_dim=64, hidden_size=768, expand=2, n_groups=1) torch.manual_seed(42) with torch.amp.autocast(device_type="cuda", dtype=dtype): with torch.no_grad(): block = xLSTMBlock(config.to_xlstm_block_config()).to("cuda") hidden_states = torch.rand(size=(B, T, D), dtype=dtype, device="cuda") block.train() out_train = block(hidden_states) block.eval() out_eval = block(hidden_states) self.assertTrue(torch.allclose(out_train, out_eval, atol=1e-3))
transformers/tests/models/xlstm/test_modeling_xlstm.py/0
{ "file_path": "transformers/tests/models/xlstm/test_modeling_xlstm.py", "repo_id": "transformers", "token_count": 7272 }
606
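The xLSTM integration tests above (currently skipped as fully broken) exercise greedy generation on NX-AI/xLSTM-7b and compare batched versus individual generation. A minimal sketch of the single-prompt path, derived from test_simple_generate; it assumes access to the gated checkpoint and bf16-capable hardware, and device_map="auto" is a convenience choice rather than what the test does:

# Illustrative sketch: greedy generation with xLSTMForCausalLM, mirroring test_simple_generate above.
import torch
from transformers import AutoTokenizer, xLSTMForCausalLM

model_id = "NX-AI/xLSTM-7b"  # gated checkpoint; requires an authenticated token
tokenizer = AutoTokenizer.from_pretrained(model_id, legacy=False)
tokenizer.pad_token_id = tokenizer.eos_token_id

model = xLSTMForCausalLM.from_pretrained(model_id, dtype=torch.bfloat16, device_map="auto")

prompt = "[INST]Write a hello world program in C++.[/INST]"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)

# use_cache=True exercises the stateful xLSTMCache, which the batched-equivalence tests also cover
out = model.generate(input_ids, do_sample=False, use_cache=True, max_new_tokens=30)
print(tokenizer.decode(out[0]))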
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import tempfile import unittest from transformers import is_torch_available from transformers.testing_utils import require_torch if is_torch_available(): import torch from torch import nn from transformers import ( Adafactor, get_constant_schedule, get_constant_schedule_with_warmup, get_cosine_schedule_with_warmup, get_cosine_with_hard_restarts_schedule_with_warmup, get_inverse_sqrt_schedule, get_linear_schedule_with_warmup, get_polynomial_decay_schedule_with_warmup, get_scheduler, get_wsd_schedule, ) def unwrap_schedule(scheduler, num_steps=10): lrs = [] for _ in range(num_steps): lrs.append(scheduler.get_lr()[0]) scheduler.step() return lrs def unwrap_and_save_reload_schedule(scheduler, num_steps=10): lrs = [] for step in range(num_steps): lrs.append(scheduler.get_lr()[0]) scheduler.step() if step == num_steps // 2: with tempfile.TemporaryDirectory() as tmpdirname: file_name = os.path.join(tmpdirname, "schedule.bin") torch.save(scheduler.state_dict(), file_name) state_dict = torch.load(file_name, weights_only=False) scheduler.load_state_dict(state_dict) return lrs @require_torch class OptimizationTest(unittest.TestCase): def assertListAlmostEqual(self, list1, list2, tol): self.assertEqual(len(list1), len(list2)) for a, b in zip(list1, list2): self.assertAlmostEqual(a, b, delta=tol) def test_adam_w(self): w = torch.tensor([0.1, -0.2, -0.1], requires_grad=True) target = torch.tensor([0.4, 0.2, -0.5]) criterion = nn.MSELoss() # No warmup, constant schedule, no gradient clipping optimizer = torch.optim.AdamW(params=[w], lr=2e-1, weight_decay=0.0) for _ in range(100): loss = criterion(w, target) loss.backward() optimizer.step() w.grad.detach_() # No zero_grad() function on simple tensors. we do it ourselves. w.grad.zero_() self.assertListAlmostEqual(w.tolist(), [0.4, 0.2, -0.5], tol=1e-2) def test_adafactor(self): w = torch.tensor([0.1, -0.2, -0.1], requires_grad=True) target = torch.tensor([0.4, 0.2, -0.5]) criterion = nn.MSELoss() # No warmup, constant schedule, no gradient clipping optimizer = Adafactor( params=[w], lr=1e-2, eps=(1e-30, 1e-3), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, relative_step=False, scale_parameter=False, warmup_init=False, ) for _ in range(1000): loss = criterion(w, target) loss.backward() optimizer.step() w.grad.detach_() # No zero_grad() function on simple tensors. we do it ourselves. 
w.grad.zero_() self.assertListAlmostEqual(w.tolist(), [0.4, 0.2, -0.5], tol=1e-2) @require_torch class ScheduleInitTest(unittest.TestCase): m = nn.Linear(50, 50) if is_torch_available() else None optimizer = torch.optim.AdamW(m.parameters(), lr=10.0) if is_torch_available() else None num_steps = 10 def assertListAlmostEqual(self, list1, list2, tol, msg=None): self.assertEqual(len(list1), len(list2)) for a, b in zip(list1, list2): self.assertAlmostEqual(a, b, delta=tol, msg=msg) def test_schedulers(self): common_kwargs = {"num_warmup_steps": 2, "num_training_steps": 10} # schedulers doct format # function: (sched_args_dict, expected_learning_rates) scheds = { get_constant_schedule: ({}, [10.0] * self.num_steps), get_constant_schedule_with_warmup: ( {"num_warmup_steps": 4}, [0.0, 2.5, 5.0, 7.5, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0], ), get_linear_schedule_with_warmup: ( {**common_kwargs}, [0.0, 5.0, 10.0, 8.75, 7.5, 6.25, 5.0, 3.75, 2.5, 1.25], ), get_cosine_schedule_with_warmup: ( {**common_kwargs}, [0.0, 5.0, 10.0, 9.61, 8.53, 6.91, 5.0, 3.08, 1.46, 0.38], ), get_cosine_with_hard_restarts_schedule_with_warmup: ( {**common_kwargs, "num_cycles": 2}, [0.0, 5.0, 10.0, 8.53, 5.0, 1.46, 10.0, 8.53, 5.0, 1.46], ), get_polynomial_decay_schedule_with_warmup: ( {**common_kwargs, "power": 2.0, "lr_end": 1e-7}, [0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156], ), get_inverse_sqrt_schedule: ( {"num_warmup_steps": 2}, [0.0, 5.0, 10.0, 8.165, 7.071, 6.325, 5.774, 5.345, 5.0, 4.714], ), get_wsd_schedule: ( {**common_kwargs, "num_decay_steps": 2, "min_lr_ratio": 0.0}, [0.0, 5.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 5.0], ), } for scheduler_func, data in scheds.items(): kwargs, expected_learning_rates = data scheduler = scheduler_func(self.optimizer, **kwargs) self.assertEqual(len([scheduler.get_lr()[0]]), 1) lrs_1 = unwrap_schedule(scheduler, self.num_steps) self.assertListAlmostEqual( lrs_1, expected_learning_rates, tol=1e-2, msg=f"failed for {scheduler_func} in normal scheduler", ) scheduler = scheduler_func(self.optimizer, **kwargs) if scheduler_func.__name__ != "get_constant_schedule": LambdaScheduleWrapper.wrap_scheduler(scheduler) # wrap to test picklability of the schedule lrs_2 = unwrap_and_save_reload_schedule(scheduler, self.num_steps) self.assertListEqual(lrs_1, lrs_2, msg=f"failed for {scheduler_func} in save and reload") def test_get_scheduler(self): test_params = [ { "name": "warmup_stable_decay", "optimizer": self.optimizer, "num_warmup_steps": 2, "num_training_steps": 10, "scheduler_specific_kwargs": { "num_decay_steps": 2, "warmup_type": "linear", "decay_type": "linear", }, }, { "name": "warmup_stable_decay", "optimizer": self.optimizer, "num_warmup_steps": 2, "num_training_steps": 10, "scheduler_specific_kwargs": { "num_decay_steps": 2, "warmup_type": "cosine", "decay_type": "cosine", }, }, { "name": "warmup_stable_decay", "optimizer": self.optimizer, "num_warmup_steps": 2, "num_training_steps": 10, "scheduler_specific_kwargs": { "num_decay_steps": 2, "warmup_type": "1-sqrt", "decay_type": "1-sqrt", }, }, {"name": "cosine", "optimizer": self.optimizer, "num_warmup_steps": 2, "num_training_steps": 10}, ] for param in test_params: self.assertTrue(get_scheduler(**param), msg=f"failed for {param['name']} in get_scheduler") class LambdaScheduleWrapper: """See https://github.com/huggingface/transformers/issues/21689""" def __init__(self, fn): self.fn = fn def __call__(self, *args, **kwargs): return self.fn(*args, **kwargs) @classmethod def wrap_scheduler(cls, scheduler): 
scheduler.lr_lambdas = list(map(cls, scheduler.lr_lambdas))
transformers/tests/optimization/test_optimization.py/0
{ "file_path": "transformers/tests/optimization/test_optimization.py", "repo_id": "transformers", "token_count": 4494 }
607
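The scheduler tests above compare the learning-rate trace of each schedule against a hard-coded table, using a dummy optimizer with lr=10.0, 2 warmup steps, and 10 training steps. A short sketch reproducing the linear-warmup/linear-decay trace outside the test harness, with the expected values taken from that table:

# Illustrative sketch: inspecting a linear warmup + linear decay schedule, as in ScheduleInitTest.
import torch
from torch import nn
from transformers import get_linear_schedule_with_warmup

model = nn.Linear(50, 50)
optimizer = torch.optim.AdamW(model.parameters(), lr=10.0)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=2, num_training_steps=10)

lrs = []
for _ in range(10):
    lrs.append(scheduler.get_lr()[0])
    optimizer.step()      # in real training the optimizer steps before the scheduler
    scheduler.step()

# expected trace from the test table:
# [0.0, 5.0, 10.0, 8.75, 7.5, 6.25, 5.0, 3.75, 2.5, 1.25]
print(lrs)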
# Copyright 2023 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest import numpy as np from huggingface_hub.utils import insecure_hashlib from transformers import ( MODEL_FOR_MASK_GENERATION_MAPPING, is_tf_available, is_torch_available, is_vision_available, pipeline, ) from transformers.pipelines import MaskGenerationPipeline from transformers.testing_utils import ( Expectations, is_pipeline_test, nested_simplify, require_torch, require_vision, slow, ) if is_tf_available(): from transformers import TF_MODEL_FOR_MASK_GENERATION_MAPPING else: TF_MODEL_FOR_MASK_GENERATION_MAPPING = None if is_torch_available(): from transformers import MODEL_FOR_MASK_GENERATION_MAPPING else: MODEL_FOR_MASK_GENERATION_MAPPING = None if is_vision_available(): from PIL import Image else: class Image: @staticmethod def open(*args, **kwargs): pass def hashimage(image: Image) -> str: m = insecure_hashlib.md5(image.tobytes()) return m.hexdigest()[:10] def mask_to_test_readable(mask: Image) -> dict: npimg = np.array(mask) shape = npimg.shape return {"hash": hashimage(mask), "shape": shape} @is_pipeline_test @require_vision @require_torch class MaskGenerationPipelineTests(unittest.TestCase): model_mapping = dict(list(MODEL_FOR_MASK_GENERATION_MAPPING.items()) if MODEL_FOR_MASK_GENERATION_MAPPING else []) tf_model_mapping = dict( list(TF_MODEL_FOR_MASK_GENERATION_MAPPING.items()) if TF_MODEL_FOR_MASK_GENERATION_MAPPING else [] ) def get_test_pipeline( self, model, tokenizer=None, image_processor=None, feature_extractor=None, processor=None, dtype="float32", ): image_segmenter = MaskGenerationPipeline( model=model, tokenizer=tokenizer, feature_extractor=feature_extractor, image_processor=image_processor, processor=processor, dtype=dtype, ) return image_segmenter, [ "./tests/fixtures/tests_samples/COCO/000000039769.png", "./tests/fixtures/tests_samples/COCO/000000039769.png", ] @unittest.skip(reason="TODO @Arthur: Implement me") def run_pipeline_test(self, mask_generator, examples): pass @slow @require_torch def test_small_model_pt(self): image_segmenter = pipeline("mask-generation", model="facebook/sam-vit-huge") outputs = image_segmenter("http://images.cocodataset.org/val2017/000000039769.jpg", points_per_batch=256) # Shortening by hashing new_outupt = [] for i, o in enumerate(outputs["masks"]): new_outupt += [{"mask": mask_to_test_readable(o), "scores": outputs["scores"][i]}] # fmt: off last_output = Expectations({ ("cuda", None): {'mask': {'hash': 'b5f47c9191', 'shape': (480, 640)}, 'scores': 0.8871}, ("rocm", (9, 5)): {'mask': {'hash': 'b5f47c9191', 'shape': (480, 640)}, 'scores': 0.8872} }).get_expectation() self.assertEqual( nested_simplify(new_outupt, decimals=4), [ {'mask': {'hash': '115ad19f5f', 'shape': (480, 640)}, 'scores': 1.0444}, {'mask': {'hash': '6affa964c6', 'shape': (480, 640)}, 'scores': 1.021}, {'mask': {'hash': 'dfe28a0388', 'shape': (480, 640)}, 'scores': 1.0167}, {'mask': {'hash': 'c0a5f4a318', 'shape': (480, 640)}, 'scores': 1.0132}, {'mask': {'hash': 
'fe8065c197', 'shape': (480, 640)}, 'scores': 1.0053}, {'mask': {'hash': 'e2d0b7a0b7', 'shape': (480, 640)}, 'scores': 0.9967}, {'mask': {'hash': '453c7844bd', 'shape': (480, 640)}, 'scores': 0.993}, {'mask': {'hash': '3d44f2926d', 'shape': (480, 640)}, 'scores': 0.9909}, {'mask': {'hash': '64033ddc3f', 'shape': (480, 640)}, 'scores': 0.9879}, {'mask': {'hash': '801064ff79', 'shape': (480, 640)}, 'scores': 0.9834}, {'mask': {'hash': '6172f276ef', 'shape': (480, 640)}, 'scores': 0.9716}, {'mask': {'hash': 'b49e60e084', 'shape': (480, 640)}, 'scores': 0.9612}, {'mask': {'hash': 'a811e775fd', 'shape': (480, 640)}, 'scores': 0.9599}, {'mask': {'hash': 'a6a8ebcf4b', 'shape': (480, 640)}, 'scores': 0.9552}, {'mask': {'hash': '9d8257e080', 'shape': (480, 640)}, 'scores': 0.9532}, {'mask': {'hash': '32de6454a8', 'shape': (480, 640)}, 'scores': 0.9516}, {'mask': {'hash': 'af3d4af2c8', 'shape': (480, 640)}, 'scores': 0.9499}, {'mask': {'hash': '3c6db475fb', 'shape': (480, 640)}, 'scores': 0.9483}, {'mask': {'hash': 'c290813fb9', 'shape': (480, 640)}, 'scores': 0.9464}, {'mask': {'hash': 'b6f0b8f606', 'shape': (480, 640)}, 'scores': 0.943}, {'mask': {'hash': '92ce16bfdf', 'shape': (480, 640)}, 'scores': 0.943}, {'mask': {'hash': 'c749b25868', 'shape': (480, 640)}, 'scores': 0.9408}, {'mask': {'hash': 'efb6cab859', 'shape': (480, 640)}, 'scores': 0.9335}, {'mask': {'hash': '1ff2eafb30', 'shape': (480, 640)}, 'scores': 0.9326}, {'mask': {'hash': '788b798e24', 'shape': (480, 640)}, 'scores': 0.9262}, {'mask': {'hash': 'abea804f0e', 'shape': (480, 640)}, 'scores': 0.8999}, {'mask': {'hash': '7b9e8ddb73', 'shape': (480, 640)}, 'scores': 0.8986}, {'mask': {'hash': 'cd24047c8a', 'shape': (480, 640)}, 'scores': 0.8984}, {'mask': {'hash': '6943e6bcbd', 'shape': (480, 640)}, 'scores': 0.8873}, last_output ], ) # fmt: on @require_torch @slow def test_threshold(self): model_id = "facebook/sam-vit-huge" image_segmenter = pipeline("mask-generation", model=model_id) outputs = image_segmenter( "http://images.cocodataset.org/val2017/000000039769.jpg", pred_iou_thresh=1, points_per_batch=256 ) # Shortening by hashing new_outupt = [] for i, o in enumerate(outputs["masks"]): new_outupt += [{"mask": mask_to_test_readable(o), "scores": outputs["scores"][i]}] self.assertEqual( nested_simplify(new_outupt, decimals=4), [ {"mask": {"hash": "115ad19f5f", "shape": (480, 640)}, "scores": 1.0444}, {"mask": {"hash": "6affa964c6", "shape": (480, 640)}, "scores": 1.0210}, {"mask": {"hash": "dfe28a0388", "shape": (480, 640)}, "scores": 1.0167}, {"mask": {"hash": "c0a5f4a318", "shape": (480, 640)}, "scores": 1.0132}, {"mask": {"hash": "fe8065c197", "shape": (480, 640)}, "scores": 1.0053}, ], )
transformers/tests/pipelines/test_pipelines_mask_generation.py/0
{ "file_path": "transformers/tests/pipelines/test_pipelines_mask_generation.py", "repo_id": "transformers", "token_count": 3662 }
608
# Copyright 2021 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import ( MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING, ZeroShotObjectDetectionPipeline, is_vision_available, pipeline, ) from transformers.testing_utils import ( is_pipeline_test, nested_simplify, require_torch, require_vision, slow, ) from .test_pipelines_common import ANY if is_vision_available(): from PIL import Image else: class Image: @staticmethod def open(*args, **kwargs): pass @is_pipeline_test @require_vision @require_torch class ZeroShotObjectDetectionPipelineTests(unittest.TestCase): model_mapping = MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING def get_test_pipeline( self, model, tokenizer=None, image_processor=None, feature_extractor=None, processor=None, dtype="float32", ): object_detector = ZeroShotObjectDetectionPipeline( model=model, processor=processor, tokenizer=tokenizer, image_processor=image_processor, dtype=dtype, ) examples = [ { "image": "./tests/fixtures/tests_samples/COCO/000000039769.png", "candidate_labels": ["cat", "remote", "couch"], } ] return object_detector, examples def run_pipeline_test(self, object_detector, examples): outputs = object_detector(examples[0].get("image"), examples[0].get("candidate_labels"), threshold=0.0) n = len(outputs) self.assertGreater(n, 0) self.assertEqual( outputs, [ { "score": ANY(float), "label": ANY(str), "box": {"xmin": ANY(int), "ymin": ANY(int), "xmax": ANY(int), "ymax": ANY(int)}, } for i in range(n) ], ) @require_torch def test_small_model_pt(self): object_detector = pipeline( "zero-shot-object-detection", model="hf-internal-testing/tiny-random-owlvit-object-detection" ) outputs = object_detector( "./tests/fixtures/tests_samples/COCO/000000039769.png", candidate_labels=["cat", "remote", "couch"], threshold=0.64, ) self.assertEqual( nested_simplify(outputs, decimals=4), [ {"score": 0.7235, "label": "cat", "box": {"xmin": 204, "ymin": 167, "xmax": 232, "ymax": 190}}, {"score": 0.7218, "label": "remote", "box": {"xmin": 204, "ymin": 167, "xmax": 232, "ymax": 190}}, {"score": 0.7184, "label": "couch", "box": {"xmin": 204, "ymin": 167, "xmax": 232, "ymax": 190}}, {"score": 0.6748, "label": "remote", "box": {"xmin": 571, "ymin": 83, "xmax": 598, "ymax": 103}}, {"score": 0.6656, "label": "cat", "box": {"xmin": 571, "ymin": 83, "xmax": 598, "ymax": 103}}, {"score": 0.6614, "label": "couch", "box": {"xmin": 571, "ymin": 83, "xmax": 598, "ymax": 103}}, {"score": 0.6456, "label": "remote", "box": {"xmin": 494, "ymin": 105, "xmax": 521, "ymax": 127}}, {"score": 0.642, "label": "remote", "box": {"xmin": 67, "ymin": 274, "xmax": 93, "ymax": 297}}, {"score": 0.6419, "label": "cat", "box": {"xmin": 494, "ymin": 105, "xmax": 521, "ymax": 127}}, ], ) outputs = object_detector( [ { "image": "./tests/fixtures/tests_samples/COCO/000000039769.png", "candidate_labels": ["cat", "remote", "couch"], } ], threshold=0.64, ) self.assertEqual( nested_simplify(outputs, decimals=4), [ [ {"score": 0.7235, "label": "cat", "box": 
{"xmin": 204, "ymin": 167, "xmax": 232, "ymax": 190}}, {"score": 0.7218, "label": "remote", "box": {"xmin": 204, "ymin": 167, "xmax": 232, "ymax": 190}}, {"score": 0.7184, "label": "couch", "box": {"xmin": 204, "ymin": 167, "xmax": 232, "ymax": 190}}, {"score": 0.6748, "label": "remote", "box": {"xmin": 571, "ymin": 83, "xmax": 598, "ymax": 103}}, {"score": 0.6656, "label": "cat", "box": {"xmin": 571, "ymin": 83, "xmax": 598, "ymax": 103}}, {"score": 0.6614, "label": "couch", "box": {"xmin": 571, "ymin": 83, "xmax": 598, "ymax": 103}}, {"score": 0.6456, "label": "remote", "box": {"xmin": 494, "ymin": 105, "xmax": 521, "ymax": 127}}, {"score": 0.642, "label": "remote", "box": {"xmin": 67, "ymin": 274, "xmax": 93, "ymax": 297}}, {"score": 0.6419, "label": "cat", "box": {"xmin": 494, "ymin": 105, "xmax": 521, "ymax": 127}}, ] ], ) @require_torch @slow def test_large_model_pt(self): object_detector = pipeline("zero-shot-object-detection") outputs = object_detector( "http://images.cocodataset.org/val2017/000000039769.jpg", candidate_labels=["cat", "remote", "couch"], ) self.assertEqual( nested_simplify(outputs, decimals=4), [ {"score": 0.2868, "label": "cat", "box": {"xmin": 324, "ymin": 20, "xmax": 640, "ymax": 373}}, {"score": 0.277, "label": "remote", "box": {"xmin": 40, "ymin": 72, "xmax": 177, "ymax": 115}}, {"score": 0.2537, "label": "cat", "box": {"xmin": 1, "ymin": 55, "xmax": 315, "ymax": 472}}, {"score": 0.1474, "label": "remote", "box": {"xmin": 335, "ymin": 74, "xmax": 371, "ymax": 187}}, {"score": 0.1208, "label": "couch", "box": {"xmin": 4, "ymin": 0, "xmax": 642, "ymax": 476}}, ], ) outputs = object_detector( [ { "image": "http://images.cocodataset.org/val2017/000000039769.jpg", "candidate_labels": ["cat", "remote", "couch"], }, { "image": "http://images.cocodataset.org/val2017/000000039769.jpg", "candidate_labels": ["cat", "remote", "couch"], }, ], ) self.assertEqual( nested_simplify(outputs, decimals=4), [ [ {"score": 0.2868, "label": "cat", "box": {"xmin": 324, "ymin": 20, "xmax": 640, "ymax": 373}}, {"score": 0.277, "label": "remote", "box": {"xmin": 40, "ymin": 72, "xmax": 177, "ymax": 115}}, {"score": 0.2537, "label": "cat", "box": {"xmin": 1, "ymin": 55, "xmax": 315, "ymax": 472}}, {"score": 0.1474, "label": "remote", "box": {"xmin": 335, "ymin": 74, "xmax": 371, "ymax": 187}}, {"score": 0.1208, "label": "couch", "box": {"xmin": 4, "ymin": 0, "xmax": 642, "ymax": 476}}, ], [ {"score": 0.2868, "label": "cat", "box": {"xmin": 324, "ymin": 20, "xmax": 640, "ymax": 373}}, {"score": 0.277, "label": "remote", "box": {"xmin": 40, "ymin": 72, "xmax": 177, "ymax": 115}}, {"score": 0.2537, "label": "cat", "box": {"xmin": 1, "ymin": 55, "xmax": 315, "ymax": 472}}, {"score": 0.1474, "label": "remote", "box": {"xmin": 335, "ymin": 74, "xmax": 371, "ymax": 187}}, {"score": 0.1208, "label": "couch", "box": {"xmin": 4, "ymin": 0, "xmax": 642, "ymax": 476}}, ], ], ) @require_torch @slow def test_threshold(self): threshold = 0.2 object_detector = pipeline("zero-shot-object-detection") outputs = object_detector( "http://images.cocodataset.org/val2017/000000039769.jpg", candidate_labels=["cat", "remote", "couch"], threshold=threshold, ) self.assertEqual( nested_simplify(outputs, decimals=4), [ {"score": 0.2868, "label": "cat", "box": {"xmin": 324, "ymin": 20, "xmax": 640, "ymax": 373}}, {"score": 0.277, "label": "remote", "box": {"xmin": 40, "ymin": 72, "xmax": 177, "ymax": 115}}, {"score": 0.2537, "label": "cat", "box": {"xmin": 1, "ymin": 55, "xmax": 315, "ymax": 472}}, ], ) @require_torch 
@slow def test_top_k(self): top_k = 2 object_detector = pipeline("zero-shot-object-detection") outputs = object_detector( "http://images.cocodataset.org/val2017/000000039769.jpg", candidate_labels=["cat", "remote", "couch"], top_k=top_k, ) self.assertEqual( nested_simplify(outputs, decimals=4), [ {"score": 0.2868, "label": "cat", "box": {"xmin": 324, "ymin": 20, "xmax": 640, "ymax": 373}}, {"score": 0.277, "label": "remote", "box": {"xmin": 40, "ymin": 72, "xmax": 177, "ymax": 115}}, ], )
transformers/tests/pipelines/test_pipelines_zero_shot_object_detection.py/0
{ "file_path": "transformers/tests/pipelines/test_pipelines_zero_shot_object_detection.py", "repo_id": "transformers", "token_count": 5132 }
609
# Copyright 2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import gc import unittest from unittest.mock import patch from transformers import AutoTokenizer, GptOssForCausalLM, Mxfp4Config from transformers.testing_utils import ( require_kernels, require_torch, require_torch_gpu, require_torch_large_gpu, require_triton, slow, ) from transformers.utils import ( is_torch_available, ) if is_torch_available(): import torch class Mxfp4ConfigTest(unittest.TestCase): def test_basic_config_creation(self): """Test basic configuration creation with default values""" config = Mxfp4Config() self.assertEqual(config.quant_method.value, "mxfp4") self.assertIsNone(config.modules_to_not_convert) self.assertFalse(config.dequantize) def test_config_with_modules_to_not_convert(self): """Test configuration with modules to not convert""" modules = ["model.layers.*.self_attn", "lm_head"] config = Mxfp4Config(modules_to_not_convert=modules) self.assertEqual(config.modules_to_not_convert, modules) def test_config_with_dequantize(self): """Test configuration with dequantize enabled""" config = Mxfp4Config(dequantize=True) self.assertTrue(config.dequantize) def test_get_loading_attributes(self): """Test get_loading_attributes method""" config = Mxfp4Config(dequantize=True) attrs = config.get_loading_attributes() self.assertEqual(attrs, {"dequantize": True}) def test_to_dict(self): """Test configuration serialization to dict""" config = Mxfp4Config(modules_to_not_convert=["lm_head"], dequantize=True) config_dict = config.to_dict() self.assertEqual(config_dict["quant_method"], "mxfp4") self.assertEqual(config_dict["modules_to_not_convert"], ["lm_head"]) self.assertTrue(config_dict["dequantize"]) def test_from_dict(self): """Test configuration creation from dict""" config_dict = {"quant_method": "mxfp4", "modules_to_not_convert": ["lm_head"], "dequantize": True} config = Mxfp4Config.from_dict(config_dict) self.assertEqual(config.modules_to_not_convert, ["lm_head"]) self.assertTrue(config.dequantize) class Mxfp4QuantizerTest(unittest.TestCase): """Test the Mxfp4HfQuantizer class""" def setUp(self): gc.collect() if torch.cuda.is_available(): torch.cuda.empty_cache() def test_quantizer_validation_no_torch(self): """Test quantizer validation when torch is not available""" with patch("transformers.quantizers.quantizer_mxfp4.is_torch_available", return_value=False): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) with self.assertRaises(ImportError): quantizer.validate_environment() def test_quantizer_validation_no_cuda(self): """Test quantizer validation when CUDA is not available""" with patch("torch.cuda.is_available", return_value=False): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) with self.assertRaises(RuntimeError): quantizer.validate_environment() def test_quantizer_validation_low_compute_capability(self): """Test quantizer 
validation with low compute capability""" with patch("torch.cuda.get_device_capability", return_value=(7, 0)): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) quantizer.pre_quantized = False with self.assertRaises(ValueError): quantizer.validate_environment() def test_quantizer_validation_low_compute_capability_with_prequantized(self): """Test quantizer validation with low compute capability""" with patch("torch.cuda.get_device_capability", return_value=(7, 0)): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) # Should automatically set dequantize=True and warn quantizer.validate_environment() self.assertTrue(quantizer.quantization_config.dequantize) def test_quantizer_validation_low_compute_capability_with_dequantize(self): """Test quantizer validation with low compute capability but dequantize enabled""" with patch("torch.cuda.get_device_capability", return_value=(7, 0)): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config(dequantize=True) quantizer = Mxfp4HfQuantizer(config) # Should not raise error with dequantize=True try: quantizer.validate_environment() except ValueError as e: if "compute capability" in str(e): self.fail("Should not raise compute capability error when dequantize=True") def test_quantizer_validation_dequantize_on_cpu(self): """Test quantizer validation with dequantize enabled on CPU-only environment""" with patch("torch.cuda.is_available", return_value=False): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config(dequantize=True) quantizer = Mxfp4HfQuantizer(config) # Should not raise error when dequantize=True even without CUDA try: quantizer.validate_environment() except RuntimeError as e: if "requires a GPU" in str(e): self.fail("Should not raise GPU requirement error when dequantize=True on CPU") def test_quantizer_validation_order_dequantize_before_cuda_check(self): """Test that dequantize check happens before CUDA availability check""" # Mock both torch.cuda.is_available and is_accelerate_available to return False with ( patch("torch.cuda.is_available", return_value=False), patch( "transformers.quantizers.quantizer_mxfp4.is_accelerate_available", return_value=False, ), ): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer # Test with dequantize=True - should pass even without CUDA and accelerate config = Mxfp4Config(dequantize=True) quantizer = Mxfp4HfQuantizer(config) # This should not raise any error because dequantize check comes first try: quantizer.validate_environment() except (RuntimeError, ImportError) as e: if "requires a GPU" in str(e) or "requires Accelerate" in str(e): self.fail(f"Should not raise error when dequantize=True: {e}") # Test with dequantize=False - should still fail due to missing CUDA config = Mxfp4Config(dequantize=False) quantizer = Mxfp4HfQuantizer(config) with self.assertRaises(RuntimeError) as context: quantizer.validate_environment() self.assertIn("requires a GPU", str(context.exception)) def test_quantizer_validation_missing_triton(self): """Test quantizer validation when triton is not available""" with ( patch("transformers.quantizers.quantizer_mxfp4.is_triton_available", return_value=False), patch("transformers.quantizers.quantizer_mxfp4.is_kernels_availalble", return_value=False), ): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() 
quantizer = Mxfp4HfQuantizer(config) quantizer.pre_quantized = False with self.assertRaises(ValueError): quantizer.validate_environment() def test_quantizer_validation_missing_triton_pre_quantized_no_dequantize(self): """Test quantizer validation when triton is not available but model is pre-quantized and dequantize is False""" with ( patch("transformers.quantizers.quantizer_mxfp4.is_triton_available", return_value=False), patch("transformers.quantizers.quantizer_mxfp4.is_kernels_availalble", return_value=False), ): from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) quantizer.pre_quantized = True # Should automatically set dequantize=True and warn quantizer.validate_environment() self.assertTrue(quantizer.quantization_config.dequantize) def test_update_dtype(self): """Test torch dtype updating""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) # Should default to bfloat16 result_dtype = quantizer.update_dtype(None) self.assertEqual(result_dtype, torch.bfloat16) # Should preserve existing dtype result_dtype = quantizer.update_dtype(torch.float32) self.assertEqual(result_dtype, torch.float32) def test_update_expected_keys(self): """Test expected keys updating for quantized models""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) expected_keys = [ "model.layers.0.mlp.experts.gate_up_proj", "model.layers.0.mlp.experts.down_proj", "model.embed_tokens.weight", ] updated_keys = quantizer.update_expected_keys(None, expected_keys, []) expected_updated = [ "model.layers.0.mlp.experts.gate_up_proj_blocks", "model.layers.0.mlp.experts.gate_up_proj_scales", "model.layers.0.mlp.experts.down_proj_blocks", "model.layers.0.mlp.experts.down_proj_scales", "model.embed_tokens.weight", ] self.assertEqual(set(updated_keys), set(expected_updated)) def test_update_param_name_dequantize(self): """Test parameter name updating when dequantizing""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config(dequantize=True) quantizer = Mxfp4HfQuantizer(config) # Should remove _blocks suffix param_name = "model.layers.0.mlp.experts.gate_up_proj_blocks" updated_name = quantizer.update_param_name(param_name) self.assertEqual(updated_name, "model.layers.0.mlp.experts.gate_up_proj") # Should remove _scales suffix param_name = "model.layers.0.mlp.experts.down_proj_scales" updated_name = quantizer.update_param_name(param_name) self.assertEqual(updated_name, "model.layers.0.mlp.experts.down_proj") # Should not change other names param_name = "model.embed_tokens.weight" updated_name = quantizer.update_param_name(param_name) self.assertEqual(updated_name, "model.embed_tokens.weight") def test_update_param_name_no_dequantize(self): """Test parameter name updating when not dequantizing""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config(dequantize=False) quantizer = Mxfp4HfQuantizer(config) param_name = "model.layers.0.mlp.experts.gate_up_proj_blocks" updated_name = quantizer.update_param_name(param_name) self.assertEqual(updated_name, param_name) def test_is_serializable(self): """Test serialization capability""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) # MXFP4 is not serializable with safetensors 
self.assertFalse(quantizer.is_serializable()) def test_is_trainable(self): """Test trainability""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) # MXFP4 is not trainable self.assertFalse(quantizer.is_trainable) class Mxfp4IntegrationTest(unittest.TestCase): """Test mxfp4 integration functions""" def test_should_convert_module(self): """Test module conversion decision logic""" from transformers.integrations.mxfp4 import should_convert_module # Should convert by default self.assertTrue(should_convert_module(["model", "layers", "0", "mlp"], [])) # Should not convert if in exclusion list patterns = ["model.layers.*.self_attn", "lm_head"] self.assertFalse(should_convert_module(["model", "layers", "0", "self_attn"], patterns)) self.assertFalse(should_convert_module(["lm_head"], patterns)) # Should convert if not in exclusion list self.assertTrue(should_convert_module(["model", "layers", "0", "mlp", "experts"], patterns)) @require_torch def test_convert_moe_packed_tensors(self): """Test unpacking of quantized tensors""" from transformers.integrations.mxfp4 import convert_moe_packed_tensors # Create dummy packed tensors blocks = torch.randint(0, 255, (2, 4, 8), dtype=torch.uint8) scales = torch.randint(100, 150, (2, 4), dtype=torch.uint8) result = convert_moe_packed_tensors(blocks, scales, dtype=torch.bfloat16) # Check output shape - should be [2, 4, 16] (8 * 2 for unpacking) self.assertEqual(result.shape, (2, 4 * 16)) self.assertEqual(result.dtype, torch.bfloat16) @require_triton(min_version="3.4.0") @require_kernels @require_torch_gpu @require_torch def test_quantize_to_mxfp4(self): """Test quantization function""" from transformers.integrations.mxfp4 import quantize_to_mxfp4 # Create dummy weight tensor w = torch.randn(32, 64, 128, dtype=torch.bfloat16, device=torch.device("cuda")) quantized_w, flex_data, mx_ctx = quantize_to_mxfp4(w, None, None) # Check that shapes are reasonable self.assertEqual(quantized_w.dtype, torch.uint8) self.assertIsNotNone(flex_data) self.assertIsNotNone(mx_ctx) @require_torch @require_torch_large_gpu @require_triton(min_version="3.4.0") @require_kernels @slow class Mxfp4ModelTest(unittest.TestCase): """Test mxfp4 with actual models (requires specific model and hardware)""" # These should be paths to real OpenAI MoE models for proper testing model_name = "openai/gpt-oss-20b" input_text = "Once upon a time" # Expected outputs for generation tests EXPECTED_OUTPUTS = set() EXPECTED_OUTPUTS.add("Once upon a time, in a small village, there lived a young") def setUp(self): gc.collect() if torch.cuda.is_available(): torch.cuda.empty_cache() def tearDown(self): gc.collect() if torch.cuda.is_available(): torch.cuda.empty_cache() def check_inference_correctness_quantized(self, model, tokenizer): # Check that inference pass works on the model encoded_input = tokenizer(self.input_text, return_tensors="pt").to(model.device) # Set pad token if not set if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token with torch.no_grad(): output_sequences = model.generate( **encoded_input, max_new_tokens=10, do_sample=False, pad_token_id=tokenizer.eos_token_id, use_cache=False, ) generated_text = tokenizer.decode(output_sequences[0], skip_special_tokens=True) self.assertIn(generated_text, self.EXPECTED_OUTPUTS) def test_gpt_oss_model_loading_quantized_with_device_map(self): """Test loading OpenAI MoE model with mxfp4 quantization and device_map""" quantization_config = 
Mxfp4Config(dequantize=False) # Test that config is properly set up self.assertFalse(quantization_config.dequantize) model = GptOssForCausalLM.from_pretrained( self.model_name, quantization_config=quantization_config, dtype=torch.bfloat16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(self.model_name) self.check_inference_correctness_quantized(model, tokenizer) def test_gpt_oss_model_loading_dequantized_with_device_map(self): """Test loading OpenAI MoE model with mxfp4 dequantization and device_map""" quantization_config = Mxfp4Config(dequantize=True) # Test that config is properly set up self.assertTrue(quantization_config.dequantize) model = GptOssForCausalLM.from_pretrained( self.model_name, quantization_config=quantization_config, dtype=torch.bfloat16, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(self.model_name) self.check_inference_correctness_quantized(model, tokenizer) def test_model_device_map_validation(self): """Test device map validation""" from transformers.quantizers.quantizer_mxfp4 import Mxfp4HfQuantizer config = Mxfp4Config() quantizer = Mxfp4HfQuantizer(config) quantizer.pre_quantized = False # Test with CPU in device map (should raise error for non-pre-quantized) with self.assertRaises(ValueError): quantizer.validate_environment(device_map={"": "cpu"}) def test_memory_footprint_comparison(self): """Test memory footprint differences between quantized and unquantized models""" # Expected: quantized < dequantized < unquantized memory usage quantization_config = Mxfp4Config(dequantize=True) quantized_model = GptOssForCausalLM.from_pretrained( self.model_name, dtype=torch.bfloat16, device_map="auto", ) dequantized_model = GptOssForCausalLM.from_pretrained( self.model_name, dtype=torch.bfloat16, device_map="auto", quantization_config=quantization_config, ) quantized_mem = quantized_model.get_memory_footprint() dequantized_mem = dequantized_model.get_memory_footprint() self.assertLess(quantized_mem, dequantized_mem)
transformers/tests/quantization/mxfp4/test_mxfp4.py/0
{ "file_path": "transformers/tests/quantization/mxfp4/test_mxfp4.py", "repo_id": "transformers", "token_count": 8182 }
610
# Testing new Hugging Face Deep Learning Container.

This document explains the testing strategy for releasing the new Hugging Face Deep Learning Container. AWS maintains 14 days of currency with framework releases. Besides framework releases, the AWS release train is bi-weekly on Monday. The code cutoff date for any changes is the Wednesday before the release Monday.

## Test Case 1: Releasing a New Version (Minor/Major) of 🤗 Transformers

### Requirements:

Tests should run on the Release Candidate of the new `transformers` release to validate that the new release is compatible with the DLCs. To run these tests you need credentials for the HF SageMaker AWS Account. You can ask @philschmid or @n1t0 to get access.

### Run Tests:

Before we can run the tests we need to adjust the `requirements.txt` for PyTorch under `/tests/sagemaker/scripts/pytorch` and for TensorFlow under `/tests/sagemaker/scripts/tensorflow`. We adjust the branch to the new RC tag.

```
git+https://github.com/huggingface/transformers.git@v4.5.0.rc0 # install from main, or adjust to vX.X.X to install a specific transformers version
```

After we have adjusted the `requirements.txt` we can run the Amazon SageMaker tests with:

```bash
AWS_PROFILE=<enter-your-profile> make test-sagemaker
```

These tests take around 10-15 minutes to finish. Preferably take a screenshot of the successfully run tests.

### After Transformers Release:

After we have released the Release Candidate we need to create a PR at the [Deep Learning Container Repository](https://github.com/aws/deep-learning-containers).

**Creating the update PR:**

1. Update the two latest `buildspec.yaml` configs for [PyTorch](https://github.com/aws/deep-learning-containers/tree/master/huggingface/pytorch) and [TensorFlow](https://github.com/aws/deep-learning-containers/tree/master/huggingface/tensorflow). The two latest `buildspec.yaml` are the `buildspec.yaml` without a version tag and the one with the highest framework version, e.g. `buildspec-1-7-1.yml` and not `buildspec-1-6.yml`. To update the `buildspec.yaml` we need to adjust either the `transformers_version` or the `datasets_version` or both. Example for upgrading to `transformers 4.5.0` and `datasets 1.6.0`:
```yaml
account_id: &ACCOUNT_ID <set-$ACCOUNT_ID-in-environment>
region: &REGION <set-$REGION-in-environment>
base_framework: &BASE_FRAMEWORK pytorch
framework: &FRAMEWORK !join [ "huggingface_", *BASE_FRAMEWORK]
version: &VERSION 1.6.0
short_version: &SHORT_VERSION 1.6

repository_info:
  training_repository: &TRAINING_REPOSITORY
    image_type: &TRAINING_IMAGE_TYPE training
    root: !join [ "huggingface/", *BASE_FRAMEWORK, "/", *TRAINING_IMAGE_TYPE ]
    repository_name: &REPOSITORY_NAME !join ["pr", "-", "huggingface", "-", *BASE_FRAMEWORK, "-", *TRAINING_IMAGE_TYPE]
    repository: &REPOSITORY !join [ *ACCOUNT_ID, .dkr.ecr., *REGION, .amazonaws.com/, *REPOSITORY_NAME ]

images:
  BuildHuggingFacePytorchGpuPy37Cu110TrainingDockerImage:
    <<: *TRAINING_REPOSITORY
    build: &HUGGINGFACE_PYTORCH_GPU_TRAINING_PY3 false
    image_size_baseline: &IMAGE_SIZE_BASELINE 15000
    device_type: &DEVICE_TYPE gpu
    python_version: &DOCKER_PYTHON_VERSION py3
    tag_python_version: &TAG_PYTHON_VERSION py36
    cuda_version: &CUDA_VERSION cu110
    os_version: &OS_VERSION ubuntu18.04
    transformers_version: &TRANSFORMERS_VERSION 4.5.0 # this was adjusted from 4.4.2 to 4.5.0
    datasets_version: &DATASETS_VERSION 1.6.0 # this was adjusted from 1.5.0 to 1.6.0
    tag: !join [ *VERSION, '-', 'transformers', *TRANSFORMERS_VERSION, '-', *DEVICE_TYPE, '-', *TAG_PYTHON_VERSION, '-', *CUDA_VERSION, '-', *OS_VERSION ]
    docker_file: !join [ docker/, *SHORT_VERSION, /, *DOCKER_PYTHON_VERSION, /, *CUDA_VERSION, /Dockerfile., *DEVICE_TYPE ]
```

2. In the PR comment describe which tests we ran and with which package and framework versions. Here you can copy the table from [Current Tests](#current-tests). You can take a look at this [PR](https://github.com/aws/deep-learning-containers/pull/1016) to see which information is needed.

## Test Case 2: Releasing a New AWS Framework DLC

## Execute Tests

### Requirements:

AWS is going to release new DLCs for PyTorch and/or TensorFlow. The tests should run on the new framework versions with the current `transformers` release to validate that the new framework release is compatible with the `transformers` version. To run these tests you need credentials for the HF SageMaker AWS Account. You can ask @philschmid or @n1t0 to get access. AWS will notify us with a new issue in the repository pointing to their framework upgrade PR.

### Run Tests:

Before we can run the tests we need to adjust the `requirements.txt` for PyTorch under `/tests/sagemaker/scripts/pytorch` and for TensorFlow under `/tests/sagemaker/scripts/tensorflow`. We add the new framework version to it.

```
torch==1.8.1 # for pytorch
tensorflow-gpu==2.5.0 # for tensorflow
```

After we have adjusted the `requirements.txt` we can run the Amazon SageMaker tests with:

```bash
AWS_PROFILE=<enter-your-profile> make test-sagemaker
```

These tests take around 10-15 minutes to finish. Preferably take a screenshot of the successfully run tests.

### After successful Tests:

After we have successfully run tests for the new framework version we need to create a PR at the [Deep Learning Container Repository](https://github.com/aws/deep-learning-containers).

**Creating the update PR:**
1. Create a new `buildspec.yaml` config for [PyTorch](https://github.com/aws/deep-learning-containers/tree/master/huggingface/pytorch) and [TensorFlow](https://github.com/aws/deep-learning-containers/tree/master/huggingface/tensorflow) and rename the old `buildspec.yaml` to `buildspec-x-x-x.yml`, where `x.x.x` is the base framework version, e.g. if PyTorch 1.6.0 is the latest version covered by `buildspec.yaml`, the file should be renamed to `buildspec-1-6.yml`. To create the new `buildspec.yaml` we need to adjust the `version` and the `short_version`. Example for upgrading to `pytorch 1.7.1`:

```yaml
account_id: &ACCOUNT_ID <set-$ACCOUNT_ID-in-environment>
region: &REGION <set-$REGION-in-environment>
base_framework: &BASE_FRAMEWORK pytorch
framework: &FRAMEWORK !join [ "huggingface_", *BASE_FRAMEWORK]
version: &VERSION 1.7.1 # this was adjusted from 1.6.0 to 1.7.1
short_version: &SHORT_VERSION 1.7 # this was adjusted from 1.6 to 1.7

repository_info:
  training_repository: &TRAINING_REPOSITORY
    image_type: &TRAINING_IMAGE_TYPE training
    root: !join [ "huggingface/", *BASE_FRAMEWORK, "/", *TRAINING_IMAGE_TYPE ]
    repository_name: &REPOSITORY_NAME !join ["pr", "-", "huggingface", "-", *BASE_FRAMEWORK, "-", *TRAINING_IMAGE_TYPE]
    repository: &REPOSITORY !join [ *ACCOUNT_ID, .dkr.ecr., *REGION, .amazonaws.com/, *REPOSITORY_NAME ]

images:
  BuildHuggingFacePytorchGpuPy37Cu110TrainingDockerImage:
    <<: *TRAINING_REPOSITORY
    build: &HUGGINGFACE_PYTORCH_GPU_TRAINING_PY3 false
    image_size_baseline: &IMAGE_SIZE_BASELINE 15000
    device_type: &DEVICE_TYPE gpu
    python_version: &DOCKER_PYTHON_VERSION py3
    tag_python_version: &TAG_PYTHON_VERSION py36
    cuda_version: &CUDA_VERSION cu110
    os_version: &OS_VERSION ubuntu18.04
    transformers_version: &TRANSFORMERS_VERSION 4.4.2
    datasets_version: &DATASETS_VERSION 1.5.0
    tag: !join [ *VERSION, '-', 'transformers', *TRANSFORMERS_VERSION, '-', *DEVICE_TYPE, '-', *TAG_PYTHON_VERSION, '-', *CUDA_VERSION, '-', *OS_VERSION ]
    docker_file: !join [ docker/, *SHORT_VERSION, /, *DOCKER_PYTHON_VERSION, /, *CUDA_VERSION, /Dockerfile., *DEVICE_TYPE ]
```

2. In the PR comment describe which tests we ran and with which framework versions. Here you can copy the table from [Current Tests](#current-tests). You can take a look at this [PR](https://github.com/aws/deep-learning-containers/pull/1025) to see which information is needed.
## Current Tests

| ID                                  | Description                                                    | Platform                    | #GPUs | Collected & evaluated metrics            |
|-------------------------------------|----------------------------------------------------------------|-----------------------------|-------|------------------------------------------|
| pytorch-transfromers-test-single    | test BERT finetuning using the transformers lib + PT           | SageMaker createTrainingJob | 1     | train_runtime, eval_accuracy & eval_loss |
| pytorch-transfromers-test-2-ddp     | test BERT finetuning using the transformers lib + PT DDP       | SageMaker createTrainingJob | 16    | train_runtime, eval_accuracy & eval_loss |
| pytorch-transfromers-test-2-smd     | test BERT finetuning using the transformers lib + PT SM DDP    | SageMaker createTrainingJob | 16    | train_runtime, eval_accuracy & eval_loss |
| pytorch-transfromers-test-1-smp     | test RoBERTa finetuning using the transformers lib + PT SM MP  | SageMaker createTrainingJob | 8     | train_runtime, eval_accuracy & eval_loss |
| tensorflow-transfromers-test-single | test BERT finetuning using the transformers lib + TF           | SageMaker createTrainingJob | 1     | train_runtime, eval_accuracy & eval_loss |
| tensorflow-transfromers-test-2-smd  | test BERT finetuning using the transformers lib + TF SM DDP    | SageMaker createTrainingJob | 16    | train_runtime, eval_accuracy & eval_loss |
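For orientation, the sketch below shows roughly how one of the single-GPU jobs from the table could be launched with the SageMaker `HuggingFace` estimator. The entry point, source directory, IAM role name, framework versions, and hyperparameters are illustrative assumptions and not the exact values used by the `make test-sagemaker` targets.

```python
# A minimal sketch of how a job like `pytorch-transfromers-test-single` could be
# launched. The entry point, source dir, IAM role, framework versions and
# hyperparameters below are placeholders, not the values used by the Makefile.
from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point="run_glue.py",          # hypothetical training script
    source_dir="./scripts",             # hypothetical source directory
    instance_type="ml.p3.2xlarge",      # single-GPU instance (the "-single" tests)
    instance_count=1,
    role="SageMakerExecutionRole",      # placeholder IAM role name
    transformers_version="4.5.0",
    pytorch_version="1.7.1",
    py_version="py36",
    hyperparameters={
        "model_name_or_path": "bert-base-uncased",
        "per_device_train_batch_size": 32,
        "num_train_epochs": 1,
    },
)

# This issues a SageMaker createTrainingJob; train_runtime, eval_accuracy and
# eval_loss are afterwards collected from the job's metrics and output.
huggingface_estimator.fit()
```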
transformers/tests/sagemaker/README.md/0
{ "file_path": "transformers/tests/sagemaker/README.md", "repo_id": "transformers", "token_count": 3293 }
611
# Copyright 2023 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import json import os import pathlib import tempfile import time import warnings from copy import deepcopy import numpy as np import pytest import requests from packaging import version from transformers import AutoImageProcessor, BatchFeature from transformers.image_utils import AnnotationFormat, AnnotionFormat from transformers.testing_utils import ( check_json_file_has_correct_format, is_flaky, require_torch, require_torch_accelerator, require_vision, slow, torch_device, ) from transformers.utils import is_torch_available, is_torchvision_available, is_vision_available if is_torchvision_available(): from transformers.image_processing_utils_fast import BaseImageProcessorFast if is_torch_available(): import torch if is_vision_available(): from PIL import Image def prepare_image_inputs( batch_size, min_resolution, max_resolution, num_channels, size_divisor=None, equal_resolution=False, numpify=False, torchify=False, ): """This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True, or a list of PyTorch tensors if one specifies torchify=True. One can specify whether the images are of the same resolution or not. """ assert not (numpify and torchify), "You cannot specify both numpy and PyTorch tensors at the same time" image_inputs = [] for i in range(batch_size): if equal_resolution: width = height = max_resolution else: # To avoid getting image width/height 0 if size_divisor is not None: # If `size_divisor` is defined, the image needs to have width/size >= `size_divisor` min_resolution = max(size_divisor, min_resolution) width, height = np.random.choice(np.arange(min_resolution, max_resolution), 2) image_inputs.append(np.random.randint(255, size=(num_channels, width, height), dtype=np.uint8)) if not numpify and not torchify: # PIL expects the channel dimension as last dimension image_inputs = [Image.fromarray(np.moveaxis(image, 0, -1)) for image in image_inputs] if torchify: image_inputs = [torch.from_numpy(image) for image in image_inputs] if numpify: # Numpy images are typically in channels last format image_inputs = [image.transpose(1, 2, 0) for image in image_inputs] return image_inputs def prepare_video(num_frames, num_channels, width=10, height=10, numpify=False, torchify=False): """This function prepares a video as a list of PIL images/NumPy arrays/PyTorch tensors.""" video = [] for i in range(num_frames): video.append(np.random.randint(255, size=(num_channels, width, height), dtype=np.uint8)) if not numpify and not torchify: # PIL expects the channel dimension as last dimension video = [Image.fromarray(np.moveaxis(frame, 0, -1)) for frame in video] if torchify: video = [torch.from_numpy(frame) for frame in video] return video def prepare_video_inputs( batch_size, num_frames, num_channels, min_resolution, max_resolution, equal_resolution=False, numpify=False, torchify=False, ): """This function prepares a batch of videos: a list of list of PIL images, or a list of 
list of numpy arrays if one specifies numpify=True, or a list of list of PyTorch tensors if one specifies torchify=True. One can specify whether the videos are of the same resolution or not. """ assert not (numpify and torchify), "You cannot specify both numpy and PyTorch tensors at the same time" video_inputs = [] for _ in range(batch_size): if equal_resolution: width = height = max_resolution else: width, height = np.random.choice(np.arange(min_resolution, max_resolution), 2) video = prepare_video( num_frames=num_frames, num_channels=num_channels, width=width, height=height, numpify=numpify, torchify=torchify, ) video_inputs.append(video) return video_inputs class ImageProcessingTestMixin: test_cast_dtype = None image_processing_class = None fast_image_processing_class = None image_processors_list = None test_slow_image_processor = True test_fast_image_processor = True def setUp(self): image_processor_list = [] if self.test_slow_image_processor and self.image_processing_class: image_processor_list.append(self.image_processing_class) if self.test_fast_image_processor and self.fast_image_processing_class: image_processor_list.append(self.fast_image_processing_class) self.image_processor_list = image_processor_list def _assert_slow_fast_tensors_equivalence(self, slow_tensor, fast_tensor, atol=1e-1, rtol=1e-3, mean_atol=5e-3): torch.testing.assert_close(slow_tensor, fast_tensor, atol=atol, rtol=rtol) self.assertLessEqual(torch.mean(torch.abs(slow_tensor - fast_tensor)).item(), mean_atol) @require_vision @require_torch def test_slow_fast_equivalence(self): if not self.test_slow_image_processor or not self.test_fast_image_processor: self.skipTest(reason="Skipping slow/fast equivalence test") if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest(reason="Skipping slow/fast equivalence test as one of the image processors is not defined") dummy_image = Image.open( requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw ) image_processor_slow = self.image_processing_class(**self.image_processor_dict) image_processor_fast = self.fast_image_processing_class(**self.image_processor_dict) encoding_slow = image_processor_slow(dummy_image, return_tensors="pt") encoding_fast = image_processor_fast(dummy_image, return_tensors="pt") self._assert_slow_fast_tensors_equivalence(encoding_slow.pixel_values, encoding_fast.pixel_values) @require_vision @require_torch def test_slow_fast_equivalence_batched(self): if not self.test_slow_image_processor or not self.test_fast_image_processor: self.skipTest(reason="Skipping slow/fast equivalence test") if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest(reason="Skipping slow/fast equivalence test as one of the image processors is not defined") if hasattr(self.image_processor_tester, "do_center_crop") and self.image_processor_tester.do_center_crop: self.skipTest( reason="Skipping as do_center_crop is True and center_crop functions are not equivalent for fast and slow processors" ) dummy_images = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, torchify=True) image_processor_slow = self.image_processing_class(**self.image_processor_dict) image_processor_fast = self.fast_image_processing_class(**self.image_processor_dict) encoding_slow = image_processor_slow(dummy_images, return_tensors="pt") encoding_fast = image_processor_fast(dummy_images, return_tensors="pt") 
self._assert_slow_fast_tensors_equivalence(encoding_slow.pixel_values, encoding_fast.pixel_values) @require_vision @require_torch @is_flaky() def test_fast_is_faster_than_slow(self): if not self.test_slow_image_processor or not self.test_fast_image_processor: self.skipTest(reason="Skipping speed test") if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest(reason="Skipping speed test as one of the image processors is not defined") def measure_time(image_processor, image): # Warmup for _ in range(5): _ = image_processor(image, return_tensors="pt") all_times = [] for _ in range(10): start = time.time() _ = image_processor(image, return_tensors="pt") all_times.append(time.time() - start) # Take the average of the fastest 3 runs avg_time = sum(sorted(all_times[:3])) / 3.0 return avg_time dummy_images = [torch.randint(0, 255, (3, 224, 224), dtype=torch.uint8) for _ in range(4)] image_processor_slow = self.image_processing_class(**self.image_processor_dict) image_processor_fast = self.fast_image_processing_class(**self.image_processor_dict) fast_time = measure_time(image_processor_fast, dummy_images) slow_time = measure_time(image_processor_slow, dummy_images) self.assertLessEqual(fast_time, slow_time) def test_is_fast(self): for image_processing_class in self.image_processor_list: image_processor = image_processing_class(**self.image_processor_dict) # Check is_fast is set correctly if is_torchvision_available() and issubclass(image_processing_class, BaseImageProcessorFast): self.assertTrue(image_processor.is_fast) else: self.assertFalse(image_processor.is_fast) def test_image_processor_to_json_string(self): for image_processing_class in self.image_processor_list: image_processor = image_processing_class(**self.image_processor_dict) obj = json.loads(image_processor.to_json_string()) for key, value in self.image_processor_dict.items(): self.assertEqual(obj[key], value) def test_image_processor_to_json_file(self): for image_processing_class in self.image_processor_list: image_processor_first = image_processing_class(**self.image_processor_dict) with tempfile.TemporaryDirectory() as tmpdirname: json_file_path = os.path.join(tmpdirname, "image_processor.json") image_processor_first.to_json_file(json_file_path) image_processor_second = image_processing_class.from_json_file(json_file_path) self.assertEqual(image_processor_second.to_dict(), image_processor_first.to_dict()) def test_image_processor_from_and_save_pretrained(self): for image_processing_class in self.image_processor_list: image_processor_first = image_processing_class(**self.image_processor_dict) with tempfile.TemporaryDirectory() as tmpdirname: saved_file = image_processor_first.save_pretrained(tmpdirname)[0] check_json_file_has_correct_format(saved_file) image_processor_second = image_processing_class.from_pretrained(tmpdirname) self.assertEqual(image_processor_second.to_dict(), image_processor_first.to_dict()) def test_image_processor_save_load_with_autoimageprocessor(self): for i, image_processing_class in enumerate(self.image_processor_list): image_processor_first = image_processing_class(**self.image_processor_dict) with tempfile.TemporaryDirectory() as tmpdirname: saved_file = image_processor_first.save_pretrained(tmpdirname)[0] check_json_file_has_correct_format(saved_file) use_fast = i == 1 or not self.test_slow_image_processor image_processor_second = AutoImageProcessor.from_pretrained(tmpdirname, use_fast=use_fast) self.assertEqual(image_processor_second.to_dict(), 
image_processor_first.to_dict()) def test_save_load_fast_slow(self): "Test that we can load a fast image processor from a slow one and vice-versa." if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest("Skipping slow/fast save/load test as one of the image processors is not defined") image_processor_dict = self.image_processor_tester.prepare_image_processor_dict() image_processor_slow_0 = self.image_processing_class(**image_processor_dict) # Load fast image processor from slow one with tempfile.TemporaryDirectory() as tmpdirname: image_processor_slow_0.save_pretrained(tmpdirname) image_processor_fast_0 = self.fast_image_processing_class.from_pretrained(tmpdirname) image_processor_fast_1 = self.fast_image_processing_class(**image_processor_dict) # Load slow image processor from fast one with tempfile.TemporaryDirectory() as tmpdirname: image_processor_fast_1.save_pretrained(tmpdirname) image_processor_slow_1 = self.image_processing_class.from_pretrained(tmpdirname) dict_slow_0 = image_processor_slow_0.to_dict() dict_slow_1 = image_processor_slow_1.to_dict() difference = { key: dict_slow_0.get(key) if key in dict_slow_0 else dict_slow_1.get(key) for key in set(dict_slow_0) ^ set(dict_slow_1) } dict_slow_0 = {key: dict_slow_0[key] for key in set(dict_slow_0) & set(dict_slow_1)} dict_slow_1 = {key: dict_slow_1[key] for key in set(dict_slow_0) & set(dict_slow_1)} # check that all additional keys are None, except for `default_to_square` and `data_format` which are only set in fast processors self.assertTrue( all(value is None for key, value in difference.items() if key not in ["default_to_square", "data_format"]) ) # check that the remaining keys are the same self.assertEqual(dict_slow_0, dict_slow_1) dict_fast_0 = image_processor_fast_0.to_dict() dict_fast_1 = image_processor_fast_1.to_dict() difference = { key: dict_fast_0.get(key) if key in dict_fast_0 else dict_fast_1.get(key) for key in set(dict_fast_0) ^ set(dict_fast_1) } dict_fast_0 = {key: dict_fast_0[key] for key in set(dict_fast_0) & set(dict_fast_1)} dict_fast_1 = {key: dict_fast_1[key] for key in set(dict_fast_0) & set(dict_fast_1)} # check that all additional keys are None, except for `default_to_square` and `data_format` which are only set in fast processors self.assertTrue( all(value is None for key, value in difference.items() if key not in ["default_to_square", "data_format"]) ) # check that the remaining keys are the same self.assertEqual(dict_fast_0, dict_fast_1) def test_save_load_fast_slow_auto(self): "Test that we can load a fast image processor from a slow one and vice-versa using AutoImageProcessor." 
if self.image_processing_class is None or self.fast_image_processing_class is None: self.skipTest("Skipping slow/fast save/load test as one of the image processors is not defined") image_processor_dict = self.image_processor_tester.prepare_image_processor_dict() image_processor_slow_0 = self.image_processing_class(**image_processor_dict) # Load fast image processor from slow one with tempfile.TemporaryDirectory() as tmpdirname: image_processor_slow_0.save_pretrained(tmpdirname) image_processor_fast_0 = AutoImageProcessor.from_pretrained(tmpdirname, use_fast=True) image_processor_fast_1 = self.fast_image_processing_class(**image_processor_dict) # Load slow image processor from fast one with tempfile.TemporaryDirectory() as tmpdirname: image_processor_fast_1.save_pretrained(tmpdirname) image_processor_slow_1 = AutoImageProcessor.from_pretrained(tmpdirname, use_fast=False) dict_slow_0 = image_processor_slow_0.to_dict() dict_slow_1 = image_processor_slow_1.to_dict() difference = { key: dict_slow_0.get(key) if key in dict_slow_0 else dict_slow_1.get(key) for key in set(dict_slow_0) ^ set(dict_slow_1) } dict_slow_0 = {key: dict_slow_0[key] for key in set(dict_slow_0) & set(dict_slow_1)} dict_slow_1 = {key: dict_slow_1[key] for key in set(dict_slow_0) & set(dict_slow_1)} # check that all additional keys are None, except for `default_to_square` and `data_format` which are only set in fast processors self.assertTrue( all(value is None for key, value in difference.items() if key not in ["default_to_square", "data_format"]) ) # check that the remaining keys are the same self.assertEqual(dict_slow_0, dict_slow_1) dict_fast_0 = image_processor_fast_0.to_dict() dict_fast_1 = image_processor_fast_1.to_dict() difference = { key: dict_fast_0.get(key) if key in dict_fast_0 else dict_fast_1.get(key) for key in set(dict_fast_0) ^ set(dict_fast_1) } dict_fast_0 = {key: dict_fast_0[key] for key in set(dict_fast_0) & set(dict_fast_1)} dict_fast_1 = {key: dict_fast_1[key] for key in set(dict_fast_0) & set(dict_fast_1)} # check that all additional keys are None, except for `default_to_square` and `data_format` which are only set in fast processors self.assertTrue( all(value is None for key, value in difference.items() if key not in ["default_to_square", "data_format"]) ) # check that the remaining keys are the same self.assertEqual(dict_fast_0, dict_fast_1) def test_init_without_params(self): for image_processing_class in self.image_processor_list: image_processor = image_processing_class() self.assertIsNotNone(image_processor) @require_torch @require_vision def test_cast_dtype_device(self): for image_processing_class in self.image_processor_list: if self.test_cast_dtype is not None: # Initialize image_processor image_processor = image_processing_class(**self.image_processor_dict) # create random PyTorch tensors image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, torchify=True) encoding = image_processor(image_inputs, return_tensors="pt") # for layoutLM compatibility self.assertEqual(encoding.pixel_values.device, torch.device("cpu")) self.assertEqual(encoding.pixel_values.dtype, torch.float32) encoding = image_processor(image_inputs, return_tensors="pt").to(torch.float16) self.assertEqual(encoding.pixel_values.device, torch.device("cpu")) self.assertEqual(encoding.pixel_values.dtype, torch.float16) encoding = image_processor(image_inputs, return_tensors="pt").to("cpu", torch.bfloat16) self.assertEqual(encoding.pixel_values.device, torch.device("cpu")) 
self.assertEqual(encoding.pixel_values.dtype, torch.bfloat16) with self.assertRaises(TypeError): _ = image_processor(image_inputs, return_tensors="pt").to(torch.bfloat16, "cpu") # Try with text + image feature encoding = image_processor(image_inputs, return_tensors="pt") encoding.update({"input_ids": torch.LongTensor([[1, 2, 3], [4, 5, 6]])}) encoding = encoding.to(torch.float16) self.assertEqual(encoding.pixel_values.device, torch.device("cpu")) self.assertEqual(encoding.pixel_values.dtype, torch.float16) self.assertEqual(encoding.input_ids.dtype, torch.long) def test_call_pil(self): for image_processing_class in self.image_processor_list: # Initialize image_processing image_processing = image_processing_class(**self.image_processor_dict) # create random PIL images image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False) for image in image_inputs: self.assertIsInstance(image, Image.Image) # Test not batched input encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape([image_inputs[0]]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test batched encoded_images = image_processing(image_inputs, return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertEqual( tuple(encoded_images.shape), (self.image_processor_tester.batch_size, *expected_output_image_shape) ) def test_call_numpy(self): for image_processing_class in self.image_processor_list: # Initialize image_processing image_processing = image_processing_class(**self.image_processor_dict) # create random numpy tensors image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, numpify=True) for image in image_inputs: self.assertIsInstance(image, np.ndarray) # Test not batched input encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape([image_inputs[0]]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test batched encoded_images = image_processing(image_inputs, return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertEqual( tuple(encoded_images.shape), (self.image_processor_tester.batch_size, *expected_output_image_shape) ) def test_call_pytorch(self): for image_processing_class in self.image_processor_list: # Initialize image_processing image_processing = image_processing_class(**self.image_processor_dict) # create random PyTorch tensors image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, torchify=True) for image in image_inputs: self.assertIsInstance(image, torch.Tensor) # Test not batched input encoded_images = image_processing(image_inputs[0], return_tensors="pt").pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape([image_inputs[0]]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test batched expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) encoded_images = image_processing(image_inputs, return_tensors="pt").pixel_values self.assertEqual( tuple(encoded_images.shape), (self.image_processor_tester.batch_size, *expected_output_image_shape), ) def 
test_call_numpy_4_channels(self): for image_processing_class in self.image_processor_list: # Test that can process images which have an arbitrary number of channels # Initialize image_processing image_processor = image_processing_class(**self.image_processor_dict) # create random numpy tensors self.image_processor_tester.num_channels = 4 image_inputs = self.image_processor_tester.prepare_image_inputs(equal_resolution=False, numpify=True) # Test not batched input encoded_images = image_processor( image_inputs[0], return_tensors="pt", input_data_format="channels_last", image_mean=0, image_std=1, ).pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape([image_inputs[0]]) self.assertEqual(tuple(encoded_images.shape), (1, *expected_output_image_shape)) # Test batched encoded_images = image_processor( image_inputs, return_tensors="pt", input_data_format="channels_last", image_mean=0, image_std=1, ).pixel_values expected_output_image_shape = self.image_processor_tester.expected_output_image_shape(image_inputs) self.assertEqual( tuple(encoded_images.shape), (self.image_processor_tester.batch_size, *expected_output_image_shape) ) def test_image_processor_preprocess_arguments(self): is_tested = False for image_processing_class in self.image_processor_list: image_processor = image_processing_class(**self.image_processor_dict) # validation done by _valid_processor_keys attribute if hasattr(image_processor, "_valid_processor_keys") and hasattr(image_processor, "preprocess"): preprocess_parameter_names = inspect.getfullargspec(image_processor.preprocess).args preprocess_parameter_names.remove("self") preprocess_parameter_names.sort() valid_processor_keys = image_processor._valid_processor_keys valid_processor_keys.sort() self.assertEqual(preprocess_parameter_names, valid_processor_keys) is_tested = True # validation done by @filter_out_non_signature_kwargs decorator if hasattr(image_processor.preprocess, "_filter_out_non_signature_kwargs"): if hasattr(self.image_processor_tester, "prepare_image_inputs"): inputs = self.image_processor_tester.prepare_image_inputs() elif hasattr(self.image_processor_tester, "prepare_video_inputs"): inputs = self.image_processor_tester.prepare_video_inputs() else: self.skipTest(reason="No valid input preparation method found") with warnings.catch_warnings(record=True) as raised_warnings: warnings.simplefilter("always") image_processor(inputs, extra_argument=True) messages = " ".join([str(w.message) for w in raised_warnings]) self.assertGreaterEqual(len(raised_warnings), 1) self.assertIn("extra_argument", messages) is_tested = True if not is_tested: self.skipTest(reason="No validation found for `preprocess` method") def test_override_instance_attributes_does_not_affect_other_instances(self): if self.fast_image_processing_class is None: self.skipTest( "Only testing fast image processor, as most slow processors break this test and are to be deprecated" ) image_processing_class = self.fast_image_processing_class image_processor_1 = image_processing_class() image_processor_2 = image_processing_class() if not (hasattr(image_processor_1, "size") and isinstance(image_processor_1.size, dict)) or not ( hasattr(image_processor_1, "image_mean") and isinstance(image_processor_1.image_mean, list) ): self.skipTest( reason="Skipping test as the image processor does not have dict size or list image_mean attributes" ) original_size_2 = deepcopy(image_processor_2.size) for key in image_processor_1.size: image_processor_1.size[key] = -1 
modified_copied_size_1 = deepcopy(image_processor_1.size) original_image_mean_2 = deepcopy(image_processor_2.image_mean) image_processor_1.image_mean[0] = -1 modified_copied_image_mean_1 = deepcopy(image_processor_1.image_mean) # check that the original attributes of the second instance are not affected self.assertEqual(image_processor_2.size, original_size_2) self.assertEqual(image_processor_2.image_mean, original_image_mean_2) for key in image_processor_2.size: image_processor_2.size[key] = -2 image_processor_2.image_mean[0] = -2 # check that the modified attributes of the first instance are not affected by the second instance self.assertEqual(image_processor_1.size, modified_copied_size_1) self.assertEqual(image_processor_1.image_mean, modified_copied_image_mean_1) @slow @require_torch_accelerator @require_vision @pytest.mark.torch_compile_test def test_can_compile_fast_image_processor(self): if self.fast_image_processing_class is None: self.skipTest("Skipping compilation test as fast image processor is not defined") if version.parse(torch.__version__) < version.parse("2.3"): self.skipTest(reason="This test requires torch >= 2.3 to run.") torch.compiler.reset() input_image = torch.randint(0, 255, (3, 224, 224), dtype=torch.uint8) image_processor = self.fast_image_processing_class(**self.image_processor_dict) output_eager = image_processor(input_image, device=torch_device, return_tensors="pt") image_processor = torch.compile(image_processor, mode="reduce-overhead") output_compiled = image_processor(input_image, device=torch_device, return_tensors="pt") self._assert_slow_fast_tensors_equivalence( output_eager.pixel_values, output_compiled.pixel_values, atol=1e-4, rtol=1e-4, mean_atol=1e-5 ) class AnnotationFormatTestMixin: # this mixin adds a test to assert that usages of the # to-be-deprecated `AnnotionFormat` continue to be # supported for the time being def test_processor_can_use_legacy_annotation_format(self): image_processor_dict = self.image_processor_tester.prepare_image_processor_dict() fixtures_path = pathlib.Path(__file__).parent / "fixtures" / "tests_samples" / "COCO" with open(fixtures_path / "coco_annotations.txt") as f: detection_target = json.loads(f.read()) detection_annotations = {"image_id": 39769, "annotations": detection_target} detection_params = { "images": Image.open(fixtures_path / "000000039769.png"), "annotations": detection_annotations, "return_tensors": "pt", } with open(fixtures_path / "coco_panoptic_annotations.txt") as f: panoptic_target = json.loads(f.read()) panoptic_annotations = {"file_name": "000000039769.png", "image_id": 39769, "segments_info": panoptic_target} masks_path = pathlib.Path(fixtures_path / "coco_panoptic") panoptic_params = { "images": Image.open(fixtures_path / "000000039769.png"), "annotations": panoptic_annotations, "return_tensors": "pt", "masks_path": masks_path, } test_cases = [ ("coco_detection", detection_params), ("coco_panoptic", panoptic_params), (AnnotionFormat.COCO_DETECTION, detection_params), (AnnotionFormat.COCO_PANOPTIC, panoptic_params), (AnnotationFormat.COCO_DETECTION, detection_params), (AnnotationFormat.COCO_PANOPTIC, panoptic_params), ] def _compare(a, b) -> None: if isinstance(a, (dict, BatchFeature)): self.assertEqual(a.keys(), b.keys()) for k, v in a.items(): _compare(v, b[k]) elif isinstance(a, list): self.assertEqual(len(a), len(b)) for idx in range(len(a)): _compare(a[idx], b[idx]) elif isinstance(a, torch.Tensor): torch.testing.assert_close(a, b, rtol=1e-3, atol=1e-3) elif isinstance(a, str): 
self.assertEqual(a, b) for annotation_format, params in test_cases: with self.subTest(annotation_format): image_processor_params = {**image_processor_dict, **{"format": annotation_format}} image_processor_first = self.image_processing_class(**image_processor_params) with tempfile.TemporaryDirectory() as tmpdirname: image_processor_first.save_pretrained(tmpdirname) image_processor_second = self.image_processing_class.from_pretrained(tmpdirname) # check the 'format' key exists and that the dicts of the # first and second processors are equal self.assertIn("format", image_processor_first.to_dict().keys()) self.assertEqual(image_processor_second.to_dict(), image_processor_first.to_dict()) # perform encoding using both processors and compare # the resulting BatchFeatures first_encoding = image_processor_first(**params) second_encoding = image_processor_second(**params) _compare(first_encoding, second_encoding)
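A minimal sketch of the `BatchFeature` casting behaviour exercised at the top of this test file, using hypothetical in-memory tensors: floating-point `pixel_values` follow the requested dtype, while integer fields such as `input_ids` keep `torch.long`.

```python
import torch
from transformers import BatchFeature

# Hypothetical encoding mimicking what an image processor plus tokenizer would return.
encoding = BatchFeature(
    {
        "pixel_values": torch.rand(2, 3, 224, 224),
        "input_ids": torch.tensor([[1, 2, 3], [4, 5, 6]]),
    }
)

encoding = encoding.to(torch.float16)
assert encoding.pixel_values.dtype == torch.float16  # floating-point tensors are cast
assert encoding.input_ids.dtype == torch.long  # integer tensors are left untouched
```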
transformers/tests/test_image_processing_common.py/0
{ "file_path": "transformers/tests/test_image_processing_common.py", "repo_id": "transformers", "token_count": 14175 }
612
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import shutil import tempfile import unittest from unittest.mock import patch from transformers import ( DefaultFlowCallback, EarlyStoppingCallback, IntervalStrategy, PrinterCallback, ProgressCallback, Trainer, TrainerCallback, TrainerState, TrainingArguments, is_torch_available, ) from transformers.testing_utils import require_torch from transformers.trainer_callback import ExportableState if is_torch_available(): from transformers.trainer import DEFAULT_CALLBACKS, TRAINER_STATE_NAME from .test_trainer import RegressionDataset, RegressionModelConfig, RegressionPreTrainedModel class MyTestExportableCallback(TrainerCallback, ExportableState): def __init__(self, my_test_state="test"): self.my_test_state = my_test_state def state(self): return { "args": { "my_test_state": self.my_test_state, }, } class MyTestTrainerCallback(TrainerCallback): "A callback that registers the events that goes through." def __init__(self, my_test_state="test"): self.events = [] self.my_test_state = my_test_state def on_init_end(self, args, state, control, **kwargs): self.events.append("on_init_end") def on_train_begin(self, args, state, control, **kwargs): self.events.append("on_train_begin") def on_train_end(self, args, state, control, **kwargs): self.events.append("on_train_end") def on_epoch_begin(self, args, state, control, **kwargs): self.events.append("on_epoch_begin") def on_epoch_end(self, args, state, control, **kwargs): self.events.append("on_epoch_end") def on_step_begin(self, args, state, control, **kwargs): self.events.append("on_step_begin") def on_pre_optimizer_step(self, args, state, control, **kwargs): self.events.append("on_pre_optimizer_step") def on_optimizer_step(self, args, state, control, **kwargs): self.events.append("on_optimizer_step") def on_step_end(self, args, state, control, **kwargs): self.events.append("on_step_end") def on_evaluate(self, args, state, control, **kwargs): self.events.append("on_evaluate") def on_predict(self, args, state, control, **kwargs): self.events.append("on_predict") def on_save(self, args, state, control, **kwargs): self.events.append("on_save") def on_log(self, args, state, control, **kwargs): self.events.append("on_log") def on_prediction_step(self, args, state, control, **kwargs): self.events.append("on_prediction_step") @require_torch class TrainerCallbackTest(unittest.TestCase): def setUp(self): self.output_dir = tempfile.mkdtemp() def tearDown(self): shutil.rmtree(self.output_dir) def get_trainer(self, a=0, b=0, train_len=64, eval_len=64, callbacks=None, disable_tqdm=False, **kwargs): # disable_tqdm in TrainingArguments has a flaky default since it depends on the level of logging. We make sure # its set to False since the tests later on depend on its value. 
train_dataset = RegressionDataset(length=train_len) eval_dataset = RegressionDataset(length=eval_len) config = RegressionModelConfig(a=a, b=b) model = RegressionPreTrainedModel(config) args = TrainingArguments(self.output_dir, disable_tqdm=disable_tqdm, report_to=[], **kwargs) return Trainer( model, args, train_dataset=train_dataset, eval_dataset=eval_dataset, callbacks=callbacks, ) def check_callbacks_equality(self, cbs1, cbs2): self.assertEqual(len(cbs1), len(cbs2)) # Order doesn't matter cbs1 = sorted(cbs1, key=lambda cb: cb.__name__ if isinstance(cb, type) else cb.__class__.__name__) cbs2 = sorted(cbs2, key=lambda cb: cb.__name__ if isinstance(cb, type) else cb.__class__.__name__) for cb1, cb2 in zip(cbs1, cbs2): if isinstance(cb1, type) and isinstance(cb2, type): self.assertEqual(cb1, cb2) elif isinstance(cb1, type) and not isinstance(cb2, type): self.assertEqual(cb1, cb2.__class__) elif not isinstance(cb1, type) and isinstance(cb2, type): self.assertEqual(cb1.__class__, cb2) else: self.assertEqual(cb1, cb2) def get_expected_events(self, trainer): expected_events = ["on_init_end", "on_train_begin"] step = 0 train_dl_len = len(trainer.get_eval_dataloader()) evaluation_events = ["on_prediction_step"] * len(trainer.get_eval_dataloader()) + ["on_log", "on_evaluate"] for _ in range(trainer.state.num_train_epochs): expected_events.append("on_epoch_begin") for _ in range(train_dl_len): step += 1 expected_events += ["on_step_begin", "on_pre_optimizer_step", "on_optimizer_step", "on_step_end"] if step % trainer.args.logging_steps == 0: expected_events.append("on_log") if trainer.args.eval_strategy == IntervalStrategy.STEPS and step % trainer.args.eval_steps == 0: expected_events += evaluation_events.copy() if step % trainer.args.save_steps == 0 or step == trainer.state.max_steps: expected_events.append("on_save") expected_events.append("on_epoch_end") if trainer.args.eval_strategy == IntervalStrategy.EPOCH: expected_events += evaluation_events.copy() expected_events += ["on_log", "on_train_end"] return expected_events def test_init_callback(self): trainer = self.get_trainer() expected_callbacks = DEFAULT_CALLBACKS.copy() + [ProgressCallback] self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) # Callbacks passed at init are added to the default callbacks trainer = self.get_trainer(callbacks=[MyTestTrainerCallback]) expected_callbacks.append(MyTestTrainerCallback) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) # TrainingArguments.disable_tqdm controls if use ProgressCallback or PrinterCallback trainer = self.get_trainer(disable_tqdm=True) expected_callbacks = DEFAULT_CALLBACKS.copy() + [PrinterCallback] self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) def test_add_remove_callback(self): expected_callbacks = DEFAULT_CALLBACKS.copy() + [ProgressCallback] trainer = self.get_trainer() # We can add, pop, or remove by class name trainer.remove_callback(DefaultFlowCallback) expected_callbacks.remove(DefaultFlowCallback) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) trainer = self.get_trainer() cb = trainer.pop_callback(DefaultFlowCallback) self.assertEqual(cb.__class__, DefaultFlowCallback) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) trainer.add_callback(DefaultFlowCallback) expected_callbacks.insert(0, DefaultFlowCallback) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) # We 
can also add, pop, or remove by instance trainer = self.get_trainer() cb = trainer.callback_handler.callbacks[0] trainer.remove_callback(cb) expected_callbacks.remove(DefaultFlowCallback) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) trainer = self.get_trainer() cb1 = trainer.callback_handler.callbacks[0] cb2 = trainer.pop_callback(cb1) self.assertEqual(cb1, cb2) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) trainer.add_callback(cb1) expected_callbacks.insert(0, DefaultFlowCallback) self.check_callbacks_equality(trainer.callback_handler.callbacks, expected_callbacks) def test_event_flow(self): import warnings # XXX: for now ignore scatter_gather warnings in this test since it's not relevant to what's being tested with warnings.catch_warnings(): warnings.simplefilter(action="ignore", category=UserWarning) trainer = self.get_trainer(callbacks=[MyTestTrainerCallback]) trainer.train() events = trainer.callback_handler.callbacks[-2].events self.assertEqual(events, self.get_expected_events(trainer)) # Independent log/save/eval trainer = self.get_trainer(callbacks=[MyTestTrainerCallback], logging_steps=5) trainer.train() events = trainer.callback_handler.callbacks[-2].events self.assertEqual(events, self.get_expected_events(trainer)) trainer = self.get_trainer(callbacks=[MyTestTrainerCallback], save_steps=5) trainer.train() events = trainer.callback_handler.callbacks[-2].events self.assertEqual(events, self.get_expected_events(trainer)) trainer = self.get_trainer(callbacks=[MyTestTrainerCallback], eval_steps=5, eval_strategy="steps") trainer.train() events = trainer.callback_handler.callbacks[-2].events self.assertEqual(events, self.get_expected_events(trainer)) trainer = self.get_trainer(callbacks=[MyTestTrainerCallback], eval_strategy="epoch") trainer.train() events = trainer.callback_handler.callbacks[-2].events self.assertEqual(events, self.get_expected_events(trainer)) # A bit of everything trainer = self.get_trainer( callbacks=[MyTestTrainerCallback], logging_steps=3, save_steps=10, eval_steps=5, eval_strategy="steps", ) trainer.train() events = trainer.callback_handler.callbacks[-2].events self.assertEqual(events, self.get_expected_events(trainer)) # warning should be emitted for duplicated callbacks with patch("transformers.trainer_callback.logger.warning") as warn_mock: trainer = self.get_trainer( callbacks=[MyTestTrainerCallback, MyTestTrainerCallback], ) assert str(MyTestTrainerCallback) in warn_mock.call_args[0][0] def test_stateful_callbacks(self): # Use something with non-defaults cb = EarlyStoppingCallback(early_stopping_patience=5, early_stopping_threshold=0.2) trainer = self.get_trainer( callbacks=[cb], load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, ) trainer.train() # Create a new trainer with defaults trainer = self.get_trainer( callbacks=[EarlyStoppingCallback()], load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, restore_callback_states_from_checkpoint=True, ) # Load it back in and verify values checkpoint = os.path.join(self.output_dir, "checkpoint-2") trainer.train(resume_from_checkpoint=checkpoint) cb = [ callback for callback in trainer.callback_handler.callbacks if isinstance(callback, EarlyStoppingCallback) ][0] assert cb.early_stopping_patience == 5 assert cb.early_stopping_threshold == 0.2 def test_stateful_mixed_callbacks(self): # Use two callbacks, one 
stateful one not # Use something with non-defaults cbs = [ MyTestTrainerCallback(my_test_state="another value"), EarlyStoppingCallback(early_stopping_patience=5, early_stopping_threshold=0.2), ] trainer = self.get_trainer( callbacks=cbs, load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, ) trainer.train() # Create a new trainer with defaults trainer = self.get_trainer( callbacks=[EarlyStoppingCallback(), MyTestTrainerCallback()], load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, restore_callback_states_from_checkpoint=True, ) # Load it back in and verify values checkpoint = os.path.join(self.output_dir, "checkpoint-2") trainer.train(resume_from_checkpoint=checkpoint) cbs = [ callback for callback in trainer.callback_handler.callbacks if isinstance(callback, (EarlyStoppingCallback, MyTestTrainerCallback)) ] assert len(cbs) == 2 my_test, early_stopping = cbs assert early_stopping.early_stopping_patience == 5 assert early_stopping.early_stopping_threshold == 0.2 assert my_test.my_test_state == "test" def test_stateful_duplicate_callbacks(self): # Use something with non-defaults cbs = [MyTestExportableCallback("first"), MyTestExportableCallback("second")] trainer = self.get_trainer( callbacks=cbs, load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, ) trainer.train() # Create a new trainer with defaults trainer = self.get_trainer( callbacks=[MyTestExportableCallback(), MyTestExportableCallback()], load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, restore_callback_states_from_checkpoint=True, ) # Load it back in and verify values checkpoint = os.path.join(self.output_dir, "checkpoint-2") trainer.train(resume_from_checkpoint=checkpoint) cbs = [ callback for callback in trainer.callback_handler.callbacks if isinstance(callback, MyTestExportableCallback) ] assert len(cbs) == 2 assert cbs[0].my_test_state == "first" assert cbs[1].my_test_state == "second" def test_missing_stateful_callback(self): cb = EarlyStoppingCallback() trainer = self.get_trainer( callbacks=[cb], load_best_model_at_end=True, save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, ) trainer.train() # Create a new trainer with defaults trainer = self.get_trainer( save_strategy="steps", eval_strategy="steps", save_steps=2, eval_steps=2, max_steps=2, restore_callback_states_from_checkpoint=True, ) # Load it back in and verify values checkpoint = os.path.join(self.output_dir, "checkpoint-2") # warning should be emitted for not-present callbacks with patch("transformers.trainer.logger.warning") as warn_mock: trainer.train(resume_from_checkpoint=checkpoint) assert "EarlyStoppingCallback" in warn_mock.call_args[0][0] def test_stateful_control(self): trainer = self.get_trainer( max_steps=2, save_strategy="steps", save_steps=2, ) trainer.train() # Load it back in and verify values trainer = self.get_trainer(max_steps=2, restore_callback_states_from_checkpoint=True) checkpoint = os.path.join(self.output_dir, "checkpoint-2") trainer.state = TrainerState.load_from_json(os.path.join(checkpoint, TRAINER_STATE_NAME)) trainer._load_callback_state() assert trainer.control.should_training_stop def test_no_duplicate_save_on_epoch_save_strategy(self): times_saved = 0 class OnEndCallback(TrainerCallback): def on_step_end(self, args: TrainingArguments, state: 
TrainerState, control, **kwargs): nonlocal times_saved if control.should_save: times_saved += 1 def on_epoch_end(self, args: TrainingArguments, state: TrainerState, control, **kwargs): nonlocal times_saved if control.should_save: times_saved += 1 trainer = self.get_trainer(max_steps=2, save_strategy="epoch", callbacks=[OnEndCallback]) trainer.train() assert times_saved == 1
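For reference, a minimal sketch of how a custom callback like the ones exercised above is attached in user code; the model and dataset are placeholders that would come from your own setup.

```python
from transformers import Trainer, TrainerCallback, TrainingArguments

class LossPrinterCallback(TrainerCallback):
    """Print the loss every time the Trainer logs (illustrative only)."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and "loss" in logs:
            print(f"step {state.global_step}: loss={logs['loss']:.4f}")

args = TrainingArguments(output_dir="out", logging_steps=10, report_to=[])
trainer = Trainer(
    model=model,                  # placeholder: any PreTrainedModel
    args=args,
    train_dataset=train_dataset,  # placeholder: any torch Dataset
    callbacks=[LossPrinterCallback()],
)
trainer.train()
```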
transformers/tests/trainer/test_trainer_callback.py/0
{ "file_path": "transformers/tests/trainer/test_trainer_callback.py", "repo_id": "transformers", "token_count": 7812 }
613
# Copyright 2025 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import builtins import io import re import unittest from transformers.testing_utils import require_read_token, require_torch from transformers.utils.attention_visualizer import AttentionMaskVisualizer ANSI_RE = re.compile(r"\x1b\[[0-9;]*m") def _normalize(s: str) -> str: # drop ANSI (colors may be disabled on CI), normalize line endings, # and strip trailing spaces without touching alignment inside lines s = ANSI_RE.sub("", s) s = s.replace("\r\n", "\n").replace("\r", "\n") return "\n".join(line.rstrip() for line in s.split("\n")).strip() @require_torch class AttentionMaskVisualizerTester(unittest.TestCase): """Test suite for AttentionMaskVisualizer""" @require_read_token def test_paligemma_multimodal_visualization(self): """Test AttentionMaskVisualizer with PaliGemma multimodal model""" model_name = "hf-internal-testing/namespace_google_repo_name_paligemma-3b-pt-224" input_text = "<img> What is in this image?" buf = io.StringIO() orig_print = builtins.print def _print(*args, **kwargs): kwargs.setdefault("file", buf) orig_print(*args, **kwargs) try: builtins.print = _print visualizer = AttentionMaskVisualizer(model_name) visualizer(input_text) finally: builtins.print = orig_print output = buf.getvalue() expected_output = """ ########################################################################################################################################################################################################################################## ## Attention visualization for \033[1mpaligemma:hf-internal-testing/namespace_google_repo_name_paligemma-3b-pt-224\033[0m PaliGemmaModel ## ########################################################################################################################################################################################################################################## \033[92m■\033[0m: i == j (diagonal) \033[93m■\033[0m: token_type_ids Attention Matrix \033[93m'<image>'\033[0m: 0 \033[93m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | \033[93m'<image>'\033[0m: 1 \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | \033[93m'<image>'\033[0m: 2 \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | \033[93m'<image>'\033[0m: 3 \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | \033[93m'<image>'\033[0m: 4 \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m \033[93m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | '<bos>' : 5 ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | '▁What' : 6 ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | '▁is' : 7 ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | '▁in' : 8 ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ | '▁this' : 9 ■ ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ | '▁image' : 10 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ | '?' 
: 11 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ | '\\n' : 12 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ | '<eos>' : 13 ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m | ########################################################################################################################################################################################################################################## """ # noqa self.assertEqual(_normalize(output), _normalize(expected_output)) @require_read_token def test_llama_text_only_visualization(self): """Test AttentionMaskVisualizer with Llama text-only model""" model_name = "hf-internal-testing/namespace_meta-llama_repo_name_Llama-2-7b-hf" input_text = "Plants create energy through a process known as" buf = io.StringIO() orig_print = builtins.print def _print(*args, **kwargs): kwargs.setdefault("file", buf) orig_print(*args, **kwargs) try: builtins.print = _print visualizer = AttentionMaskVisualizer(model_name) visualizer(input_text) finally: builtins.print = orig_print output = buf.getvalue() expected_output = """ ########################################################################################################################################################################################################## ## Attention visualization for \033[1mllama:hf-internal-testing/namespace_meta-llama_repo_name_Llama-2-7b-hf\033[0m LlamaModel ## ########################################################################################################################################################################################################## \033[92m■\033[0m: i == j (diagonal) \033[93m■\033[0m: token_type_ids Attention Matrix '▁Pl' : 0 \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | 'ants' : 1 ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | '▁create' : 2 ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ ⬚ | '▁energy' : 3 ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ ⬚ | '▁through': 4 ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ ⬚ | '▁a' : 5 ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ ⬚ | '▁process': 6 ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ ⬚ | '▁known' : 7 ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m ⬚ | '▁as' : 8 ■ ■ ■ ■ ■ ■ ■ ■ \033[92m■\033[0m | ########################################################################################################################################################################################################## """ # noqa self.assertEqual(_normalize(output), _normalize(expected_output))
transformers/tests/utils/test_attention_visualizer.py/0
{ "file_path": "transformers/tests/utils/test_attention_visualizer.py", "repo_id": "transformers", "token_count": 2995 }
614
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import json import os import sys import tempfile import unittest from argparse import Namespace from dataclasses import dataclass, field from enum import Enum from pathlib import Path from typing import Literal, Optional, Union, get_args, get_origin from unittest.mock import patch import yaml from transformers import HfArgumentParser, TrainingArguments from transformers.hf_argparser import make_choice_type_function, string_to_bool from transformers.testing_utils import require_torch # Since Python 3.10, we can use the builtin `|` operator for Union types # See PEP 604: https://peps.python.org/pep-0604 is_python_no_less_than_3_10 = sys.version_info >= (3, 10) def list_field(default=None, metadata=None): return field(default_factory=lambda: default, metadata=metadata) @dataclass class BasicExample: foo: int bar: float baz: str flag: bool @dataclass class WithDefaultExample: foo: int = 42 baz: str = field(default="toto", metadata={"help": "help message"}) @dataclass class WithDefaultBoolExample: foo: bool = False baz: bool = True opt: Optional[bool] = None class BasicEnum(Enum): titi = "titi" toto = "toto" class MixedTypeEnum(Enum): titi = "titi" toto = "toto" fourtytwo = 42 @dataclass class EnumExample: foo: BasicEnum = "toto" def __post_init__(self): self.foo = BasicEnum(self.foo) @dataclass class MixedTypeEnumExample: foo: MixedTypeEnum = "toto" def __post_init__(self): self.foo = MixedTypeEnum(self.foo) @dataclass class OptionalExample: foo: Optional[int] = None bar: Optional[float] = field(default=None, metadata={"help": "help message"}) baz: Optional[str] = None ces: Optional[list[str]] = list_field(default=[]) des: Optional[list[int]] = list_field(default=[]) @dataclass class ListExample: foo_int: list[int] = list_field(default=[]) bar_int: list[int] = list_field(default=[1, 2, 3]) foo_str: list[str] = list_field(default=["Hallo", "Bonjour", "Hello"]) foo_float: list[float] = list_field(default=[0.1, 0.2, 0.3]) @dataclass class RequiredExample: required_list: list[int] = field() required_str: str = field() required_enum: BasicEnum = field() def __post_init__(self): self.required_enum = BasicEnum(self.required_enum) @dataclass class StringLiteralAnnotationExample: foo: int required_enum: "BasicEnum" = field() opt: "Optional[bool]" = None baz: "str" = field(default="toto", metadata={"help": "help message"}) foo_str: "list[str]" = list_field(default=["Hallo", "Bonjour", "Hello"]) if is_python_no_less_than_3_10: @dataclass class WithDefaultBoolExamplePep604: foo: bool = False baz: bool = True opt: bool | None = None @dataclass class OptionalExamplePep604: foo: int | None = None bar: float | None = field(default=None, metadata={"help": "help message"}) baz: str | None = None ces: list[str] | None = list_field(default=[]) des: list[int] | None = list_field(default=[]) class HfArgumentParserTest(unittest.TestCase): def argparsersEqual(self, a: argparse.ArgumentParser, b: 
argparse.ArgumentParser): """ Small helper to check pseudo-equality of parsed arguments on `ArgumentParser` instances. """ self.assertEqual(len(a._actions), len(b._actions)) for x, y in zip(a._actions, b._actions): xx = {k: v for k, v in vars(x).items() if k != "container"} yy = {k: v for k, v in vars(y).items() if k != "container"} # Choices with mixed type have custom function as "type" # So we need to compare results directly for equality if xx.get("choices") and yy.get("choices"): for expected_choice in yy["choices"] + xx["choices"]: self.assertEqual(xx["type"](expected_choice), yy["type"](expected_choice)) del xx["type"], yy["type"] self.assertEqual(xx, yy) def test_00_basic(self): parser = HfArgumentParser(BasicExample) expected = argparse.ArgumentParser() expected.add_argument("--foo", type=int, required=True) expected.add_argument("--bar", type=float, required=True) expected.add_argument("--baz", type=str, required=True) expected.add_argument("--flag", type=string_to_bool, default=False, const=True, nargs="?") self.argparsersEqual(parser, expected) args = ["--foo", "1", "--baz", "quux", "--bar", "0.5"] (example,) = parser.parse_args_into_dataclasses(args, look_for_args_file=False) self.assertFalse(example.flag) def test_01_with_default(self): parser = HfArgumentParser(WithDefaultExample) expected = argparse.ArgumentParser() expected.add_argument("--foo", default=42, type=int) expected.add_argument("--baz", default="toto", type=str, help="help message") self.argparsersEqual(parser, expected) def test_02_with_default_bool(self): expected = argparse.ArgumentParser() expected.add_argument("--foo", type=string_to_bool, default=False, const=True, nargs="?") expected.add_argument("--baz", type=string_to_bool, default=True, const=True, nargs="?") # A boolean no_* argument always has to come after its "default: True" regular counter-part # and its default must be set to False expected.add_argument("--no_baz", "--no-baz", action="store_false", default=False, dest="baz") expected.add_argument("--opt", type=string_to_bool, default=None) dataclass_types = [WithDefaultBoolExample] if is_python_no_less_than_3_10: dataclass_types.append(WithDefaultBoolExamplePep604) for dataclass_type in dataclass_types: parser = HfArgumentParser(dataclass_type) self.argparsersEqual(parser, expected) args = parser.parse_args([]) self.assertEqual(args, Namespace(foo=False, baz=True, opt=None)) args = parser.parse_args(["--foo", "--no_baz"]) self.assertEqual(args, Namespace(foo=True, baz=False, opt=None)) args = parser.parse_args(["--foo", "--no-baz"]) self.assertEqual(args, Namespace(foo=True, baz=False, opt=None)) args = parser.parse_args(["--foo", "--baz"]) self.assertEqual(args, Namespace(foo=True, baz=True, opt=None)) args = parser.parse_args(["--foo", "True", "--baz", "True", "--opt", "True"]) self.assertEqual(args, Namespace(foo=True, baz=True, opt=True)) args = parser.parse_args(["--foo", "False", "--baz", "False", "--opt", "False"]) self.assertEqual(args, Namespace(foo=False, baz=False, opt=False)) def test_03_with_enum(self): parser = HfArgumentParser(MixedTypeEnumExample) expected = argparse.ArgumentParser() expected.add_argument( "--foo", default="toto", choices=["titi", "toto", 42], type=make_choice_type_function(["titi", "toto", 42]), ) self.argparsersEqual(parser, expected) args = parser.parse_args([]) self.assertEqual(args.foo, "toto") enum_ex = parser.parse_args_into_dataclasses([])[0] self.assertEqual(enum_ex.foo, MixedTypeEnum.toto) args = parser.parse_args(["--foo", "titi"]) 
self.assertEqual(args.foo, "titi") enum_ex = parser.parse_args_into_dataclasses(["--foo", "titi"])[0] self.assertEqual(enum_ex.foo, MixedTypeEnum.titi) args = parser.parse_args(["--foo", "42"]) self.assertEqual(args.foo, 42) enum_ex = parser.parse_args_into_dataclasses(["--foo", "42"])[0] self.assertEqual(enum_ex.foo, MixedTypeEnum.fourtytwo) def test_04_with_literal(self): @dataclass class LiteralExample: foo: Literal["titi", "toto", 42] = "toto" parser = HfArgumentParser(LiteralExample) expected = argparse.ArgumentParser() expected.add_argument( "--foo", default="toto", choices=("titi", "toto", 42), type=make_choice_type_function(["titi", "toto", 42]), ) self.argparsersEqual(parser, expected) args = parser.parse_args([]) self.assertEqual(args.foo, "toto") args = parser.parse_args(["--foo", "titi"]) self.assertEqual(args.foo, "titi") args = parser.parse_args(["--foo", "42"]) self.assertEqual(args.foo, 42) def test_05_with_list(self): parser = HfArgumentParser(ListExample) expected = argparse.ArgumentParser() expected.add_argument("--foo_int", "--foo-int", nargs="+", default=[], type=int) expected.add_argument("--bar_int", "--bar-int", nargs="+", default=[1, 2, 3], type=int) expected.add_argument("--foo_str", "--foo-str", nargs="+", default=["Hallo", "Bonjour", "Hello"], type=str) expected.add_argument("--foo_float", "--foo-float", nargs="+", default=[0.1, 0.2, 0.3], type=float) self.argparsersEqual(parser, expected) args = parser.parse_args([]) self.assertEqual( args, Namespace(foo_int=[], bar_int=[1, 2, 3], foo_str=["Hallo", "Bonjour", "Hello"], foo_float=[0.1, 0.2, 0.3]), ) args = parser.parse_args("--foo_int 1 --bar_int 2 3 --foo_str a b c --foo_float 0.1 0.7".split()) self.assertEqual(args, Namespace(foo_int=[1], bar_int=[2, 3], foo_str=["a", "b", "c"], foo_float=[0.1, 0.7])) args = parser.parse_args("--foo-int 1 --bar-int 2 3 --foo-str a b c --foo-float 0.1 0.7".split()) self.assertEqual(args, Namespace(foo_int=[1], bar_int=[2, 3], foo_str=["a", "b", "c"], foo_float=[0.1, 0.7])) def test_06_with_optional(self): expected = argparse.ArgumentParser() expected.add_argument("--foo", default=None, type=int) expected.add_argument("--bar", default=None, type=float, help="help message") expected.add_argument("--baz", default=None, type=str) expected.add_argument("--ces", nargs="+", default=[], type=str) expected.add_argument("--des", nargs="+", default=[], type=int) dataclass_types = [OptionalExample] if is_python_no_less_than_3_10: dataclass_types.append(OptionalExamplePep604) for dataclass_type in dataclass_types: parser = HfArgumentParser(dataclass_type) self.argparsersEqual(parser, expected) args = parser.parse_args([]) self.assertEqual(args, Namespace(foo=None, bar=None, baz=None, ces=[], des=[])) args = parser.parse_args("--foo 12 --bar 3.14 --baz 42 --ces a b c --des 1 2 3".split()) self.assertEqual(args, Namespace(foo=12, bar=3.14, baz="42", ces=["a", "b", "c"], des=[1, 2, 3])) def test_07_with_required(self): parser = HfArgumentParser(RequiredExample) expected = argparse.ArgumentParser() expected.add_argument("--required_list", "--required-list", nargs="+", type=int, required=True) expected.add_argument("--required_str", "--required-str", type=str, required=True) expected.add_argument( "--required_enum", "--required-enum", type=make_choice_type_function(["titi", "toto"]), choices=["titi", "toto"], required=True, ) self.argparsersEqual(parser, expected) def test_08_with_string_literal_annotation(self): parser = HfArgumentParser(StringLiteralAnnotationExample) expected = 
argparse.ArgumentParser() expected.add_argument("--foo", type=int, required=True) expected.add_argument( "--required_enum", "--required-enum", type=make_choice_type_function(["titi", "toto"]), choices=["titi", "toto"], required=True, ) expected.add_argument("--opt", type=string_to_bool, default=None) expected.add_argument("--baz", default="toto", type=str, help="help message") expected.add_argument("--foo_str", "--foo-str", nargs="+", default=["Hallo", "Bonjour", "Hello"], type=str) self.argparsersEqual(parser, expected) def test_09_parse_dict(self): parser = HfArgumentParser(BasicExample) args_dict = { "foo": 12, "bar": 3.14, "baz": "42", "flag": True, } parsed_args = parser.parse_dict(args_dict)[0] args = BasicExample(**args_dict) self.assertEqual(parsed_args, args) def test_10_parse_dict_extra_key(self): parser = HfArgumentParser(BasicExample) args_dict = { "foo": 12, "bar": 3.14, "baz": "42", "flag": True, "extra": 42, } self.assertRaises(ValueError, parser.parse_dict, args_dict, allow_extra_keys=False) def test_11_parse_json(self): parser = HfArgumentParser(BasicExample) args_dict_for_json = { "foo": 12, "bar": 3.14, "baz": "42", "flag": True, } with tempfile.TemporaryDirectory() as tmp_dir: temp_local_path = os.path.join(tmp_dir, "temp_json") os.mkdir(temp_local_path) with open(temp_local_path + ".json", "w+") as f: json.dump(args_dict_for_json, f) parsed_args = parser.parse_json_file(Path(temp_local_path + ".json"))[0] args = BasicExample(**args_dict_for_json) self.assertEqual(parsed_args, args) def test_12_parse_yaml(self): parser = HfArgumentParser(BasicExample) args_dict_for_yaml = { "foo": 12, "bar": 3.14, "baz": "42", "flag": True, } with tempfile.TemporaryDirectory() as tmp_dir: temp_local_path = os.path.join(tmp_dir, "temp_yaml") os.mkdir(temp_local_path) with open(temp_local_path + ".yaml", "w+") as f: yaml.dump(args_dict_for_yaml, f) parsed_args = parser.parse_yaml_file(Path(temp_local_path + ".yaml"))[0] args = BasicExample(**args_dict_for_yaml) self.assertEqual(parsed_args, args) def test_13_valid_dict_annotation(self): """ Tests to make sure that `dict` based annotations are correctly made in the `TrainingArguments`. If this fails, a type annotation change is needed on a new input """ base_list = TrainingArguments._VALID_DICT_FIELDS.copy() args = TrainingArguments # First find any annotations that contain `dict` fields = args.__dataclass_fields__ raw_dict_fields = [] optional_dict_fields = [] for field in fields.values(): # First verify raw dict if field.type is dict: raw_dict_fields.append(field) # Next check for `Union` or `Optional` elif get_origin(field.type) == Union: if any(arg is dict for arg in get_args(field.type)): optional_dict_fields.append(field) # First check: anything in `raw_dict_fields` is very bad self.assertEqual( len(raw_dict_fields), 0, f"Found invalid raw `dict` types in the `TrainingArgument` typings, which are {raw_dict_fields}. " "This leads to issues with the CLI. Please turn this into `typing.Optional[dict,str]`", ) # Next check raw annotations for field in optional_dict_fields: args = get_args(field.type) # These should be returned as `dict`, `str`, ... # we only care about the first two self.assertIn( dict, args, f"Expected field `{field.name}` to have a type signature of at least `typing.Union[dict,str,...]` for CLI compatibility, but `dict` not found. Please fix this.", ) self.assertIn( str, args, f"Expected field `{field.name}` to have a type signature of at least `typing.Union[dict,str,...]` for CLI compatibility, but `str` not found. 
Please fix this.", ) # Second check: anything in `optional_dict_fields` is bad if it's not in `base_list` for field in optional_dict_fields: self.assertIn( field.name, base_list, f"Optional dict field `{field.name}` is not in the base list of valid fields. Please add it to `TrainingArguments._VALID_DICT_FIELDS`", ) @require_torch def test_14_valid_dict_input_parsing(self): with tempfile.TemporaryDirectory() as tmp_dir: args = TrainingArguments( output_dir=tmp_dir, accelerator_config='{"split_batches": "True", "gradient_accumulation_kwargs": {"num_steps": 2}}', ) self.assertEqual(args.accelerator_config.split_batches, True) self.assertEqual(args.accelerator_config.gradient_accumulation_kwargs["num_steps"], 2) def test_15_integration_training_args(self): parser = HfArgumentParser(TrainingArguments) self.assertIsNotNone(parser) @require_torch @patch("sys.argv", ["test.py", "--accelerator_config", '{"gradient_accumulation_kwargs": {"num_steps": 2}}']) def test_16_cli_input_parsing(self): parser = HfArgumentParser(TrainingArguments) training_args = parser.parse_args_into_dataclasses()[0] self.assertEqual(training_args.accelerator_config.gradient_accumulation_kwargs["num_steps"], 2)
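A minimal sketch of the end-user pattern these tests exercise: declare a dataclass, let `HfArgumentParser` build the CLI, and parse straight into instances. The `DataArguments` fields below are illustrative.

```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser, TrainingArguments

@dataclass
class DataArguments:
    dataset_name: str = field(default="imdb", metadata={"help": "Dataset to load."})
    max_samples: int = field(default=1000, metadata={"help": "Truncate the training set."})

parser = HfArgumentParser((DataArguments, TrainingArguments))
data_args, training_args = parser.parse_args_into_dataclasses(
    ["--output_dir", "out", "--max_samples", "500"]
)
print(data_args.max_samples, training_args.output_dir)  # 500 out
```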
transformers/tests/utils/test_hf_argparser.py/0
{ "file_path": "transformers/tests/utils/test_hf_argparser.py", "repo_id": "transformers", "token_count": 8139 }
615
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import importlib.metadata
import sys

from transformers.testing_utils import TestCasePlus
from transformers.utils.versions import require_version, require_version_core


numpy_ver = importlib.metadata.version("numpy")
python_ver = ".".join([str(x) for x in sys.version_info[:3]])


class DependencyVersionCheckTest(TestCasePlus):
    def test_core(self):
        # lt + different version strings
        require_version_core("numpy<1000.4.5")
        require_version_core("numpy<1000.4")
        require_version_core("numpy<1000")

        # le
        require_version_core("numpy<=1000.4.5")
        require_version_core(f"numpy<={numpy_ver}")

        # eq
        require_version_core(f"numpy=={numpy_ver}")

        # ne
        require_version_core("numpy!=1000.4.5")

        # ge
        require_version_core("numpy>=1.0")
        require_version_core("numpy>=1.0.0")
        require_version_core(f"numpy>={numpy_ver}")

        # gt
        require_version_core("numpy>1.0.0")

        # mix
        require_version_core("numpy>1.0.0,<1000")

        # requirement w/o version
        require_version_core("numpy")

        # unmet requirements due to version conflict
        for req in ["numpy==1.0.0", "numpy>=1000.0.0", f"numpy<{numpy_ver}"]:
            try:
                require_version_core(req)
            except ImportError as e:
                self.assertIn(f"{req} is required", str(e))
                self.assertIn("but found", str(e))

        # unmet requirements due to missing module
        for req in ["numpipypie>1", "numpipypie2"]:
            try:
                require_version_core(req)
            except importlib.metadata.PackageNotFoundError as e:
                self.assertIn(f"The '{req}' distribution was not found and is required by this application", str(e))
                self.assertIn("Try: `pip install transformers -U`", str(e))

        # bogus requirements formats:
        # 1. whole thing
        for req in ["numpy??1.0.0", "numpy1.0.0"]:
            try:
                require_version_core(req)
            except ValueError as e:
                self.assertIn("requirement needs to be in the pip package format", str(e))
        # 2. only operators
        for req in ["numpy=1.0.0", "numpy == 1.00", "numpy<>1.0.0", "numpy><1.00", "numpy>>1.0.0"]:
            try:
                require_version_core(req)
            except ValueError as e:
                self.assertIn("need one of ", str(e))

    def test_python(self):
        # matching requirement
        require_version("python>=3.9.0")

        # not matching requirements
        for req in ["python>9.9.9", "python<3.0.0"]:
            try:
                require_version_core(req)
            except ImportError as e:
                self.assertIn(f"{req} is required", str(e))
                self.assertIn(f"but found python=={python_ver}", str(e))
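A minimal sketch of how these helpers are typically used outside of tests to guard dependencies; the package names and version pins below are illustrative, not requirements of the library.

```python
from transformers.utils.versions import require_version, require_version_core

# Core dependency: the error message automatically suggests `pip install transformers -U`.
require_version_core("numpy>=1.17")

# Optional dependency with a custom hint appended to the ImportError message.
require_version("datasets>=2.0.0", "To fix: pip install -U datasets")
```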
transformers/tests/utils/test_versions_utils.py/0
{ "file_path": "transformers/tests/utils/test_versions_utils.py", "repo_id": "transformers", "token_count": 1539 }
616
import argparse import difflib import glob import logging import multiprocessing import os import shutil import subprocess from functools import partial from io import StringIO from create_dependency_mapping import find_priority_list # Console for rich printing from modular_model_converter import convert_modular_file from rich.console import Console from rich.syntax import Syntax logging.basicConfig() logging.getLogger().setLevel(logging.ERROR) console = Console() BACKUP_EXT = ".modular_backup" def process_file( modular_file_path, generated_modeling_content, file_type="modeling_", show_diff=True, ): file_name_prefix = file_type.split("*")[0] file_name_suffix = file_type.split("*")[-1] if "*" in file_type else "" file_path = modular_file_path.replace("modular_", f"{file_name_prefix}_").replace(".py", f"{file_name_suffix}.py") # Read the actual modeling file with open(file_path, "r", encoding="utf-8") as modeling_file: content = modeling_file.read() output_buffer = StringIO(generated_modeling_content[file_type][0]) output_buffer.seek(0) output_content = output_buffer.read() diff = difflib.unified_diff( output_content.splitlines(), content.splitlines(), fromfile=f"{file_path}_generated", tofile=f"{file_path}", lineterm="", ) diff_list = list(diff) # Check for differences if diff_list: # first save the copy of the original file, to be able to restore it later if os.path.exists(file_path): shutil.copy(file_path, file_path + BACKUP_EXT) # we always save the generated content, to be able to update dependant files with open(file_path, "w", encoding="utf-8", newline="\n") as modeling_file: modeling_file.write(generated_modeling_content[file_type][0]) console.print(f"[bold blue]Overwritten {file_path} with the generated content.[/bold blue]") if show_diff: console.print(f"\n[bold red]Differences found between the generated code and {file_path}:[/bold red]\n") diff_text = "\n".join(diff_list) syntax = Syntax(diff_text, "diff", theme="ansi_dark", line_numbers=True) console.print(syntax) return 1 else: console.print(f"[bold green]No differences found for {file_path}.[/bold green]") return 0 def compare_files(modular_file_path, show_diff=True): # Generate the expected modeling content generated_modeling_content = convert_modular_file(modular_file_path) diff = 0 for file_type in generated_modeling_content: diff += process_file(modular_file_path, generated_modeling_content, file_type, show_diff) return diff def get_models_in_diff(): """ Finds all models that have been modified in the diff. Returns: A set containing the names of the models that have been modified (e.g. {'llama', 'whisper'}). """ fork_point_sha = subprocess.check_output("git merge-base main HEAD".split()).decode("utf-8") modified_files = ( subprocess.check_output(f"git diff --diff-filter=d --name-only {fork_point_sha}".split()) .decode("utf-8") .split() ) # Matches both modelling files and tests relevant_modified_files = [x for x in modified_files if "/models/" in x and x.endswith(".py")] model_names = set() for file_path in relevant_modified_files: model_name = file_path.split("/")[-2] model_names.add(model_name) return model_names def guaranteed_no_diff(modular_file_path, dependencies, models_in_diff): """ Returns whether it is guaranteed to have no differences between the modular file and the modeling file. Model is in the diff -> not guaranteed to have no differences Dependency is in the diff -> not guaranteed to have no differences Otherwise -> guaranteed to have no differences Args: modular_file_path: The path to the modular file. 
dependencies: A dictionary containing the dependencies of each modular file. models_in_diff: A set containing the names of the models that have been modified. Returns: A boolean indicating whether the model (code and tests) is guaranteed to have no differences. """ model_name = modular_file_path.rsplit("modular_", 1)[1].replace(".py", "") if model_name in models_in_diff: return False for dep in dependencies[modular_file_path]: # two possible patterns: `transformers.models.model_name.(...)` or `model_name.(...)` dependency_model_name = dep.split(".")[-2] if dependency_model_name in models_in_diff: return False return True if __name__ == "__main__": parser = argparse.ArgumentParser(description="Compare modular_xxx.py files with modeling_xxx.py files.") parser.add_argument( "--files", default=["all"], type=str, nargs="+", help="List of modular_xxx.py files to compare." ) parser.add_argument( "--fix_and_overwrite", action="store_true", help="Overwrite the modeling_xxx.py file if differences are found." ) parser.add_argument("--check_all", action="store_true", help="Check all files, not just the ones in the diff.") parser.add_argument( "--num_workers", default=-1, type=int, help="The number of workers to run. Default is -1, which means the number of CPU cores.", ) args = parser.parse_args() if args.files == ["all"]: args.files = glob.glob("src/transformers/models/**/modular_*.py", recursive=True) if args.num_workers == -1: args.num_workers = multiprocessing.cpu_count() # Assuming there is a topological sort on the dependency mapping: if the file being checked and its dependencies # are not in the diff, then there it is guaranteed to have no differences. If no models are in the diff, then this # script will do nothing. current_branch = subprocess.check_output(["git", "branch", "--show-current"], text=True).strip() if current_branch == "main": console.print( "[bold red]You are developing on the main branch. We cannot identify the list of changed files and will have to check all files. This may take a while.[/bold red]" ) models_in_diff = {file_path.split("/")[-2] for file_path in args.files} else: models_in_diff = get_models_in_diff() if not models_in_diff and not args.check_all: console.print( "[bold green]No models files or model tests in the diff, skipping modular checks[/bold green]" ) exit(0) skipped_models = set() non_matching_files = [] ordered_files, dependencies = find_priority_list(args.files) flat_ordered_files = [item for sublist in ordered_files for item in sublist] # ordered_files is a *sorted* list of lists of filepaths # - files from the first list do NOT depend on other files # - files in the second list depend on files from the first list # - files in the third list depend on files from the second and (optionally) the first list # - ... 
and so on # files (models) within the same list are *independent* of each other; # we start applying modular conversion to each list in parallel, starting from the first list console.print(f"[bold yellow]Number of dependency levels: {len(ordered_files)}[/bold yellow]") console.print(f"[bold yellow]Files per level: {tuple([len(x) for x in ordered_files])}[/bold yellow]") try: for dependency_level_files in ordered_files: # Filter files guaranteed no diff files_to_check = [] for file_path in dependency_level_files: if not args.check_all and guaranteed_no_diff(file_path, dependencies, models_in_diff): skipped_models.add(file_path.split("/")[-2]) # save model folder name else: files_to_check.append(file_path) if not files_to_check: continue # Process files with diff num_workers = min(args.num_workers, len(files_to_check)) with multiprocessing.Pool(num_workers) as p: is_changed_flags = p.map( partial(compare_files, show_diff=not args.fix_and_overwrite), files_to_check, ) # Collect changed files and their original paths for is_changed, file_path in zip(is_changed_flags, files_to_check): if is_changed: non_matching_files.append(file_path) # Update changed models, after each round of conversions # (save model folder name) models_in_diff.add(file_path.split("/")[-2]) finally: # Restore overwritten files by modular (if needed) backup_files = glob.glob("**/*" + BACKUP_EXT, recursive=True) for backup_file_path in backup_files: overwritten_path = backup_file_path.replace(BACKUP_EXT, "") if not args.fix_and_overwrite and os.path.exists(overwritten_path): shutil.copy(backup_file_path, overwritten_path) os.remove(backup_file_path) if non_matching_files and not args.fix_and_overwrite: diff_models = set(file_path.split("/")[-2] for file_path in non_matching_files) # noqa models_str = "\n - " + "\n - ".join(sorted(diff_models)) raise ValueError(f"Some diff and their modeling code did not match. Models in diff:{models_str}") if skipped_models: console.print( f"[bold green]Skipped {len(skipped_models)} models and their dependencies that are not in the diff: " f"{', '.join(sorted(skipped_models))}[/bold green]" )
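A minimal sketch of the core conversion step the script above wraps, assuming it is run from the repository root where `modular_model_converter` is importable; the modular path is a hypothetical example.

```python
from modular_model_converter import convert_modular_file

modular_path = "src/transformers/models/my_model/modular_my_model.py"  # hypothetical path
generated = convert_modular_file(modular_path)  # maps a file type such as "modeling_*" to generated source

for file_type, contents in generated.items():
    # contents[0] holds the generated code for that file type (same indexing as process_file above)
    print(file_type, len(contents[0].splitlines()), "generated lines")
```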
transformers/utils/check_modular_conversion.py/0
{ "file_path": "transformers/utils/check_modular_conversion.py", "repo_id": "transformers", "token_count": 3803 }
617
import argparse
import math
import traceback

import dateutil.parser as date_parser
import requests


def extract_time_from_single_job(job):
    """Extract time info from a single job in a GitHub Actions workflow run"""
    job_info = {}

    start = job["started_at"]
    end = job["completed_at"]

    start_datetime = date_parser.parse(start)
    end_datetime = date_parser.parse(end)

    duration_in_min = round((end_datetime - start_datetime).total_seconds() / 60.0)

    job_info["started_at"] = start
    job_info["completed_at"] = end
    job_info["duration"] = duration_in_min

    return job_info


def get_job_time(workflow_run_id, token=None):
    """Extract time info for all jobs in a GitHub Actions workflow run"""
    headers = None
    if token is not None:
        headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}

    url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100"
    result = requests.get(url, headers=headers).json()
    job_time = {}

    try:
        job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]})
        pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100)
        for i in range(pages_to_iterate_over):
            result = requests.get(url + f"&page={i + 2}", headers=headers).json()
            job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]})
        return job_time
    except Exception:
        print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}")

    return {}


if __name__ == "__main__":
    r"""
    Example:

    python get_github_job_time.py --workflow_run_id 2945609517
    """

    parser = argparse.ArgumentParser()
    # Required parameters
    parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.")
    args = parser.parse_args()

    job_time = get_job_time(args.workflow_run_id)
    job_time = dict(sorted(job_time.items(), key=lambda item: item[1]["duration"], reverse=True))

    for k, v in job_time.items():
        print(f"{k}: {v['duration']}")
transformers/utils/get_github_job_time.py/0
{ "file_path": "transformers/utils/get_github_job_time.py", "repo_id": "transformers", "token_count": 835 }
618
"""An internal script to process `new_failures_with_bad_commit.json` produced by `utils/check_bad_commit.py`. This is used by `.github/workflows/check_failed_model_tests.yml` to produce a slack report of the following form ``` <{url}|New failed tests> { "GH_ydshieh": { "vit": 1 } } ``` """ import json import os from collections import Counter from copy import deepcopy from get_previous_daily_ci import get_last_daily_ci_run from huggingface_hub import HfApi if __name__ == "__main__": api = HfApi() job_name = os.environ.get("JOB_NAME") with open("new_failures_with_bad_commit.json") as fp: data = json.load(fp) with open(f"ci_results_{job_name}/job_links.json") as fp: job_links = json.load(fp) # TODO: extend team_members = [ "ydshieh", "zucchini-nlp", "ArthurZucker", "gante", "LysandreJik", "molbap", "qubvel", "Rocketknight1", "muellerzr", "SunMarc", ] # Counting the number of failures grouped by authors new_data = {} for model, model_result in data.items(): for device, failed_tests in model_result.items(): for failed_test in failed_tests: author = failed_test["author"] if author not in team_members: author = failed_test["merged_by"] if author not in new_data: new_data[author] = Counter() new_data[author].update([model]) for author in new_data: new_data[author] = dict(new_data[author]) # Group by author new_data_full = {author: deepcopy(data) for author in new_data} for author, _data in new_data_full.items(): for model, model_result in _data.items(): for device, failed_tests in model_result.items(): # prepare job_link and add it to each entry of new failed test information. # need to change from `single-gpu` to `single` and same for `multi-gpu` to match `job_link`. key = model if list(job_links.keys()) == [job_name]: key = job_name job_link = job_links[key][device.replace("-gpu", "")] failed_tests = [x for x in failed_tests if x["author"] == author or x["merged_by"] == author] for x in failed_tests: x.update({"job_link": job_link}) model_result[device] = failed_tests _data[model] = {k: v for k, v in model_result.items() if len(v) > 0} new_data_full[author] = {k: v for k, v in _data.items() if len(v) > 0} # Upload to Hub and get the url # if it is not a scheduled run, upload the reports to a subfolder under `report_repo_folder` report_repo_subfolder = "" if os.getenv("GITHUB_EVENT_NAME") != "schedule": report_repo_subfolder = f"{os.getenv('GITHUB_RUN_NUMBER')}-{os.getenv('GITHUB_RUN_ID')}" report_repo_subfolder = f"runs/{report_repo_subfolder}" workflow_run = get_last_daily_ci_run( token=os.environ["ACCESS_REPO_INFO_TOKEN"], workflow_run_id=os.getenv("GITHUB_RUN_ID") ) workflow_run_created_time = workflow_run["created_at"] report_repo_folder = workflow_run_created_time.split("T")[0] if report_repo_subfolder: report_repo_folder = f"{report_repo_folder}/{report_repo_subfolder}" report_repo_id = os.getenv("REPORT_REPO_ID") with open("new_failures_with_bad_commit_grouped_by_authors.json", "w") as fp: json.dump(new_data_full, fp, ensure_ascii=False, indent=4) commit_info = api.upload_file( path_or_fileobj="new_failures_with_bad_commit_grouped_by_authors.json", path_in_repo=f"{report_repo_folder}/ci_results_{job_name}/new_failures_with_bad_commit_grouped_by_authors.json", repo_id=report_repo_id, repo_type="dataset", token=os.environ.get("TRANSFORMERS_CI_RESULTS_UPLOAD_TOKEN", None), ) url = f"https://huggingface.co/datasets/{report_repo_id}/raw/{commit_info.oid}/{report_repo_folder}/ci_results_{job_name}/new_failures_with_bad_commit_grouped_by_authors.json" # Add `GH_` prefix as keyword mention 
output = {} for author, item in new_data.items(): author = f"GH_{author}" output[author] = item report = f"<{url}|New failed tests>\\n\\n" report += json.dumps(output, indent=4).replace('"', '\\"').replace("\n", "\\n") print(report)
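The grouping logic above boils down to attributing each failed test to a team member (the PR author if they are on the team, otherwise the person who merged the PR) and counting failures per model with a `Counter`. The following standalone sketch uses toy data and hypothetical author names, not the real CI payload, purely to illustrate that step:

```python
# Minimal sketch of the grouping step above (illustrative data only).
from collections import Counter

data = {
    "vit": {"single": [{"author": "external_user", "merged_by": "ydshieh"}]},
    "bert": {"single": [{"author": "ydshieh", "merged_by": "ydshieh"}]},
}
team_members = ["ydshieh"]

new_data = {}
for model, model_result in data.items():
    for device, failed_tests in model_result.items():
        for failed_test in failed_tests:
            # Attribute the failure to the PR author if they are a team member,
            # otherwise to the team member who merged the PR.
            author = failed_test["author"]
            if author not in team_members:
                author = failed_test["merged_by"]
            new_data.setdefault(author, Counter()).update([model])

# Add the `GH_` prefix used for Slack keyword mentions.
print({f"GH_{author}": dict(counts) for author, counts in new_data.items()})
# -> {'GH_ydshieh': {'vit': 1, 'bert': 1}}
```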
transformers/utils/process_bad_commit_report.py/0
{ "file_path": "transformers/utils/process_bad_commit_report.py", "repo_id": "transformers", "token_count": 2021 }
619
from transformers import ProcessorMixin


class CustomProcessor(ProcessorMixin):
    feature_extractor_class = "AutoFeatureExtractor"
    tokenizer_class = "AutoTokenizer"
transformers/utils/test_module/custom_processing.py/0
{ "file_path": "transformers/utils/test_module/custom_processing.py", "repo_id": "transformers", "token_count": 51 }
620
# Distributing Training

<Tip warning={true}>
Section under construction. Feel free to contribute!
</Tip>

## Multi-GPU Training with TRL

The trainers in TRL use [🤗 Accelerate](https://github.com/huggingface/accelerate) to enable distributed training across multiple GPUs or nodes. To do so, first create an [🤗 Accelerate](https://github.com/huggingface/accelerate) config file by running

```bash
accelerate config
```

and answering the questions according to your multi-GPU / multi-node setup. You can then launch distributed training by running:

```bash
accelerate launch train.py
```

We also provide config files in the [examples folder](https://github.com/huggingface/trl/tree/main/examples/accelerate_configs) that can be used as templates. To use these templates, simply pass the path to the config file when launching a job, e.g.:

```shell
accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml train.py <SCRIPT_ARGS>
```

This automatically distributes the workload across all available GPUs.

Under the hood, [🤗 Accelerate](https://github.com/huggingface/accelerate) creates one model per GPU. Each process:

- Processes its own batch of data
- Computes the loss and gradients for that batch
- Shares gradient updates across all GPUs

![](https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/multi_gpu.png)

The effective batch size is calculated as:

$$
\text{Batch Size} = \text{per\_device\_train\_batch\_size} \times \text{num\_devices} \times \text{gradient\_accumulation\_steps}
$$

To maintain a consistent batch size when scaling to multiple GPUs, make sure to update `per_device_train_batch_size` and `gradient_accumulation_steps` accordingly. For example, the following configurations are equivalent and should yield the same results (see the sketch after this table):

| Number of GPUs | Per device batch size | Gradient accumulation steps | Comments |
| --- | --- | --- | --- |
| 1 | 32 | 1 | Possibly high memory usage, but faster training |
| 1 | 4 | 8 | Lower memory usage, slower training |
| 8 | 4 | 1 | Multi-GPU to get the best of both worlds |

<Tip>
Having one model per GPU can lead to high memory usage, which may not be feasible for large models or low-memory GPUs. In such cases, you can leverage [DeepSpeed](https://github.com/deepspeedai/DeepSpeed), which provides optimizations like model sharding, the Zero Redundancy Optimizer, mixed-precision training, and offloading to CPU or NVMe. Check out our [DeepSpeed Integration](deepspeed_integration) guide for more details.
</Tip>

## Multi-Node Training

We're working on a guide for multi-node training. Stay tuned! 🚀
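As a quick sanity check of the formula above, here is a small plain-Python sketch (no TRL or Accelerate required) that evaluates the three configurations from the table and confirms they all give an effective batch size of 32:

```python
# Effective batch size = per-device batch size x number of devices x accumulation steps.
def effective_batch_size(per_device_train_batch_size, num_devices, gradient_accumulation_steps):
    return per_device_train_batch_size * num_devices * gradient_accumulation_steps

configs = [
    {"num_devices": 1, "per_device_train_batch_size": 32, "gradient_accumulation_steps": 1},
    {"num_devices": 1, "per_device_train_batch_size": 4, "gradient_accumulation_steps": 8},
    {"num_devices": 8, "per_device_train_batch_size": 4, "gradient_accumulation_steps": 1},
]
for cfg in configs:
    print(cfg, "->", effective_batch_size(**cfg))  # 32 in every case
```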
trl/docs/source/distributing_training.md/0
{ "file_path": "trl/docs/source/distributing_training.md", "repo_id": "trl", "token_count": 736 }
621
# Speeding Up Training

<Tip warning={true}>
Section under construction. Feel free to contribute!
</Tip>

## vLLM for fast generation in online methods

Online methods such as GRPO or Online DPO require the model to generate completions, which is often a slow process and can significantly impact training time. To speed up generation, you can use [vLLM](https://github.com/vllm-project/vllm), a library that enables fast generation through, among other things, PagedAttention. TRL's online trainers support vLLM, greatly improving training speed.

To use [vLLM](https://github.com/vllm-project/vllm), first install it using:

```bash
pip install vllm
```

or

```bash
pip install "trl[vllm]"
```

<hfoptions id="vllm examples">
<hfoption id="Online DPO">

Then, enable it by passing `use_vllm=True` in the training arguments.

```python
from trl import OnlineDPOConfig

training_args = OnlineDPOConfig(..., use_vllm=True)
```

</hfoption>
<hfoption id="GRPO">

First, start a vLLM server by running:

```bash
trl vllm-serve --model <model_name>
```

Then, run the training script and pass `use_vllm=True` in the training arguments.

```python
from trl import GRPOConfig

training_args = GRPOConfig(..., use_vllm=True)
```

You can customize the server configuration by passing additional arguments. For more information, see [vLLM integration](vllm_integration).

<Tip warning={true}>

When using vLLM, ensure that the GPUs assigned for training and generation are separate to avoid resource conflicts. For instance, if you plan to use 4 GPUs for training and another 4 for vLLM generation, you can specify GPU allocation using `CUDA_VISIBLE_DEVICES`.

Set GPUs **0-3** for vLLM generation:

```sh
CUDA_VISIBLE_DEVICES=0,1,2,3 trl vllm-serve --model <model_name>
```

And GPUs **4-7** for training:

```sh
CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch train.py
```

</Tip>

</hfoption>
</hfoptions>
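If you prefer to drive the launch from Python rather than the shell, the following sketch shows one way to pin the training process to GPUs 4-7 before invoking `accelerate launch`; the device split mirrors the shell example above and is purely illustrative (the `train.py` path is an assumption):

```python
# Illustrative only: launch training on GPUs 4-7, leaving 0-3 free for the vLLM server.
import os
import subprocess

env = os.environ.copy()
env["CUDA_VISIBLE_DEVICES"] = "4,5,6,7"  # GPUs reserved for training
subprocess.run(["accelerate", "launch", "train.py"], check=True, env=env)
```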
trl/docs/source/speeding_up_training.md/0
{ "file_path": "trl/docs/source/speeding_up_training.md", "repo_id": "trl", "token_count": 627 }
622
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import ast from dataclasses import dataclass, field from typing import Optional from datasets import load_dataset from huggingface_hub import ModelCard from transformers import HfArgumentParser @dataclass class ScriptArguments: r""" Arguments for the script. Args: push_to_hub (`bool`, *optional*, defaults to `False`): Whether to push the dataset to the Hugging Face Hub. repo_id (`str`, *optional*, defaults to `"trl-lib/llava-instruct-mix"`): Hugging Face repository ID to push the dataset to. dataset_num_proc (`int` or `None`, *optional*, defaults to `None`): Number of workers to use for dataset processing. """ push_to_hub: bool = field( default=False, metadata={"help": "Whether to push the dataset to the Hugging Face Hub."}, ) repo_id: str = field( default="trl-lib/llava-instruct-mix", metadata={"help": "Hugging Face repository ID to push the dataset to."}, ) dataset_num_proc: Optional[int] = field( default=None, metadata={"help": "Number of workers to use for dataset processing."}, ) def process_example(example): messages = [] for message in ast.literal_eval(example["conversations"]): content = message["value"] content = content.replace("<image>", "").strip() role = "user" if message["from"] == "human" else "assistant" messages.append({"role": role, "content": content}) return {"messages": messages, "images": [example["image"]]} def filter_long_examples(example): total_length = sum(len(msg["content"]) for msg in example["messages"]) return total_length <= 1000 def split_prompt_completion(example): """ Splits the messages into a prompt and a completion. The last message is considered the completion. """ assert len(example["messages"]) > 1 example["prompt"] = example["messages"][:-1] example["completion"] = example["messages"][-1:] return example model_card = ModelCard(""" --- tags: [trl] --- # LLaVA Instruct Mix ## Summary The LLaVA Instruct Mix dataset is a processed version of [LLaVA Instruct Mix](https://huggingface.co/datasets/theblackcat102/llava-instruct-mix). ## Data Structure - **Format**: [Conversational](https://huggingface.co/docs/trl/main/dataset_formats#conversational) - **Type**: [Language-modeling](https://huggingface.co/docs/trl/main/dataset_formats#language-modeling) Columns: - `"images"`: The image associated with the text. - `"prompt"`: A list of messages that form the context for the conversation. - `"completion"`: The last message in the conversation, which is the model's response. This structure allows models to learn from the context of the conversation, enhancing their understanding of how to generate descriptive text based on visual inputs. ## Generation script The script used to generate this dataset can be found [here](https://github.com/huggingface/trl/blob/main/examples/datasets/llava_instruct_mix.py). 
""") if __name__ == "__main__": parser = HfArgumentParser(ScriptArguments) script_args = parser.parse_args_into_dataclasses()[0] dataset = load_dataset("theblackcat102/llava-instruct-mix", split="train", num_proc=script_args.dataset_num_proc) dataset = dataset.map( process_example, remove_columns=["conversations", "image"], num_proc=script_args.dataset_num_proc ) dataset = dataset.filter(filter_long_examples, num_proc=script_args.dataset_num_proc) dataset = dataset.map(split_prompt_completion, remove_columns=["messages"], num_proc=script_args.dataset_num_proc) if script_args.push_to_hub: dataset.push_to_hub(script_args.repo_id, num_proc=script_args.dataset_num_proc) model_card.push_to_hub(script_args.repo_id, repo_type="dataset")
trl/examples/datasets/llava_instruct_mix.py/0
{ "file_path": "trl/examples/datasets/llava_instruct_mix.py", "repo_id": "trl", "token_count": 1514 }
623
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import config
import torch
from torch.utils import benchmark
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_tokens(model, inputs):
    outputs = model.generate(
        **inputs,
        do_sample=False,
        max_new_tokens=64,
    )
    return outputs


def generate_tokens_with_assistance(model, inputs, assistant_early_exit):
    outputs = model.generate(
        **inputs,
        assistant_early_exit=assistant_early_exit,
        do_sample=False,
        max_new_tokens=64,
    )
    return outputs


if __name__ == "__main__":
    ckpt = config.hub_model_id

    model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto", torch_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(ckpt)

    prompt = "### Instruction: What are my alarms for the rest of the day?\n ### Response: "

    results = []
    label = "Generation Times"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    results.append(
        benchmark.Timer(
            stmt="generate_tokens(model, inputs)",
            setup="from __main__ import generate_tokens",
            globals={"model": model, "inputs": inputs},
            num_threads=torch.get_num_threads(),
            label=label,
            sub_label="no layer skip",
            description="generation",
        ).blocked_autorange()
    )

    for i in range(1, model.config.num_hidden_layers):
        results.append(
            benchmark.Timer(
                stmt="generate_tokens_with_assistance(model, inputs, assistant_early_exit)",
                # The setup import must match the function referenced in `stmt`.
                setup="from __main__ import generate_tokens_with_assistance",
                globals={"model": model, "assistant_early_exit": i, "inputs": inputs},
                num_threads=torch.get_num_threads(),
                label=label,
                sub_label=f"layer skip {i}",
                description="generation",
            ).blocked_autorange()
        )

    benchmark.Compare(results).print()
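The benchmarking pattern used above, one `benchmark.Timer` per configuration followed by `benchmark.Compare`, can be exercised without downloading a model. The sketch below applies it to a toy matmul just to show the API shape; the labels, tensor sizes, and thread counts are illustrative.

```python
# Minimal, self-contained sketch of the torch.utils.benchmark pattern used above.
import torch
from torch.utils import benchmark

x = torch.randn(256, 256)

results = [
    benchmark.Timer(
        stmt="x @ x",
        globals={"x": x},
        label="Toy benchmark",
        sub_label=f"{threads} threads",
        description="matmul",
        num_threads=threads,
    ).blocked_autorange()
    for threads in (1, torch.get_num_threads())
]

# Print a comparison table, as the layer-skip script does for its generation timings.
benchmark.Compare(results).print()
```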
trl/examples/research_projects/layer_skip/scripts/benchmark_layer_skip.py/0
{ "file_path": "trl/examples/research_projects/layer_skip/scripts/benchmark_layer_skip.py", "repo_id": "trl", "token_count": 1032 }
624
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # /// script # dependencies = [ # "trl @ git+https://github.com/huggingface/trl.git", # "Pillow>=9.4.0", # ] # /// """ Total Batch size = 128 = 4 (num_gpus) * 8 (per_device_batch) * 4 (accumulation steps) Feel free to reduce batch size or increasing truncated_rand_backprop_min to a higher value to reduce memory usage. CUDA_VISIBLE_DEVICES=0,1,2,3 python examples/scripts/alignprop.py \ --num_epochs=20 \ --train_gradient_accumulation_steps=4 \ --sample_num_steps=50 \ --train_batch_size=8 \ --tracker_project_name="stable_diffusion_training" \ --log_with="wandb" """ from dataclasses import dataclass, field import numpy as np from transformers import HfArgumentParser from trl import AlignPropConfig, AlignPropTrainer, DefaultDDPOStableDiffusionPipeline from trl.models.auxiliary_modules import aesthetic_scorer @dataclass class ScriptArguments: r""" Arguments for the script. Args: pretrained_model (`str`, *optional*, defaults to `"runwayml/stable-diffusion-v1-5"`): Pretrained model to use. pretrained_revision (`str`, *optional*, defaults to `"main"`): Pretrained model revision to use. hf_hub_model_id (`str`, *optional*, defaults to `"alignprop-finetuned-stable-diffusion"`): HuggingFace repo to save model weights to. hf_hub_aesthetic_model_id (`str`, *optional*, defaults to `"trl-lib/ddpo-aesthetic-predictor"`): Hugging Face model ID for aesthetic scorer model weights. hf_hub_aesthetic_model_filename (`str`, *optional*, defaults to `"aesthetic-model.pth"`): Hugging Face model filename for aesthetic scorer model weights. use_lora (`bool`, *optional*, defaults to `True`): Whether to use LoRA. 
""" pretrained_model: str = field( default="runwayml/stable-diffusion-v1-5", metadata={"help": "Pretrained model to use."} ) pretrained_revision: str = field(default="main", metadata={"help": "Pretrained model revision to use."}) hf_hub_model_id: str = field( default="alignprop-finetuned-stable-diffusion", metadata={"help": "HuggingFace repo to save model weights to."} ) hf_hub_aesthetic_model_id: str = field( default="trl-lib/ddpo-aesthetic-predictor", metadata={"help": "Hugging Face model ID for aesthetic scorer model weights."}, ) hf_hub_aesthetic_model_filename: str = field( default="aesthetic-model.pth", metadata={"help": "Hugging Face model filename for aesthetic scorer model weights."}, ) use_lora: bool = field(default=True, metadata={"help": "Whether to use LoRA."}) # list of example prompts to feed stable diffusion animals = [ "cat", "dog", "horse", "monkey", "rabbit", "zebra", "spider", "bird", "sheep", "deer", "cow", "goat", "lion", "frog", "chicken", "duck", "goose", "bee", "pig", "turkey", "fly", "llama", "camel", "bat", "gorilla", "hedgehog", "kangaroo", ] def prompt_fn(): return np.random.choice(animals), {} def image_outputs_logger(image_pair_data, global_step, accelerate_logger): # For the sake of this example, we will only log the last batch of images # and associated data result = {} images, prompts, _ = [image_pair_data["images"], image_pair_data["prompts"], image_pair_data["rewards"]] for i, image in enumerate(images[:4]): prompt = prompts[i] result[f"{prompt}"] = image.unsqueeze(0).float() accelerate_logger.log_images( result, step=global_step, ) if __name__ == "__main__": parser = HfArgumentParser((ScriptArguments, AlignPropConfig)) script_args, training_args = parser.parse_args_into_dataclasses() training_args.project_kwargs = { "logging_dir": "./logs", "automatic_checkpoint_naming": True, "total_limit": 5, "project_dir": "./save", } pipeline = DefaultDDPOStableDiffusionPipeline( script_args.pretrained_model, pretrained_model_revision=script_args.pretrained_revision, use_lora=script_args.use_lora, ) trainer = AlignPropTrainer( training_args, aesthetic_scorer(script_args.hf_hub_aesthetic_model_id, script_args.hf_hub_aesthetic_model_filename), prompt_fn, pipeline, image_samples_hook=image_outputs_logger, ) trainer.train() # Save and push to hub trainer.save_model(training_args.output_dir) if training_args.push_to_hub: trainer.push_to_hub(dataset_name=script_args.dataset_name)
trl/examples/scripts/alignprop.py/0
{ "file_path": "trl/examples/scripts/alignprop.py", "repo_id": "trl", "token_count": 2068 }
625
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # /// script # dependencies = [ # "trl @ git+https://github.com/huggingface/trl.git", # "peft", # ] # /// import shutil import torch from accelerate import PartialState from datasets import load_dataset from transformers import ( AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer, HfArgumentParser, ) from trl import ( ModelConfig, PPOConfig, PPOTrainer, ScriptArguments, get_kbit_device_map, get_peft_config, get_quantization_config, ) from trl.trainer.utils import SIMPLE_CHAT_TEMPLATE """ python -i examples/scripts/ppo/ppo.py \ --dataset_name trl-internal-testing/descriptiveness-sentiment-trl-style \ --dataset_train_split descriptiveness \ --learning_rate 3e-6 \ --output_dir models/minimal/ppo \ --per_device_train_batch_size 64 \ --gradient_accumulation_steps 1 \ --total_episodes 10000 \ --model_name_or_path EleutherAI/pythia-1b-deduped \ --missing_eos_penalty 1.0 accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml \ examples/scripts/ppo/ppo.py \ --dataset_name trl-internal-testing/descriptiveness-sentiment-trl-style \ --dataset_train_split descriptiveness \ --output_dir models/minimal/ppo \ --num_ppo_epochs 1 \ --num_mini_batches 1 \ --learning_rate 3e-6 \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 16 \ --total_episodes 10000 \ --model_name_or_path EleutherAI/pythia-1b-deduped \ --sft_model_path EleutherAI/pythia-1b-deduped \ --reward_model_path EleutherAI/pythia-1b-deduped \ --local_rollout_forward_batch_size 1 \ --missing_eos_penalty 1.0 """ if __name__ == "__main__": parser = HfArgumentParser((ScriptArguments, PPOConfig, ModelConfig)) script_args, training_args, model_args = parser.parse_args_into_dataclasses() # remove output_dir if exists shutil.rmtree(training_args.output_dir, ignore_errors=True) ################ # Model & Tokenizer ################ torch_dtype = ( model_args.torch_dtype if model_args.torch_dtype in ["auto", None] else getattr(torch, model_args.torch_dtype) ) quantization_config = get_quantization_config(model_args) model_kwargs = dict( revision=model_args.model_revision, attn_implementation=model_args.attn_implementation, torch_dtype=torch_dtype, device_map=get_kbit_device_map() if quantization_config is not None else None, quantization_config=quantization_config, ) tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, padding_side="left", trust_remote_code=model_args.trust_remote_code ) tokenizer.add_special_tokens({"pad_token": "[PAD]"}) if tokenizer.chat_template is None: tokenizer.chat_template = SIMPLE_CHAT_TEMPLATE value_model = AutoModelForSequenceClassification.from_pretrained( training_args.reward_model_path, trust_remote_code=model_args.trust_remote_code, num_labels=1 ) reward_model = AutoModelForSequenceClassification.from_pretrained( training_args.reward_model_path, trust_remote_code=model_args.trust_remote_code, num_labels=1 ) policy = 
AutoModelForCausalLM.from_pretrained( training_args.sft_model_path, trust_remote_code=model_args.trust_remote_code ) peft_config = get_peft_config(model_args) if peft_config is None: ref_policy = AutoModelForCausalLM.from_pretrained( training_args.sft_model_path, trust_remote_code=model_args.trust_remote_code ) else: ref_policy = None ################ # Dataset ################ dataset = load_dataset( script_args.dataset_name, name=script_args.dataset_config, split=script_args.dataset_train_split ) eval_samples = 100 train_dataset = dataset.select(range(len(dataset) - eval_samples)) eval_dataset = dataset.select(range(len(dataset) - eval_samples, len(dataset))) dataset_text_field = "prompt" def prepare_dataset(dataset, tokenizer): """pre-tokenize the dataset before training; only collate during training""" def tokenize(element): outputs = tokenizer( element[dataset_text_field], padding=False, ) return {"input_ids": outputs["input_ids"]} return dataset.map( tokenize, batched=True, remove_columns=dataset.column_names, num_proc=training_args.dataset_num_proc, ) # Compute that only on the main process for faster data processing. # see: https://github.com/huggingface/trl/pull/1255 with PartialState().local_main_process_first(): train_dataset = prepare_dataset(train_dataset, tokenizer) eval_dataset = prepare_dataset(eval_dataset, tokenizer) ################ # Training ################ trainer = PPOTrainer( args=training_args, processing_class=tokenizer, model=policy, ref_model=ref_policy, reward_model=reward_model, value_model=value_model, train_dataset=train_dataset, eval_dataset=eval_dataset, peft_config=peft_config, ) trainer.train() # Save and push to hub trainer.save_model(training_args.output_dir) if training_args.push_to_hub: trainer.push_to_hub(dataset_name=script_args.dataset_name) trainer.generate_completions()
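The `prepare_dataset` helper above follows a common pattern: tokenize the whole dataset once with `Dataset.map(batched=True)` and only collate during training. The sketch below reproduces that pattern with a toy, tokenizer-free stand-in so it runs without downloading a model; the hashing "tokenizer" is purely illustrative.

```python
# Pre-tokenize once, collate later: same map(batched=True) pattern as prepare_dataset above.
from datasets import Dataset

dataset = Dataset.from_dict({"prompt": ["hello world", "ppo pre-tokenizes prompts"]})

def tokenize(element):
    # Stand-in for tokenizer(element["prompt"], padding=False): map words to toy ids.
    return {"input_ids": [[hash(w) % 1000 for w in text.split()] for text in element["prompt"]]}

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
print(tokenized[0])  # {'input_ids': [...]}
```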
trl/examples/scripts/ppo/ppo.py/0
{ "file_path": "trl/examples/scripts/ppo/ppo.py", "repo_id": "trl", "token_count": 2452 }
626
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from dataclasses import dataclass, field from datasets import Dataset from transformers import HfArgumentParser @dataclass class ScriptArguments: r""" Arguments for the script. Args: test_size (`float`, *optional*, defaults to `0.1`): Fraction of the dataset to include in the test split. push_to_hub (`bool`, *optional*, defaults to `False`): Whether to push the dataset to the Hugging Face Hub. repo_id (`str`, *optional*, defaults to `"trl-internal-testing/harmony"`): Hugging Face repository ID to push the dataset to. """ test_size: float = field( default=0.1, metadata={"help": "Fraction of the dataset to include in the test split."}, ) push_to_hub: bool = field( default=False, metadata={"help": "Whether to push the dataset to the Hugging Face Hub."}, ) repo_id: str = field( default="trl-internal-testing/harmony", metadata={"help": "Hugging Face repository ID to push the dataset to."}, ) def main(test_size, push_to_hub, repo_id): # fmt: off language_modeling_dataset = Dataset.from_dict({ "messages": [ [{"role": "user", "content": "What is better than ugly?"}, {"role": "assistant", "thinking": "Beauty improves clarity and appeal.", "content": "Beautiful."}], [{"role": "user", "content": "What is better than implicit?"}, {"role": "assistant", "thinking": "Clarity avoids confusion.", "content": "Explicit."}], [{"role": "user", "content": "What is better than complex?"}, {"role": "assistant", "thinking": "Simplicity is easier to manage.", "content": "Simple."}], [{"role": "user", "content": "What is better than complicated?"}, {"role": "assistant", "thinking": "Complexity has structure; complication doesn’t.", "content": "Complex."}], [{"role": "user", "content": "What is better than nested?"}, {"role": "assistant", "thinking": "Flat structures are easier to read.", "content": "Flat."}], [{"role": "user", "content": "What is better than dense?"}, {"role": "assistant", "thinking": "Spacing aids understanding.", "content": "Sparse."}], [{"role": "user", "content": "What counts?"}, {"role": "assistant", "thinking": "Readable code lasts longer.", "content": "Readability."}], [{"role": "user", "content": "Are special cases enough to break the rules?"}, {"role": "assistant", "thinking": "Consistency is more valuable.", "content": "No, special cases aren't special enough to break the rules."}], [{"role": "user", "content": "What beats purity?"}, {"role": "assistant", "thinking": "Real-world needs outweigh ideals.", "content": "Practicality."}], [{"role": "user", "content": "What should never pass silently?"}, {"role": "assistant", "thinking": "Silent errors cause hidden issues.", "content": "Errors."}], [{"role": "user", "content": "When can errors pass silently?"}, {"role": "assistant", "thinking": "Silence must be intentional.", "content": "When explicitly silenced."}], [{"role": "user", "content": "What should you do in the face of ambiguity?"}, {"role": "assistant", "thinking": "Guessing leads to wrong 
choices.", "content": "Refuse the temptation to guess."}], [{"role": "user", "content": "How many ways should there be to do it?"}, {"role": "assistant", "thinking": "One way reduces confusion.", "content": "One, and preferably only one."}], [{"role": "user", "content": "For whom may the way not be obvious at first?"}, {"role": "assistant", "thinking": "A playful nod to Python’s creator.", "content": "Dutch."}], [{"role": "user", "content": "What is better than never?"}, {"role": "assistant", "thinking": "Action is better than endless delay.", "content": "Now is better than never."}], [{"role": "user", "content": "Is never better than *right* now?"}, {"role": "assistant", "thinking": "Rushed action can be worse.", "content": "Yes, often."}], [{"role": "user", "content": "What does it mean if the implementation is hard to explain?"}, {"role": "assistant", "thinking": "Good ideas should be clear.", "content": "It means it's a bad idea."}], [{"role": "user", "content": "What does it mean if the implementation is easy to explain?"}, {"role": "assistant", "thinking": "Clarity suggests soundness.", "content": "It means it may be a good idea."}], [{"role": "user", "content": "Any great ideas?"}, {"role": "assistant", "thinking": "Namespaces prevent conflicts.", "content": "Namespaces are one honking great idea."}] ], "chat_template_kwargs": [ {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, ] }) language_modeling_dataset = language_modeling_dataset.train_test_split(test_size=test_size, shuffle=False) if push_to_hub: language_modeling_dataset.push_to_hub(repo_id, config_name="language_modeling") language_modeling_dataset.save_to_disk(repo_id + "/language_modeling") prompt_completion_dataset = Dataset.from_dict({ "prompt": [ [{"role": "user", "content": 
"What is better than ugly?"}], [{"role": "user", "content": "What is better than implicit?"}], [{"role": "user", "content": "What is better than complex?"}], [{"role": "user", "content": "What is better than complicated?"}], [{"role": "user", "content": "What is better than nested?"}], [{"role": "user", "content": "What is better than dense?"}], [{"role": "user", "content": "What counts?"}], [{"role": "user", "content": "Are special cases enough to break the rules?"}], [{"role": "user", "content": "What beats purity?"}], [{"role": "user", "content": "What should never pass silently?"}], [{"role": "user", "content": "When can errors pass silently?"}], [{"role": "user", "content": "What should you do in the face of ambiguity?"}], [{"role": "user", "content": "How many ways should there be to do it?"}], [{"role": "user", "content": "For whom may the way not be obvious at first?"}], [{"role": "user", "content": "What is better than never?"}], [{"role": "user", "content": "Is never better than *right* now?"}], [{"role": "user", "content": "What does it mean if the implementation is hard to explain?"}], [{"role": "user", "content": "What does it mean if the implementation is easy to explain?"}], [{"role": "user", "content": "Any great ideas?"}], ], "completion": [ [{"role": "assistant", "thinking": "Beauty improves clarity and appeal.", "content": "Beautiful."}], [{"role": "assistant", "thinking": "Clarity avoids confusion.", "content": "Explicit."}], [{"role": "assistant", "thinking": "Simplicity is easier to manage.", "content": "Simple."}], [{"role": "assistant", "thinking": "Complexity has structure; complication doesn’t.", "content": "Complex."}], [{"role": "assistant", "thinking": "Flat structures are easier to read.", "content": "Flat."}], [{"role": "assistant", "thinking": "Spacing aids understanding.", "content": "Sparse."}], [{"role": "assistant", "thinking": "Readable code lasts longer.", "content": "Readability."}], [{"role": "assistant", "thinking": "Consistency is more valuable.", "content": "No, special cases aren't special enough to break the rules."}], [{"role": "assistant", "thinking": "Real-world needs outweigh ideals.", "content": "Practicality."}], [{"role": "assistant", "thinking": "Silent errors cause hidden issues.", "content": "Errors."}], [{"role": "assistant", "thinking": "Silence must be intentional.", "content": "When explicitly silenced."}], [{"role": "assistant", "thinking": "Guessing leads to wrong choices.", "content": "Refuse the temptation to guess."}], [{"role": "assistant", "thinking": "One way reduces confusion.", "content": "One, and preferably only one."}], [{"role": "assistant", "thinking": "A playful nod to Python’s creator.", "content": "Dutch."}], [{"role": "assistant", "thinking": "Action is better than endless delay.", "content": "Now is better than never."}], [{"role": "assistant", "thinking": "Rushed action can be worse.", "content": "Yes, often."}], [{"role": "assistant", "thinking": "Good ideas should be clear.", "content": "It means it's a bad idea."}], [{"role": "assistant", "thinking": "Clarity suggests soundness.", "content": "It means it may be a good idea."}], [{"role": "assistant", "thinking": "Namespaces prevent conflicts.", "content": "Namespaces are one honking great idea."}], ], "chat_template_kwargs": [ {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", 
"model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "medium", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "high", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, {"reasoning_effort": "low", "model_identity": "You are Tiny ChatGPT, a tiny language model."}, ] }) prompt_completion_dataset = prompt_completion_dataset.train_test_split(test_size=test_size, shuffle=False) if push_to_hub: prompt_completion_dataset.push_to_hub(repo_id, config_name="prompt_completion") prompt_completion_dataset.save_to_disk(repo_id + "/prompt_completion") # fmt: on if __name__ == "__main__": parser = HfArgumentParser(ScriptArguments) script_args = parser.parse_args_into_dataclasses()[0] main(script_args.test_size, script_args.push_to_hub, script_args.repo_id)
trl/scripts/generate_harmony_dataset.py/0
{ "file_path": "trl/scripts/generate_harmony_dataset.py", "repo_id": "trl", "token_count": 5240 }
627
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

MODELS_TO_TEST = [
    "trl-internal-testing/tiny-LlamaForCausalLM-3.2",
    "trl-internal-testing/tiny-MistralForCausalLM-0.2",
]

# We could have also not declared these variables but let's be verbose
PACKING_OPTIONS = [True, False]
GRADIENT_CHECKPOINTING_KWARGS = [None, {"use_reentrant": False}, {"use_reentrant": True}]
DEVICE_MAP_OPTIONS = [{"": 0}, "auto"]

DPO_LOSS_TYPES = ["sigmoid", "ipo"]
DPO_PRECOMPUTE_LOGITS = [True, False]
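Constants like these are typically fanned out into parameterized slow tests. The sketch below shows the usual `itertools.product` expansion; the loop body is a placeholder, not the actual test suite.

```python
# Expand every model/packing combination, as a parameterized test harness would.
import itertools

MODELS_TO_TEST = [
    "trl-internal-testing/tiny-LlamaForCausalLM-3.2",
    "trl-internal-testing/tiny-MistralForCausalLM-0.2",
]
PACKING_OPTIONS = [True, False]

for model_id, packing in itertools.product(MODELS_TO_TEST, PACKING_OPTIONS):
    print(f"would run SFT test with model={model_id}, packing={packing}")
```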
trl/tests/slow/testing_constants.py/0
{ "file_path": "trl/tests/slow/testing_constants.py", "repo_id": "trl", "token_count": 344 }
628
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from unittest.mock import patch import torch from datasets import load_dataset from parameterized import parameterized from transformers import ( AutoModelForCausalLM, AutoModelForImageTextToText, AutoModelForSequenceClassification, AutoTokenizer, ) from transformers.testing_utils import require_liger_kernel, require_peft, require_vision from transformers.utils import is_peft_available from trl import GRPOConfig, GRPOTrainer from trl.trainer.grpo_trainer import ( RepeatSampler, shuffle_sequence_dict, split_pixel_values_by_grid, split_tensor_dict, truncate_with_protected_tokens, unsplit_pixel_values_by_grid, ) from .testing_utils import TrlTestCase, require_vllm if is_peft_available(): from peft import LoraConfig, PeftModel class SplitTensorDictTester(TrlTestCase): def test_split_equal_chunks(self): x = torch.arange(12).reshape(6, 2) y = torch.arange(6).reshape(6, 1) tensor_dict = {"x": x, "y": y} result = split_tensor_dict(tensor_dict, 3) expected_x_chunks = torch.chunk(x, 3, dim=0) expected_y_chunks = torch.chunk(y, 3, dim=0) self.assertEqual(len(result), 3) for i in range(3): self.assertTrue(torch.equal(result[i]["x"], expected_x_chunks[i])) self.assertTrue(torch.equal(result[i]["y"], expected_y_chunks[i])) def test_with_none_tensor(self): x = torch.arange(12).reshape(6, 2) tensor_dict = {"x": x, "y": None} result = split_tensor_dict(tensor_dict, 2) expected_x_chunks = torch.chunk(x, 2, dim=0) self.assertEqual(len(result), 2) for i in range(2): self.assertTrue(torch.equal(result[i]["x"], expected_x_chunks[i])) self.assertIsNone(result[i]["y"]) def test_with_scalar(self): x = torch.arange(12).reshape(6, 2) tensor_dict = {"x": x, "y": torch.tensor(1)} result = split_tensor_dict(tensor_dict, 2) expected_x_chunks = torch.chunk(x, 2, dim=0) self.assertEqual(len(result), 2) for i in range(2): self.assertTrue(torch.equal(result[i]["x"], expected_x_chunks[i])) self.assertTrue(torch.equal(result[i]["y"], torch.tensor(1))) class ShuffleSequenceDictTester(TrlTestCase): def test_shuffle_preserves_shape(self): x = torch.arange(6).reshape(3, 2) y = torch.arange(3).reshape(3, 1) tensor_dict = {"x": x.clone(), "y": y.clone()} shuffled = shuffle_sequence_dict(tensor_dict) self.assertEqual(shuffled["x"].shape, x.shape) self.assertEqual(shuffled["y"].shape, y.shape) def test_shuffle_consistent_across_tensors(self): # Use known patterns to check alignment x = torch.tensor([[10, 11], [20, 21], [30, 31]]) y = torch.tensor([[1], [2], [3]]) tensor_dict = {"x": x.clone(), "y": y.clone()} shuffled = shuffle_sequence_dict(tensor_dict) # Build a reverse map from shuffled x rows to y values for i in range(3): x_row = shuffled["x"][i] y_val = shuffled["y"][i].item() if torch.equal(x_row, torch.tensor([10, 11])): self.assertEqual(y_val, 1) elif torch.equal(x_row, torch.tensor([20, 21])): self.assertEqual(y_val, 2) elif torch.equal(x_row, torch.tensor([30, 31])): self.assertEqual(y_val, 3) else: 
self.fail("Unexpected x row in shuffled output.") def test_none_tensor_remains_none(self): x = torch.arange(6).reshape(3, 2) tensor_dict = {"x": x.clone(), "y": None} shuffled = shuffle_sequence_dict(tensor_dict) self.assertIsNone(shuffled["y"]) self.assertEqual(shuffled["x"].shape, x.shape) def test_shuffle_with_list(self): x = torch.tensor([[10, 11], [20, 21], [30, 31]]) y = ["a", "b", "c"] sequence_dict = {"x": x.clone(), "y": y} shuffled = shuffle_sequence_dict(sequence_dict) # Check that the list y is shuffled in the same order as x for i in range(3): x_row = shuffled["x"][i] y_val = shuffled["y"][i] if torch.equal(x_row, torch.tensor([10, 11])): self.assertEqual(y_val, "a") elif torch.equal(x_row, torch.tensor([20, 21])): self.assertEqual(y_val, "b") elif torch.equal(x_row, torch.tensor([30, 31])): self.assertEqual(y_val, "c") else: self.fail("Unexpected x row in shuffled output.") class RepeatRandomSamplerTester(TrlTestCase): def test_sampler(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] sampler = RepeatSampler(dataset, mini_repeat_count=2) # Should output something like [4, 4, 3, 3, 0, 0, 1, 1, 2, 2, 6, 6, 5, 5] sampled = list(sampler) # Check that the length is doubled assert len(sampled) == 2 * len(dataset) # Check that all indexes are present assert set(sampled) == set(range(len(dataset))) # Check that each element is repeated twice assert all(sampled[i] == sampled[i + 1] for i in range(0, len(sampled), 2)) def test_sampler_no_shuffle(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] sampler = RepeatSampler(dataset, mini_repeat_count=2, shuffle=False) sampled = list(sampler) expected = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6] self.assertEqual(sampled, expected) def test_sampler_no_repeat(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] sampler = RepeatSampler(dataset, mini_repeat_count=1) # Should output something like [4, 3, 0, 1, 2, 6, 5] sampled = list(sampler) # Check that the length is the same assert len(sampled) == len(dataset) # Check that all indexes are present assert set(sampled) == set(range(len(dataset))) def test_sampler_with_batch_size(self): dataset = ["a", "b", "c", "d", "e", "f", "g", "h"] sampler = RepeatSampler(dataset, mini_repeat_count=1, batch_size=2, repeat_count=2) # Should output something like [4, 3, 4, 3, 0, 1, 0, 1, 2, 6, 2, 6, 5, 7, 5, 7] sampled = list(sampler) # Check that the length is doubled assert len(sampled) == 2 * len(dataset) # Check that all indexes are present assert set(sampled) == set(range(len(dataset))) # Check that each element is repeated as expected assert all(sampled[i : i + 1] == sampled[i + 2 : i + 3] for i in range(0, len(sampled), 4)) def test_sampler_with_batch_size_and_drop(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] sampler = RepeatSampler(dataset, mini_repeat_count=1, batch_size=2, repeat_count=2) # Should output something like [4, 3, 4, 3, 0, 1, 0, 1, 2, 6, 2, 6] sampled = list(sampler) # Check that the length is doubled assert len(sampled) == 2 * ( len(dataset) - 1 ) # one element is dropped, because it's not enough to form a batch assert len(sampler) == len(sampled) # the length should be the same as the sampled length # Check that the sampled indexes are a subset of the dataset indexes assert set(sampled).issubset(set(range(len(dataset)))) # Check that each element is repeated as expected assert all(sampled[i : i + 1] == sampled[i + 2 : i + 3] for i in range(0, len(sampled), 4)) def test_sampler_with_mini_repeat_count_and_batch_size_1(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] 
sampler = RepeatSampler(dataset, mini_repeat_count=2, batch_size=3, repeat_count=2) # Should output something like [4, 4, 3, 3, 0, 0, 4, 4, 3, 3, 0, 0, # 1, 1, 2, 2, 6, 6, 1, 1, 2, 2, 6, 6] sampled = list(sampler) # Check that the length is quadrupled assert len(sampled) == 4 * (len(dataset) - 1) # 1 element is dropped, because it's not enough to form a batch assert len(sampler) == len(sampled) # the length should be the same as the sampled length # Check that the sampled indexes are a subset of the dataset indexes assert set(sampled).issubset(set(range(len(dataset)))) # Check that each element is repeated as expected assert all(sampled[i] == sampled[i + 1] for i in range(0, len(sampled), 2)) # Check that the batch is repeated as expected assert sampled[0:6] == sampled[6:12] assert sampled[12:18] == sampled[18:24] def test_sampler_with_mini_repeat_count_and_batch_size_2(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] sampler = RepeatSampler(dataset, mini_repeat_count=3, batch_size=2, repeat_count=2) # Should output something like [4, 4, 4, 3, 3, 3, 4, 4, 4, 3, 3, 3, # 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, # 2, 2, 2, 6, 6, 6, 2, 2, 2, 6, 6, 6] sampled = list(sampler) # Check that the length is sextupled assert len(sampled) == 6 * (len(dataset) - 1) # 1 element is dropped, because it's not enough to form a batch assert len(sampler) == len(sampled) # the length should be the same as the sampled length # Check that the sampled indexes are a subset of the dataset indexes assert set(sampled).issubset(set(range(len(dataset)))) # Check that each element is repeated as expected assert all(sampled[i] == sampled[i + 1] == sampled[i + 2] for i in range(0, len(sampled), 3)) # Check that the batch is repeated as expected assert sampled[0:6] == sampled[6:12] assert sampled[12:18] == sampled[18:24] assert sampled[24:30] == sampled[30:36] def test_sampler_with_mini_repeat_count_and_batch_size_3(self): dataset = ["a", "b", "c", "d", "e", "f", "g"] sampler = RepeatSampler(dataset, mini_repeat_count=2, batch_size=2, repeat_count=3) # Should output something like [4, 4, 3, 3, 4, 4, 3, 3, 4, 4, 3, 3, # 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, # 2, 2, 6, 6, 2, 2, 6, 6, 2, 2, 6, 6] sampled = list(sampler) # Check that the length is sextupled assert len(sampled) == 6 * (len(dataset) - 1) # 1 element is dropped, because it's not enough to form a batch # Check that the sampled indexes are a subset of the dataset indexes assert set(sampled).issubset(set(range(len(dataset)))) # Check that each element is repeated as expected assert all(sampled[i] == sampled[i + 1] for i in range(0, len(sampled), 2)) # Check that the batch is repeated as expected assert sampled[0:4] == sampled[4:8] == sampled[8:12] assert sampled[12:16] == sampled[16:20] == sampled[20:24] assert sampled[24:28] == sampled[28:32] == sampled[32:36] class TruncateWithProtectedTokensTester(TrlTestCase): def test_basic_example(self): """Test the basic example from the problem description.""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [2, 3, 6] target_length = 3 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) expected_ids = torch.tensor([[2, 3, 5], [6, 9, 10]]) expected_mask = torch.ones_like(expected_ids) self.assertTrue(torch.equal(new_ids, expected_ids)) self.assertTrue(torch.equal(new_mask, expected_mask)) def test_no_truncation_needed(self): """Test when target length equals current length.""" prompt_ids = 
torch.tensor([[1, 2, 3]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [2] target_length = 3 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) self.assertTrue(torch.equal(new_ids, prompt_ids)) self.assertTrue(torch.equal(new_mask, prompt_mask)) def test_no_protected_tokens(self): """Test truncation with no protected tokens (normal right truncation).""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [] target_length = 3 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) expected_ids = torch.tensor([[3, 4, 5]]) # Last 3 tokens self.assertTrue(torch.equal(new_ids, expected_ids)) def test_all_tokens_protected(self): """Test when all remaining tokens are protected.""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [3, 4, 5] target_length = 3 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) expected_ids = torch.tensor([[3, 4, 5]]) self.assertTrue(torch.equal(new_ids, expected_ids)) def test_too_many_protected_tokens(self): """Test error when too many protected tokens for target length.""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [1, 2, 3, 4] target_length = 3 with self.assertRaises(ValueError): truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) def test_single_batch_single_token(self): """Test edge case with single batch and single token.""" prompt_ids = torch.tensor([[5]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [5] target_length = 1 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) self.assertTrue(torch.equal(new_ids, prompt_ids)) def test_mask_preservation(self): """Test that mask values are correctly preserved.""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5]]) prompt_mask = torch.tensor([[1, 0, 1, 0, 1]]) # Mixed mask values protected_tokens = [2, 4] target_length = 3 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) expected_ids = torch.tensor([[2, 4, 5]]) expected_mask = torch.tensor([[0, 0, 1]]) # Corresponding mask values self.assertTrue(torch.equal(new_ids, expected_ids)) self.assertTrue(torch.equal(new_mask, expected_mask)) def test_multiple_batches_different_protected(self): """Test multiple batches where protected tokens appear differently.""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5], [2, 6, 7, 8, 9], [10, 11, 12, 2, 13]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [2] target_length = 3 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) expected_ids = torch.tensor( [ [2, 4, 5], # 2 is protected, keep last 2 non-protected (4,5) [2, 8, 9], # 2 is protected, keep last 2 non-protected (8,9) [12, 2, 13], # 2 is protected, keep last 2 non-protected (12,13) ] ) self.assertTrue(torch.equal(new_ids, expected_ids)) def test_order_preservation(self): """Test that relative order is preserved.""" prompt_ids = torch.tensor([[10, 2, 20, 3, 30, 40]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [2, 3] target_length = 4 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) # Should keep protected tokens 2,3 and 
last 2 non-protected tokens 30,40 # Order should be: 2, 3, 30, 40 (maintaining original relative positions) expected_ids = torch.tensor([[2, 3, 30, 40]]) self.assertTrue(torch.equal(new_ids, expected_ids)) def test_empty_protected_tokens_list(self): """Test with empty protected tokens list.""" prompt_ids = torch.tensor([[1, 2, 3, 4, 5]]) prompt_mask = torch.ones_like(prompt_ids) protected_tokens = [] target_length = 2 new_ids, new_mask = truncate_with_protected_tokens(prompt_ids, prompt_mask, target_length, protected_tokens) expected_ids = torch.tensor([[4, 5]]) # Last 2 tokens self.assertTrue(torch.equal(new_ids, expected_ids)) class GetHighEntropyMaskTester(TrlTestCase): def get_high_entropy_mask(self, entropies, mask, threshold): """Helper method to test the get_high_entropy_mask functionality.""" # Create a mock trainer with minimal setup from unittest.mock import Mock # Create a mock accelerator mock_accelerator = Mock() mock_accelerator.num_processes = 1 # Single process for testing # Create a minimal trainer instance just to access the method trainer = Mock(spec=GRPOTrainer) trainer.accelerator = mock_accelerator trainer.accelerator.gather = lambda x: x # Mock gather to return the input directly # Call the actual method from GRPOTrainer return GRPOTrainer.get_high_entropy_mask(trainer, entropies, mask, threshold) def test_compute_entropy_mask_0(self): # We have a total of 12 tokens out of which 10 are non-pad. # for a top_entropy_quantile of 0.8, we expect the top 20% i.e 2 non-pad tokens corresponding to # the highest entropy to be unmasked. # In our example these will be the tokens corresponding to the entropies 0.9 and 1.0 since 1.1 and 1.2 are pad # tokens they are excluded from the entropy threshold calculation. entropies = torch.tensor([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]]) mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0]]) entropy_mask = self.get_high_entropy_mask(entropies, mask, threshold=0.8) expected_mask = torch.tensor([[0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0]], dtype=torch.bool) torch.testing.assert_close(entropy_mask, expected_mask) def test_compute_entropy_mask_1(self): # Another example with a different set of entropies and a different mask. entropies = torch.tensor([[0.1, 0.2, 0.3, 1.4, 0.5, 0.14], [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]]) mask = torch.tensor([[1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 0, 0]]) entropy_mask = self.get_high_entropy_mask(entropies, mask, threshold=0.8) expected_mask = torch.tensor([[0, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0]], dtype=torch.bool) torch.testing.assert_close(entropy_mask, expected_mask) def test_compute_entropy_mask_lower_threshold(self): # For a threshold of 0.5 we expect the top half of the non-pad tokens to be unmasked. entropies = torch.tensor([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]]) mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0]]) entropy_mask = self.get_high_entropy_mask(entropies, mask, threshold=0.5) expected_mask = torch.tensor([[0, 0, 0, 0, 0, 1], [1, 1, 1, 1, 0, 0]], dtype=torch.bool) torch.testing.assert_close(entropy_mask, expected_mask) def test_compute_entropy_threshold_0(self): # If the threshold is 0.0 then we expect the mask to be all ones for non-pad tokens. 
entropies = torch.tensor([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]]) mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0]]) entropy_mask = self.get_high_entropy_mask(entropies, mask, threshold=0.0) expected_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0]], dtype=torch.bool) torch.testing.assert_close(entropy_mask, expected_mask) def test_compute_entropy_threshold_1(self): # If the threshold is 1.0 then we expect the mask to be all zeros BUT ONE VALUE. entropies = torch.tensor([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]]) mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0]]) entropy_mask = self.get_high_entropy_mask(entropies, mask, threshold=1.0) expected_mask = torch.tensor([[0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]], dtype=torch.bool) torch.testing.assert_close(entropy_mask, expected_mask) def test_compute_entropy_all_masked(self): # If there are no non-pad tokens we expect the mask to be all zeros. entropies = torch.tensor([[0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]]) mask = torch.tensor([[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]) entropy_mask = self.get_high_entropy_mask(entropies, mask, threshold=0.5) expected_mask = torch.tensor([[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]], dtype=torch.bool) torch.testing.assert_close(entropy_mask, expected_mask) class SplitPixelValuesByGridTester(TrlTestCase): def test_split_correctly_0(self): batch = { "image_grid_thw": torch.tensor([[1, 2, 2], [1, 1, 4]]), # Products: [4, 4] "pixel_values": torch.arange(8 * 3).reshape(8, 3), # Shape: [8, 3] } result = split_pixel_values_by_grid(batch) self.assertIsInstance(result["pixel_values"], list) self.assertEqual(len(result["pixel_values"]), 2) self.assertTrue(torch.equal(result["pixel_values"][0], batch["pixel_values"][:4])) self.assertTrue(torch.equal(result["pixel_values"][1], batch["pixel_values"][4:])) def test_split_correctly_1(self): batch = { "image_grid_thw": torch.tensor([[1, 2, 2], [1, 2, 4]]), # Products: [4, 8] "pixel_values": torch.arange(12 * 3).reshape(12, 3), # Shape: [12, 3] } result = split_pixel_values_by_grid(batch) self.assertIsInstance(result["pixel_values"], list) self.assertEqual(len(result["pixel_values"]), 2) self.assertTrue(torch.equal(result["pixel_values"][0], batch["pixel_values"][:4])) self.assertTrue(torch.equal(result["pixel_values"][1], batch["pixel_values"][4:12])) def test_missing_keys(self): batch = {"pixel_values": torch.tensor([1.0])} result = split_pixel_values_by_grid(batch) self.assertEqual(result, batch) def test_mismatched_length(self): batch = { "image_grid_thw": torch.tensor([[2, 1, 1], [2, 1, 1]]), # Total = 4 "pixel_values": torch.randn(3, 5), # Only 3 rows } with self.assertRaises(ValueError): split_pixel_values_by_grid(batch) class UnsplitPixelValuesByGridTester(TrlTestCase): def test_unsplit_correctly(self): split = [torch.randn(4, 5), torch.randn(2, 5)] merged = torch.cat(split, dim=0) batch = {"pixel_values": split, "other_key": torch.tensor([1])} result = unsplit_pixel_values_by_grid(batch) self.assertIsInstance(result["pixel_values"], torch.Tensor) self.assertTrue(torch.allclose(result["pixel_values"], merged)) self.assertIn("other_key", result) def test_no_op_if_not_list(self): original = torch.randn(5, 3) batch = {"pixel_values": original} result = unsplit_pixel_values_by_grid(batch) self.assertTrue(torch.equal(result["pixel_values"], original)) class GRPOTrainerTester(TrlTestCase): def test_init_minimal(self): # Test that GRPOTrainer can be instantiated with only model, 
reward_model and train_dataset dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", train_dataset=dataset, ) @parameterized.expand([("standard_prompt_only",), ("conversational_prompt_only",)]) def test_training(self, config_name): dataset = load_dataset("trl-internal-testing/zen", config_name, split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @parameterized.expand([("bnpo",), ("dr_grpo",), ("dapo",)]) def test_training_loss_types(self, loss_type): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=32, # reduce the completion length to reduce memory usage gradient_accumulation_steps=2, # set to 2 to test than DAPO can operate with accumulated batch loss_type=loss_type, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_eval(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only") training_args = GRPOConfig( output_dir=self.tmp_dir, per_device_train_batch_size=3, # reduce the batch size to reduce memory usage per_device_eval_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage eval_strategy="steps", eval_steps=2, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"], ) trainer.train() def 
test_training_multiple_iterations(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage num_iterations=2, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_peft def test_training_peft(self): model = AutoModelForCausalLM.from_pretrained("trl-internal-testing/tiny-Qwen2ForCausalLM-2.5") base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()] dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model=model, reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, peft_config=LoraConfig(), ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the peft params have changed and the base model params have not changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) if n in base_param_names: # We expect the base model params to be the same self.assertTrue(torch.allclose(param, new_param), f"Parameter {n} has changed.") elif "base_layer" not in n: # We expect the peft params to be different (except for the base layer) self.assertFalse(torch.allclose(param, new_param), f"Parameter {n} has not changed.") @require_peft def test_training_peft_with_gradient_checkpointing(self): """Test that training works with PEFT and gradient checkpointing enabled.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") model = AutoModelForCausalLM.from_pretrained( "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", torch_dtype=torch.float32, # Use float32 for testing to avoid precision issues ) lora_config = LoraConfig( r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none" ) training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, per_device_train_batch_size=3, num_generations=3, max_completion_length=8, gradient_checkpointing=True, # Enable gradient checkpointing report_to="none", ) trainer = GRPOTrainer( model=model, 
reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, peft_config=lora_config, ) # Verify gradient checkpointing is enabled self.assertIsInstance(trainer.model, PeftModel) # Store initial parameters to check which ones change previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that only LoRA parameters have changed, base model parameters remain unchanged for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) if "lora" in n.lower(): # LoRA parameters should change self.assertFalse(torch.equal(param, new_param), f"LoRA parameter {n} has not changed.") else: # Base model parameters should not change self.assertTrue(torch.equal(param, new_param), f"Base parameter {n} has changed.") def test_training_different_reward_model(self): # Use a reward model different from the model: different chat template, tokenization, etc. dataset = load_dataset("trl-internal-testing/zen", "conversational_prompt_only", split="train") reward_model_id = "trl-internal-testing/tiny-LlamaForSequenceClassification-3.2" reward_model = AutoModelForSequenceClassification.from_pretrained(reward_model_id) reward_tokenizer = AutoTokenizer.from_pretrained(reward_model_id) # By default, the trainer uses the eos token as the padding token. However, for Llama models, the eos token # appears in the chat template. Using it as a pad token disrupts the reward calculation, as the calculation # considers the score of the last token before the first pad token. To ensure correct reward calculations, # we use a separate pad token instead. reward_tokenizer.pad_token = "<|finetune_right_pad_id|>" training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_model, args=training_args, train_dataset=dataset, reward_processing_classes=reward_tokenizer, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_reward_func_standard(self): # Test if trainer can handle reward function with standard format dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") def reward_func(completions, **kwargs): """Reward function that rewards longer completions.""" return [float(len(completion)) for completion in completions] training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( 
model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_func, args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_reward_func_conversational(self): # Test if trainer can handle reward function with conversational format dataset = load_dataset("trl-internal-testing/zen", "conversational_prompt_only", split="train") def reward_func(completions, **kwargs): """Reward function that gives higher scores to longer completion content.""" completion_contents = [completion[0]["content"] for completion in completions] return [float(len(content)) for content in completion_contents] training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_func, args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_multiple_reward_funcs(self): # Test that GRPOTrainer can be instantiated with multiple reward functions dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") def reward_func1(completions, **kwargs): """Reward function that rewards longer completions.""" return [float(len(completion)) for completion in completions] def reward_func2(completions, **kwargs): """Reward function that rewards completions with more unique letters.""" return [float(len(set(completion))) for completion in completions] training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=[reward_func1, reward_func2], args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_multiple_reward_funcs_with_None_output(self): """Test that a valid math reward function is processed correctly while the code reward 
function returns None.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") def applicable_reward_func(completions, **kwargs): """A reward function that rewards longer completions.""" return [float(len(completion)) for completion in completions] def non_applicable_reward_func(completions, **kwargs): """A reward function that returns None for all inputs, as it is not applicable to this sample.""" return [None] * len(completions) training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, per_device_train_batch_size=3, num_generations=3, max_completion_length=8, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=[ applicable_reward_func, non_applicable_reward_func, ], # One applicable, one non applicable args=training_args, train_dataset=dataset, ) previous_trainable_params = { n: param.clone() for n, param in trainer.model.named_parameters() if param.requires_grad } trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_multiple_reward_funcs_with_weights(self): """Test that GRPOTrainer can handle multiple reward functions with weights.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") def reward_func1(completions, **kwargs): """Reward function that rewards longer completions.""" return [float(len(completion)) for completion in completions] def reward_func2(completions, **kwargs): """Reward function that rewards completions with more unique letters.""" return [float(len(set(completion))) for completion in completions] training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", reward_weights=[0.7, 0.3], # weight of reward_func1 and reward_func2 respectively ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=[reward_func1, reward_func2], args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() # Check that training logs contain both reward metrics self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) self.assertIn("rewards/reward_func1/mean", trainer.state.log_history[-1]) self.assertIn("rewards/reward_func1/std", trainer.state.log_history[-1]) self.assertIn("rewards/reward_func2/mean", trainer.state.log_history[-1]) self.assertIn("rewards/reward_func2/std", trainer.state.log_history[-1]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_multiple_mixed_reward_funcs(self): # Test if the trainer can handle a mix of reward functions and reward models dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") def reward_func(completions, **kwargs): """Reward function that rewards longer completions.""" return 
[float(len(completion)) for completion in completions] training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=[reward_func, "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5"], args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_reward_func_additional_column(self): # Test if trainer can handle reward function that rely on additional columns in the dataset dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") # Add a column to the dataset (dummy example, the column could be anything) some_values = list(range(len(dataset))) dataset = dataset.add_column("some_values", some_values) def reward_func(completions, some_values, **kwargs): """Reward function that rewards completions with lengths closer to the values in some_values.""" return [float(abs(len(completion) - value)) for completion, value in zip(completions, some_values)] training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_func, args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_sync_ref_model(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage sync_ref_model=True, ref_model_sync_steps=2, # reduce sync steps to ensure a sync happens report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() 
self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_beta_non_zero(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, beta=0.1, # set beta to non-zero value to test the case where the reference model is used learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_entropy_filter(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, beta=0.1, # set beta to non-zero value to test the case where the reference model is used learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", top_entropy_quantile=0.2, ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @unittest.skip("We should add a mock for the vLLM server.") @require_peft @require_vllm def test_training_vllm_and_peft(self): """Test that training works with vLLM for generation.""" model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct") # tiny model is too small for vLLM base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()] dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", use_vllm=True, ) 
lora_config = LoraConfig( target_modules="all-linear", # test with non-default modules as it adds extra keys in state_dict that we need to handle modules_to_save=["embed_tokens", "lm_head"], ) trainer = GRPOTrainer( model=model, reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, peft_config=lora_config, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the peft params have changed and the base model params have not changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) if n in base_param_names: # We expect the base model params to be the same self.assertTrue(torch.allclose(param, new_param), f"Parameter {n} has changed.") elif "base_layer" not in n and "original_module" not in n: # We expect the peft params to be different (except for the base layer) self.assertFalse(torch.allclose(param, new_param), f"Parameter {n} has not changed.") @require_vllm @unittest.skip("We should add a mock for the vLLM server.") def test_training_vllm_guided_decoding(self): """Test that training works with vLLM for generation with guided decoding.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", use_vllm=True, vllm_guided_decoding_regex=r"<reasoning>\n.*\n</reasoning>\n<answer>\n.*\n</answer>", ) trainer = GRPOTrainer( model="Qwen/Qwen2.5-0.5B-Instruct", # tiny model is too small for vLLM reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_additional_generation_kwargs(self): """Test that training works with additional generation kwargs.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", top_p=0.9, top_k=10, min_p=0.01, repetition_penalty=1.1, ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, 
param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_vllm @unittest.skip("We should add a mock for the vLLM server.") def test_training_vllm_with_additional_generation_kwargs(self): """Test that training works with vLLM and additional generation kwargs.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", use_vllm=True, top_p=0.9, top_k=10, min_p=0.01, repetition_penalty=1.1, ) trainer = GRPOTrainer( model="Qwen/Qwen2.5-0.5B-Instruct", # tiny model is too small for vLLM reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @parameterized.expand([(False,), ("group",), ("batch",), (True,), ("none",)]) def test_training_scale_rewards(self, scale_rewards): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage scale_rewards=scale_rewards, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @patch("transformers.generation.utils.GenerationMixin.generate") def test_training_with_mask_truncated_completions(self, mock_generate): """Test that training works with mask_truncated_completions=True parameter.""" # We mock the generate method because the model's random weights make it extremely unlikely to produce a # sequence containing the EOS token within the allowed max_completion_length. As a result, all tokens are # masked in the loss, the model doesn't update, and the final check (which verifies the update) fails. 
def fake_generate(input_ids, **kwargs): # pad_token_id = 151643; eos_token_id = 151645 completions_ids = torch.tensor( [ [1, 2, 3, 4, 5, 6, 7, 8], # this one is truncated [9, 10, 11, 151645, 151643, 151643, 151643, 151643], # this one contains eos [12, 13, 14, 15, 16, 17, 18, 151645], # particular case, eos is generated just within the limit ], device=input_ids.device, ) return torch.cat([input_ids, completions_ids], dim=1) mock_generate.side_effect = fake_generate dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage mask_truncated_completions=True, # Enable masking of truncated completions report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_mask_truncated_completions_all_masked(self): """ Test that when all generated completions are truncated (i.e., none contain an EOS token), and mask_truncated_completions=True, the model receives no effective learning signal and therefore does not update its parameters. Here, we don't mock the generate method, but instead rely on the fact that the probability of the model generating the EOS token is extremely low, so all generated completions are truncated. 
""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage mask_truncated_completions=True, # Enable masking of truncated completions report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertTrue(torch.equal(param, new_param), f"Parameter {n} has changed.") def test_warning_raised_all_rewards_none(self): """Test that a proper warning is raised when all rewards are None.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") def always_none_reward_func(completions, **kwargs): """Reward function that always returns None.""" return [None] * len(completions) training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=always_none_reward_func, args=training_args, train_dataset=dataset, ) with self.assertLogs("trl.trainer.grpo_trainer", level="WARNING") as cm: trainer.train() expected_warning = "All reward functions returned None for the following kwargs:" self.assertIn(expected_warning, cm.output[0]) def test_training_num_generations_larger_than_batch_size(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage num_generations=6, # the number of generations is larger than the batch size, but gradient_accumulation_steps=2, # gradient accumulation should allow that report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_delta_clipping(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", 
split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage delta=2.0, # set delta to a non-None value report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_multiple_dataloader_workers(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage dataloader_num_workers=2, # use multiple dataloader workers report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_generation_kwargs(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage generation_kwargs={"do_sample": True, "top_k": 50, "length_penalty": -0.1}, # Add some gen kwargs report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_with_reward_func_accessing_trainer_state(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", 
split="train") def reward_func(completions, **kwargs): trainer_state = kwargs.get("trainer_state") assert trainer_state is not None # transformers.TrainerState instance should have a `global_step` property. assert hasattr(trainer_state, "global_step") return [float(len(set(completion))) for completion in completions] training_args = GRPOConfig( output_dir=self.tmp_dir, per_device_train_batch_size=2, num_generations=2, max_completion_length=8, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_func, args=training_args, train_dataset=dataset, ) trainer.train() def test_prepare_input_called_with_correct_data(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test max_completion_length=8, # reduce the completion length to reduce memory usage gradient_accumulation_steps=3, # can be anything in this test # steps_per_generation*per_device_train_batch_size=24 is divisible by num_generations=4 steps_per_generation=4, num_generations=4, per_device_train_batch_size=6, # reduce the batch size to reduce memory usage num_iterations=2, shuffle_dataset=False, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) # steps_per_generation=4, per_device_train_batch_size=6 and num_generations=4, so we expect a # generation batch of 24 samples (steps_per_generation * per_device_train_batch_size), containing 6 # different prompts (steps_per_generation * per_device_train_batch_size // num_generations), each repeated # 4 times (num_generations). 
expected_first_generation_batch = ( [{"prompt": "Beautiful is better than"}] * 4 + [{"prompt": "Explicit is"}] * 4 + [{"prompt": "Simple is better"}] * 4 + [{"prompt": "Complex"}] * 4 + [{"prompt": "Flat is better than"}] * 4 + [{"prompt": "Sparse is better"}] * 4 ) expected_second_generation_batch = ( [{"prompt": "Readability"}] * 4 + [{"prompt": "Special cases aren't special"}] * 4 + [{"prompt": "Although practicality beats"}] * 4 + [{"prompt": "Errors should never"}] * 4 + [{"prompt": "Unless explicitly"}] * 4 + [{"prompt": "In the face of ambiguity, refuse"}] * 4 ) with patch.object(GRPOTrainer, "training_step", wraps=trainer.training_step) as mock_prepare: trainer.train() # 3 epochs * 2 iterations * 2 generation batches to cover the dataset * 4 steps_per_generation self.assertEqual(mock_prepare.call_count, 48) for i in range(0, 8): # Generation batch repeated 8 times (steps_per_generation*num_iterations) assert mock_prepare.call_args_list[i].args[1] == expected_first_generation_batch for i in range(8, 16): assert mock_prepare.call_args_list[i].args[1] == expected_second_generation_batch @parameterized.expand( [ ("trl-internal-testing/tiny-Gemma3ForConditionalGeneration",), ("trl-internal-testing/tiny-LlavaNextForConditionalGeneration",), ("trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration",), ("trl-internal-testing/tiny-Qwen2VLForConditionalGeneration",), # ("trl-internal-testing/tiny-SmolVLMForConditionalGeneration",), seems not to support bf16 properly ] ) @require_vision def test_training_vlm(self, model_id): dataset = load_dataset("trl-internal-testing/zen-image", "conversational_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage max_prompt_length=None, # disable prompt truncation, because usually, models don't support it report_to="none", ) trainer = GRPOTrainer( model=model_id, reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed # Because of the way the tiny models are initialized, the gradient does not flow properly through the # vision parts of the model, so we skip them. Ideally, we should fix the init of these models. 
params_to_skip = ( "model.vision_tower.", "model.multi_modal_projector.", "model.vision_model.", "model.connector.modality_projection.", "model.visual.", "model.image_newline", ) for n, param in previous_trainable_params.items(): if n.startswith(params_to_skip): continue new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_vision def test_training_vlm_beta_non_zero(self): dataset = load_dataset("trl-internal-testing/zen-image", "conversational_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, beta=0.1, # set beta to non-zero value to test the case where the reference model is used learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed # Because of the way the tiny models are initialized, the gradient does not flow properly through the # vision parts of the model, so we skip them. Ideally, we should fix the init of these models. params_to_skip = ("model.visual.",) for n, param in previous_trainable_params.items(): if n.startswith(params_to_skip): continue new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_vision @require_peft def test_training_vlm_peft(self): model = AutoModelForImageTextToText.from_pretrained( "trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration" ) base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()] dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage report_to="none", ) trainer = GRPOTrainer( model=model, reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, peft_config=LoraConfig(target_modules=["q_proj", "v_proj"]), ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the peft params have changed and the base model params have not changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) if n in base_param_names: # We expect the base model params to be the same self.assertTrue(torch.allclose(param, new_param), f"Parameter {n} has changed.") elif "base_layer" not in n: # We expect the peft params to be different (except for the base layer) self.assertFalse(torch.allclose(param, new_param), f"Parameter {n} has not changed.") 
@require_vision def test_training_vlm_and_importance_sampling(self): dataset = load_dataset("trl-internal-testing/zen-image", "conversational_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage steps_per_generation=2, # increase the steps per generation to trigger IS report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed # Because of the way the tiny models are initialized, the gradient does not flow properly through the # vision parts of the model, so we skip them. Ideally, we should fix the init of these models. params_to_skip = ("model.visual.",) for n, param in previous_trainable_params.items(): if n.startswith(params_to_skip): continue new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_vision @require_liger_kernel def test_training_vlm_and_liger(self): dataset = load_dataset("trl-internal-testing/zen-image", "conversational_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage use_liger_loss=True, # enable Liger loss loss_type="bnpo", # default dapo is not supported yet report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed # Because of the way the tiny models are initialized, the gradient does not flow properly through the # vision parts of the model, so we skip them. Ideally, we should fix the init of these models. 
params_to_skip = ("model.visual.",) for n, param in previous_trainable_params.items(): if n.startswith(params_to_skip): continue new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_vision def test_training_vlm_and_prompt_truncation(self): # If not handled properly, prompt truncation may truncate image token dataset = load_dataset("trl-internal-testing/zen-image", "conversational_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage max_prompt_length=18, report_to="none", ) trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed # Because of the way the tiny models are initialized, the gradient does not flow properly through the # vision parts of the model, so we skip them. Ideally, we should fix the init of these models. params_to_skip = ("model.visual.",) for n, param in previous_trainable_params.items(): if n.startswith(params_to_skip): continue new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") @require_vision @require_vllm @parameterized.expand( [ ("trl-internal-testing/tiny-Qwen2_5_VLForConditionalGeneration",), ("trl-internal-testing/tiny-Gemma3ForConditionalGeneration",), ] ) @unittest.skip("We should add a mock for the vLLM server.") def test_training_vlm_and_vllm(self, model_id) -> None: dataset = load_dataset("trl-internal-testing/zen-image", "conversational_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, per_device_train_batch_size=3, num_generations=3, max_completion_length=8, max_prompt_length=18, report_to="none", use_vllm=True, vllm_mode="server", ) trainer = GRPOTrainer( model=model_id, reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_training_sequence_importance_sampling(self): dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") training_args = GRPOConfig( output_dir=self.tmp_dir, learning_rate=0.1, # increase the learning rate to speed up the test per_device_train_batch_size=3, # reduce the batch size to reduce memory usage num_generations=3, # reduce the number of generations to reduce memory usage max_completion_length=8, # reduce the completion length to reduce memory usage num_iterations=2, # the importance sampling weights won't be 0 in this case importance_sampling_level="sequence", report_to="none", ) 
trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", args=training_args, train_dataset=dataset, ) previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()} trainer.train() self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"]) # Check that the params have changed for n, param in previous_trainable_params.items(): new_param = trainer.model.get_parameter(n) self.assertFalse(torch.equal(param, new_param), f"Parameter {n} has not changed.") def test_mismatched_reward_processing_classes_length(self): """Test that mismatched length between reward_funcs and reward_processing_classes raises error.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") # Use two reward models reward_models = [ "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", "trl-internal-testing/tiny-Qwen3ForSequenceClassification", ] # Create a single processing class (tokenizer) single_processing_class = AutoTokenizer.from_pretrained( "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5" ) training_args = GRPOConfig(output_dir=self.tmp_dir, report_to="none") with self.assertRaises(ValueError) as context: GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_models, reward_processing_classes=single_processing_class, # only one, but need two args=training_args, train_dataset=dataset, ) self.assertIn("must match", str(context.exception)) def test_correct_reward_processing_classes_list(self): """Test that correct list of reward_processing_classes works properly.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") # Use two reward models reward_models = [ "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5", "trl-internal-testing/tiny-Qwen3ForSequenceClassification", ] # Create processing classes processing_class1 = AutoTokenizer.from_pretrained( "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5" ) processing_class2 = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-Qwen3ForSequenceClassification") training_args = GRPOConfig(output_dir=self.tmp_dir, report_to="none") # Correct list length should work correct_processing_classes = [processing_class1, processing_class2] trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_models, reward_processing_classes=correct_processing_classes, args=training_args, train_dataset=dataset, ) self.assertEqual(len(trainer.reward_processing_classes), len(reward_models)) def test_single_reward_model_with_single_processing_class(self): """Test that single reward model with single processing class works.""" dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train") # Use single reward model reward_model = "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5" # Create a single processing class (tokenizer) single_processing_class = AutoTokenizer.from_pretrained( "trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5" ) training_args = GRPOConfig(output_dir=self.tmp_dir, report_to="none") trainer = GRPOTrainer( model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5", reward_funcs=reward_model, reward_processing_classes=single_processing_class, # single object for single reward model args=training_args, train_dataset=dataset, ) self.assertEqual(len(trainer.reward_processing_classes), 1) 
self.assertEqual(trainer.reward_processing_classes[0], single_processing_class) if __name__ == "__main__": unittest.main()
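

# Usage sketch (not a test): the cases above exercise GRPOTrainer end to end; the helper below distils
# that wiring into one place. It reuses the tiny models and dataset referenced throughout this file and
# the GRPOConfig/GRPOTrainer/load_dataset names already imported at the top of the module; the output
# directory is a hypothetical scratch path.
def _example_minimal_grpo_run(output_dir="grpo-scratch"):
    dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train")
    training_args = GRPOConfig(
        output_dir=output_dir,
        per_device_train_batch_size=3,  # small values, as in the tests above, to keep memory low
        num_generations=3,
        max_completion_length=8,
        report_to="none",
    )
    trainer = GRPOTrainer(
        model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
        reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5",
        args=training_args,
        train_dataset=dataset,
    )
    trainer.train()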
trl/tests/test_grpo_trainer.py/0
{ "file_path": "trl/tests/test_grpo_trainer.py", "repo_id": "trl", "token_count": 40987 }
629
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import gc
from collections.abc import Mapping
from contextlib import contextmanager
from typing import Optional, Union

import numpy as np
import torch
from accelerate import logging
from transformers import is_torch_npu_available, is_torch_xpu_available


logger = logging.get_logger(__name__)


def flatten_dict(nested: dict, sep: str = "/") -> dict:
    """Flatten a nested dictionary, concatenating nested keys with `sep`."""

    def recurse(nest: dict, prefix: str, into: dict) -> None:
        for k, v in nest.items():
            if sep in k:
                raise ValueError(f"separator '{sep}' not allowed to be in key '{k}'")
            if isinstance(v, Mapping):
                recurse(v, prefix + k + sep, into)
            else:
                into[prefix + k] = v

    flat = {}
    recurse(nested, "", flat)
    return flat


def masked_mean(values: torch.Tensor, mask: torch.Tensor, axis: Optional[int] = None) -> torch.Tensor:
    """Compute the mean of a tensor over the positions selected by `mask`."""
    if axis is not None:
        return (values * mask).sum(axis=axis) / mask.sum(axis=axis)
    else:
        return (values * mask).sum() / mask.sum()


def masked_var(values: torch.Tensor, mask: torch.Tensor, unbiased: bool = True) -> torch.Tensor:
    """Compute the variance of a tensor over the positions selected by `mask`."""
    mean = masked_mean(values, mask)
    centered_values = values - mean
    variance = masked_mean(centered_values**2, mask)
    if unbiased:
        mask_sum = mask.sum()
        if mask_sum == 0:
            raise ValueError(
                "The sum of the mask is zero, which can happen when `mini_batch_size=1`; "
                "try increasing the `mini_batch_size` or `gradient_accumulation_steps`."
            )
        # note that if mask_sum == 1, there is a division-by-zero issue;
        # to avoid it, use a larger `mini_batch_size`
        bessel_correction = mask_sum / (mask_sum - 1)
        variance = variance * bessel_correction
    return variance


def masked_whiten(values: torch.Tensor, mask: torch.Tensor, shift_mean: bool = True) -> torch.Tensor:
    """Whiten (zero-mean, unit-variance) a tensor using statistics computed over the masked positions."""
    mean, var = masked_mean(values, mask), masked_var(values, mask)
    whitened = (values - mean) * torch.rsqrt(var + 1e-8)
    if not shift_mean:
        whitened += mean
    return whitened


class LengthSampler:
    """
    Samples a length uniformly from `range(min_value, max_value)`.
    """

    def __init__(self, min_value: int, max_value: int):
        self.values = list(range(min_value, max_value))

    def __call__(self) -> int:
        return np.random.choice(self.values)


class PPODecorators:
    optimize_device_cache = False

    @classmethod
    @contextmanager
    def empty_device_cache(cls):
        yield
        if cls.optimize_device_cache:
            if is_torch_xpu_available():
                gc.collect()
                torch.xpu.empty_cache()
                gc.collect()
            elif is_torch_npu_available():
                gc.collect()
                torch.npu.empty_cache()
                gc.collect()
            elif torch.cuda.is_available():
                gc.collect()
                torch.cuda.empty_cache()
                gc.collect()


def randn_tensor(
    shape: Union[tuple, list],
    generator: Optional[Union[list[torch.Generator], torch.Generator]] = None,
    device: Optional[torch.device] = None,
    dtype: Optional[torch.dtype] = None,
    layout: Optional[torch.layout] = None,
) -> torch.Tensor:
    """A helper function to create random tensors on the desired
`device` with the desired `dtype`. When passing a list of generators, you can seed each batch size individually. If CPU generators are passed, the tensor is always created on the CPU. """ # device on which tensor is created defaults to device rand_device = device batch_size = shape[0] layout = layout or torch.strided device = device or torch.device("cpu") if generator is not None: gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type if gen_device_type != device.type and gen_device_type == "cpu": rand_device = "cpu" if device != "mps": logger.warning( f"The passed generator was created on 'cpu' even though a tensor on {device} was expected." f" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably" f" slightly speed up this function by passing a generator that was created on the {device} " "device.", ) elif gen_device_type != device.type and gen_device_type == "cuda": raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.") # make sure generator list of length 1 is treated like a non-list if isinstance(generator, list) and len(generator) == 1: generator = generator[0] if isinstance(generator, list): shape = (1,) + shape[1:] latents = [ torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout) for i in range(batch_size) ] latents = torch.cat(latents, dim=0).to(device) else: latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device) return latents
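

# Usage sketch (illustrative, added as a comment-led example): the `masked_*` helpers above are meant
# for per-token statistics over padded batches, where `mask` is 1 for real tokens and 0 for padding.
# The tensors below are made-up toy values.
if __name__ == "__main__":
    _values = torch.tensor([[1.0, 2.0, 3.0, 0.0], [4.0, 5.0, 0.0, 0.0]])
    _mask = torch.tensor([[1.0, 1.0, 1.0, 0.0], [1.0, 1.0, 0.0, 0.0]])

    print(masked_mean(_values, _mask))  # mean over the 5 unmasked entries -> 3.0
    print(masked_var(_values, _mask))  # unbiased variance over the same 5 entries
    # masked_whiten normalizes with masked statistics; padded positions still appear in the output
    # tensor, they are simply excluded from the mean/variance computation.
    print(masked_whiten(_values, _mask))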
trl/trl/core.py/0
{ "file_path": "trl/trl/core.py", "repo_id": "trl", "token_count": 2391 }
630
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import dataclasses import itertools import warnings from contextlib import contextmanager from copy import deepcopy from dataclasses import dataclass from typing import TYPE_CHECKING, Any, Literal, Optional, Union import torch import torch.nn as nn from accelerate.utils import is_peft_model from packaging import version from transformers import AddedToken, AutoTokenizer, PreTrainedModel, PreTrainedTokenizer, TrainingArguments from transformers.utils import is_peft_available from .modeling_value_head import AutoModelForCausalLMWithValueHead, AutoModelForSeq2SeqLMWithValueHead if is_peft_available(): import peft from peft import PeftConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training if TYPE_CHECKING: from accelerate import Accelerator from deepspeed.runtime.engine import DeepSpeedEngine from torch.nn import Module from torch.nn.parallel.distributed import DistributedDataParallel SUPPORTED_ARCHITECTURES = ( AutoModelForCausalLMWithValueHead, AutoModelForSeq2SeqLMWithValueHead, ) # TODO: Add Abstract Base Class if more formats are added @dataclass class ChatMlSpecialTokens: """Dataclass for special tokens used in ChatML, including system, user, assistant, bos, eos, and pad tokens.""" bos_token: str = "<|im_start|>" eos_token: str = "<|im_end|>" pad_token: str = "<|im_end|>" @property def system(self): return f"{self.bos_token}system" @property def user(self): return f"{self.bos_token}user" @property def assistant(self): return f"{self.bos_token}assistant" @property def chat_template(self): return ( "{% for message in messages %}" f"{{{{'{self.bos_token}' + message['role'] + '\n' + message['content'] + '{self.eos_token}' + '\n'}}}}" "{% endfor %}" "{% if add_generation_prompt %}" f"{{{{ '{self.assistant}\n' }}}}" "{% endif %}" ) FORMAT_MAPPING = {"chatml": ChatMlSpecialTokens} def setup_chat_format( model: PreTrainedModel, tokenizer: PreTrainedTokenizer, format: Optional[Literal["chatml"]] = "chatml", resize_to_multiple_of: Optional[int] = None, ) -> tuple[PreTrainedModel, PreTrainedTokenizer]: """ Setup chat format by adding special tokens to the tokenizer, setting the correct format, and extending the embedding layer of the model based on the new special tokens. <Tip warning="true"> This function is deprecated and will be removed in version 0.26.0. Please use [`clone_chat_template`] instead. </Tip> If the model already has a chat template, this will throw an error. If you want to overwrite it, please set `tokenizer.chat_template` to `None`. Args: model (`~transformers.PreTrainedModel`): The model to be modified. tokenizer (`~transformers.PreTrainedTokenizer`): The tokenizer to be modified. format (`Optional[Literal["chatml"]]`): The format to be set. Defaults to "chatml". resize_to_multiple_of (`int` or `None`): Number to resize the embedding layer to. Defaults to None. Returns: model (`~transformers.PreTrainedModel`): The modified model. 
tokenizer (`~transformers.PreTrainedTokenizer`): The modified tokenizer. """ warnings.warn( "The `setup_chat_format` function is deprecated and will be removed in version 0.26.0. Please use " "`clone_chat_template` instead.", DeprecationWarning, ) # check if model already had a chat template if tokenizer.chat_template is not None: raise ValueError( "Chat template is already added to the tokenizer. If you want to overwrite it, please set it to None" ) # check if format available and retrieve if format not in FORMAT_MAPPING: raise ValueError(f"Format {format} not available. Please use one of {FORMAT_MAPPING.keys()}") chat_format = FORMAT_MAPPING[format]() # set special tokens and them tokenizer.eos_token = chat_format.eos_token tokenizer.pad_token = chat_format.pad_token tokenizer.bos_token = chat_format.bos_token tokenizer.add_special_tokens({"additional_special_tokens": [chat_format.bos_token, chat_format.eos_token]}) # set chat format for tokenizer tokenizer.chat_template = chat_format.chat_template # resize embedding layer to a multiple of 64, https://x.com/karpathy/status/1621578354024677377 model.resize_token_embeddings( # After studying many tokenizers, we found that len(tokenizer.vocab) is the most reliable way to get the vocab # size. Avoid using tokenizer.vocab_size or tokenizer.vocab_size + len(tokenizer.added_tokens_encoder), # as handling of special and added tokens varies across tokenizers. new_num_tokens=len(tokenizer.vocab), pad_to_multiple_of=resize_to_multiple_of if resize_to_multiple_of is not None else None, ) # Update the model config to use the new eos & bos tokens if getattr(model, "config", None) is not None: model.config.pad_token_id = tokenizer.pad_token_id model.config.bos_token_id = tokenizer.bos_token_id model.config.eos_token_id = tokenizer.eos_token_id # Update the generation config to use the new eos & bos token if getattr(model, "generation_config", None) is not None: model.generation_config.bos_token_id = tokenizer.bos_token_id model.generation_config.eos_token_id = tokenizer.eos_token_id model.generation_config.pad_token_id = tokenizer.pad_token_id return model, tokenizer def clone_chat_template( model: PreTrainedModel, tokenizer: PreTrainedTokenizer, source_tokenizer_path: str, resize_to_multiple_of: Optional[int] = 64, ) -> tuple[PreTrainedModel, PreTrainedTokenizer, list[int]]: """ Clones a chat template from a source tokenizer to the target tokenizer and updates the model accordingly. This function: - Copies the chat template from a source tokenizer to the target tokenizer. - Adds any new tokens from the source tokenizer to the target tokenizer. - Sets and synchronizes the EOS token across the tokenizer and model. - Resizes the model's token embeddings to match the new vocabulary size, optionally rounding it up to a multiple of a specified value. In such cases, dummy tokens are added to the tokenizer to ensure the vocabulary size matches the embedding dimensions. Args: model (`PreTrainedModel`): Model to update. tokenizer (`PreTrainedTokenizer`): Tokenizer to update. source_tokenizer_path (`str`): Path or identifier of the pretrained tokenizer to clone from. resize_to_multiple_of (`int` or `None`, *optional*, defaults to `64`): The embedding layer will be resized to the new vocabulary size. If this is not `None`, it will round up the new vocabulary size to the nearest multiple of this value. Returns: model (`PreTrainedModel`): Updated model with resized token embeddings and EOS token configured. 
tokenizer (`~transformers.PreTrainedTokenizer`): Updated tokenizer with the chat template and special tokens applied. added_tokens (`list[int]`): List of tokens that were added to the tokenizer from the source tokenizer. Example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer from trl import clone_chat_template model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B") tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B") model, tokenizer, added_tokens = clone_chat_template(model, tokenizer, "Qwen/Qwen3-0.6B") ``` """ # Load the source tokenizer containing the desired chat template tokenizer_source = AutoTokenizer.from_pretrained(source_tokenizer_path) # Copy the chat template from the source tokenizer tokenizer.chat_template = tokenizer_source.get_chat_template() # Ensure all added tokens from the source are available in the target tokenizer added_tokens = [ token for token in tokenizer_source.added_tokens_decoder.values() if token.content not in tokenizer.vocab ] tokenizer.add_tokens(added_tokens) # Set the EOS token from the source tokenizer (important for generation) tokenizer.eos_token = tokenizer_source.eos_token model.config.eos_token_id = tokenizer.eos_token_id model.generation_config.eos_token_id = tokenizer.eos_token_id # Resize model embeddings to include any new tokens, optionally rounding up to a multiple model.resize_token_embeddings( # After studying many tokenizers, we found that len(tokenizer.vocab) is the most reliable way to get the vocab # size. Avoid using tokenizer.vocab_size or tokenizer.vocab_size + len(tokenizer.added_tokens_encoder), # as handling of special and added tokens varies across tokenizers. new_num_tokens=len(tokenizer.vocab), pad_to_multiple_of=resize_to_multiple_of if resize_to_multiple_of is not None else None, ) # After resizing, the embedding matrix size may exceed the vocabulary size. Add dummy tokens to the tokenizer to # ensure vocabulary size matches the embedding matrix dimensions. idx = 0 while model.vocab_size > len(tokenizer.vocab): dummy_token = AddedToken(f"<extra_id_{idx}>") is_added = tokenizer.add_tokens(dummy_token) idx += 1 if is_added == 1: added_tokens.append(dummy_token) # Verify that vocabulary size now matches embedding dimensions if len(tokenizer.vocab) != model.vocab_size: raise RuntimeError( f"Vocabulary size mismatch after resizing: tokenizer vocab size is {len(tokenizer.vocab)}, but model " f"embedding size is {model.vocab_size}. This indicates an internal error in the token alignment process." 
) added_tokens = [token.content for token in added_tokens] added_tokens = tokenizer.convert_tokens_to_ids(added_tokens) return model, tokenizer, added_tokens def remove_hooks(model: "DeepSpeedEngine") -> None: """Removes the optimizer hooks from a DeepSpeed ZeRO-3 model.""" if not hasattr(model, "optimizer"): # before the first training step, the model has no optimizer return if model.optimizer is not None and hasattr(model.optimizer, "parameter_offload"): optimizer_offload = model.optimizer.parameter_offload elif model.optimizer is not None: optimizer_offload = model.optimizer else: raise RuntimeError("The model optimizer is None, which is not yet supported.") for param in iter_params(optimizer_offload.module, recurse=True): param.ds_active_sub_modules.clear() for hook in optimizer_offload.forward_hooks: hook.remove() for hook in optimizer_offload.backward_hooks: hook.remove() optimizer_offload.forward_hooks = [] optimizer_offload.backward_hooks = [] def get_all_parameters(sub_module, recurse=False): return itertools.chain(sub_module.named_parameters(recurse=recurse), sub_module.ds_external_parameters()) def iter_params(module, recurse=False): return [param for _, param in get_all_parameters(module, recurse)] def add_hooks(model: "DeepSpeedEngine") -> None: """Adds the optimizer hooks from a DeepSpeed ZeRO-3 model.""" import deepspeed if not hasattr(model, "optimizer"): # before the first training step, the model has no optimizer return if model.optimizer is not None and hasattr(model.optimizer, "parameter_offload"): optimizer_offload = model.optimizer.parameter_offload elif model.optimizer is not None: optimizer_offload = model.optimizer else: raise RuntimeError("The model optimizer is None, which is not yet supported.") if version.parse(deepspeed.__version__) >= version.parse("0.16.4"): # Account for renaming in https://github.com/deepspeedai/DeepSpeed/pull/6847 optimizer_offload._register_deepspeed_module(optimizer_offload.module) else: optimizer_offload._register_hooks_recursively(optimizer_offload.module) @contextmanager def unwrap_model_for_generation( model: Union["DistributedDataParallel", "DeepSpeedEngine"], accelerator: "Accelerator", gather_deepspeed3_params: bool = True, ): """ Context manager to unwrap distributed or accelerated models for generation tasks. Args: model (`Union[DistributedDataParallel, DeepSpeedEngine]`): Model to be unwrapped. accelerator (`~accelerate.Accelerator`): Accelerator instance managing the model. gather_deepspeed3_params (`bool`, *optional*, defaults to `True`): Whether to gather weights for DeepSpeed ZeRO Stage 3 models. If `False`, skips parameter gathering, which can be more memory-efficient but may lead to slower generation times. Yields: Unwrapped model. 
Example: ```python with unwrap_model_for_generation(model, accelerator) as unwrapped_model: generated_outputs = unwrapped_model.generate(input_ids) ``` """ unwrapped_model = accelerator.unwrap_model(model) is_gradient_checkpointing = unwrapped_model.is_gradient_checkpointing if is_gradient_checkpointing: unwrapped_model.gradient_checkpointing_disable() if accelerator.state.deepspeed_plugin is not None and accelerator.state.deepspeed_plugin.zero_stage == 3: if not gather_deepspeed3_params: yield accelerator.unwrap_model(model) else: import deepspeed with deepspeed.zero.GatheredParameters(model.parameters()): remove_hooks(model) yield accelerator.unwrap_model(model) add_hooks(model) else: yield unwrapped_model if is_gradient_checkpointing: unwrapped_model.gradient_checkpointing_enable() def prepare_deepspeed(model: "Module", accelerator: "Accelerator"): """Prepares the model for DeepSpeed inference or evaluation by initializing it with the appropriate configuration. Adapted from accelerate: https://github.com/huggingface/accelerate/blob/739b135f8367becb67ffaada12fe76e3aa60fefd/src/accelerate/accelerator.py#L1473 """ import deepspeed # local import (instead of top-level) to avoid DS init interfering with other backends (like vllm): https://github.com/deepspeedai/DeepSpeed/issues/7252 deepspeed_plugin = accelerator.state.deepspeed_plugin config_kwargs = deepcopy(deepspeed_plugin.deepspeed_config) stage = config_kwargs["zero_optimization"]["stage"] if model is not None: hidden_size = ( max(model.config.hidden_sizes) if getattr(model.config, "hidden_sizes", None) else getattr(model.config, "hidden_size", None) ) if hidden_size is not None and stage == 3: # Note that `stage3_prefetch_bucket_size` can produce DeepSpeed messages like: `Invalidate trace cache # @ step 0: expected module 1, but got module 0` # This is expected and is not an error, see: https://github.com/microsoft/DeepSpeed/discussions/4081 config_kwargs.update( { "zero_optimization.reduce_bucket_size": hidden_size * hidden_size, "zero_optimization.stage3_param_persistence_threshold": 10 * hidden_size, "zero_optimization.stage3_prefetch_bucket_size": 0.9 * hidden_size * hidden_size, } ) # If ZeRO-3 is used, we shard both the active and reference model. 
# Otherwise, we assume the reference model fits in memory and is initialized on each device with ZeRO # disabled (stage 0) if stage != 3: config_kwargs["zero_optimization"]["stage"] = 0 model, *_ = deepspeed.initialize(model=model, config=config_kwargs) model.eval() return model def prepare_fsdp(model, accelerator): # Adapted from accelerate: https://github.com/huggingface/accelerate/blob/739b135f8367becb67ffaada12fe76e3aa60fefd/src/accelerate/accelerator.py#L1421 from torch.distributed.fsdp import FSDPModule from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP # Check if the model is already a FSDP model due to `Manual Wrapping` and if so, # don't wrap it again if not (isinstance(model, FSDP) or isinstance(model, FSDPModule)): accelerator.state.fsdp_plugin.set_auto_wrap_policy(model) fsdp_plugin = accelerator.state.fsdp_plugin kwargs = { "sharding_strategy": fsdp_plugin.sharding_strategy or fsdp_plugin.reshard_after_forward, "cpu_offload": fsdp_plugin.cpu_offload, "auto_wrap_policy": fsdp_plugin.auto_wrap_policy, "mixed_precision": fsdp_plugin.mixed_precision_policy, "sync_module_states": fsdp_plugin.sync_module_states, "backward_prefetch": fsdp_plugin.backward_prefetch, "forward_prefetch": fsdp_plugin.forward_prefetch, "use_orig_params": fsdp_plugin.use_orig_params, "param_init_fn": fsdp_plugin.param_init_fn, "ignored_modules": fsdp_plugin.ignored_modules, "limit_all_gathers": fsdp_plugin.limit_all_gathers, "device_id": accelerator.device, } model = FSDP(model, **kwargs) model.eval() return model class _ForwardRedirection: """Implements the `forward-redirection`. Taken from Pytorch-lightning: https://github.com/Lightning-AI/pytorch-lightning/blob/02311d03fb982560246eead7c08104481fac9579/src/lightning/pytorch/strategies/strategy.py#L602 A method call to a wrapped module gets rerouted through the wrapper's `forward` method instead. """ def __call__( self, wrapper_module: nn.Module, original_module: nn.Module, method: callable, *args: Any, **kwargs: Any ): """Reroutes a method call through the `wrapper_module`'s `forward` method. Args: wrapper_module: The module that has `original_module` wrapped. original_module: The module that was wrapped inside `wrapper_module`. method_name: The name of the method that should be called on the `original_module` after inputs get redirected through the `wrapper_module`'s `forward` method. *args: The positional arguments to the method `method_name`. They will get passed to a patched `forward` method instead. **kwargs: The keyword arguments to the method `method_name`. They will get passed to a patched `forward` method instead. """ original_forward = original_module.forward def wrapped_forward(*_args: Any, **_kwargs: Any) -> Any: # Unpatch ourselves immediately before calling the method `method_name` # because itself may want to call the real `forward` original_module.forward = original_forward # type: ignore[method-assign] # Call the actual method e.g. 
`.training_step(...)` out = method(*_args, **_kwargs) self.on_after_inner_forward(wrapper_module, original_module) return out # Patch the original_module's forward so we can redirect the arguments back to the real method original_module.forward = wrapped_forward # type: ignore[method-assign] wrapper_output = wrapper_module(*args, **kwargs) self.on_after_outer_forward(wrapper_module, original_module) return wrapper_output def on_after_inner_forward(self, wrapper_module: nn.Module, original_module: nn.Module) -> None: pass def on_after_outer_forward(self, wrapper_module: nn.Module, original_module: nn.Module) -> None: pass def enable_gradient_checkpointing( model: PreTrainedModel, gradient_checkpointing_kwargs: Optional[dict] ) -> PreTrainedModel: """Enables gradient checkpointing for the model.""" # Enable gradient checkpointing on the base model for PEFT if is_peft_model(model): model.base_model.gradient_checkpointing_enable() # Enable gradient checkpointing for non-PEFT models else: model.gradient_checkpointing_enable() gradient_checkpointing_kwargs = gradient_checkpointing_kwargs or {} use_reentrant = ( "use_reentrant" not in gradient_checkpointing_kwargs or gradient_checkpointing_kwargs["use_reentrant"] ) if use_reentrant: if hasattr(model, "enable_input_require_grads"): model.enable_input_require_grads() else: def make_inputs_require_grad(module, input, output): output.requires_grad_(True) model.get_input_embeddings().register_forward_hook(make_inputs_require_grad) return model def peft_module_casting_to_bf16(model): for name, module in model.named_modules(): if isinstance(module, torch.nn.LayerNorm) or "norm" in name: module = module.to(torch.float32) elif any(x in name for x in ["lm_head", "embed_tokens", "wte", "wpe"]): if hasattr(module, "weight"): if module.weight.dtype == torch.float32: module = module.to(torch.bfloat16) def prepare_peft_model( model: PreTrainedModel, peft_config: Optional["PeftConfig"], args: TrainingArguments ) -> PreTrainedModel: """Prepares a model for PEFT training.""" if not is_peft_available(): raise ImportError("PEFT is required to use a peft model. Run `pip install peft`.") # If the model is already a PeftModel, we need to merge and unload it. 
# Further information here: https://huggingface.co/docs/trl/dpo_trainer#reference-model-considerations-with-peft if isinstance(model, PeftModel) and peft_config is not None: model = model.merge_and_unload() # Handle quantized models (QLoRA) is_qlora = getattr(model, "is_loaded_in_4bit", False) or getattr(model, "is_loaded_in_8bit", False) is_sharded_qlora = False if getattr(model, "is_loaded_in_4bit", False): # Check if model is sharded (FSDP/DS-Zero3) for _, param in model.named_parameters(): if param.__class__.__name__ == "Params4bit": is_sharded_qlora = param.data.device.type in {"cpu", "meta"} break # Prepare model for kbit training if needed if is_qlora and not is_sharded_qlora: model = prepare_model_for_kbit_training( model, use_gradient_checkpointing=args.gradient_checkpointing, gradient_checkpointing_kwargs=args.gradient_checkpointing_kwargs or {}, ) # Disable gradient checkpointing as it's handled by prepare_model_for_kbit_training args = dataclasses.replace(args, gradient_checkpointing=False) elif args.gradient_checkpointing: model = enable_gradient_checkpointing(model, args.gradient_checkpointing_kwargs) # Create PEFT model if peft_config is not None: if ( version.parse(peft.__version__) >= version.parse("0.12") # autocast_adapter_dtype introduced in 0.12 and getattr(model, "is_loaded_in_4bit", False) and is_sharded_qlora ): model = get_peft_model(model, peft_config, autocast_adapter_dtype=False) else: model = get_peft_model(model, peft_config) # Handle bf16 casting for 4-bit models if args.bf16 and getattr(model, "is_loaded_in_4bit", False) and not is_sharded_qlora: peft_module_casting_to_bf16(model) return model
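

# Usage sketch (the checkpoint name and LoRA hyper-parameters below are illustrative assumptions, not
# values taken from this module): `prepare_peft_model` combines the QLoRA/gradient-checkpointing
# handling above with `get_peft_model`, so a caller only supplies a model, a PEFT config, and the
# TrainingArguments.
if __name__ == "__main__":
    from peft import LoraConfig
    from transformers import AutoModelForCausalLM

    _base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # illustrative checkpoint
    _lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
    _training_args = TrainingArguments(output_dir="peft-scratch", gradient_checkpointing=True)

    _peft_model = prepare_peft_model(_base_model, _lora_config, _training_args)
    _peft_model.print_trainable_parameters()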
trl/trl/models/utils.py/0
{ "file_path": "trl/trl/models/utils.py", "repo_id": "trl", "token_count": 9219 }
631
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import textwrap
import warnings
from collections import defaultdict
from pathlib import Path
from typing import Any, Callable, Optional, Union

import torch
from accelerate import Accelerator, logging
from accelerate.utils import ProjectConfiguration, set_seed
from huggingface_hub import PyTorchModelHubMixin
from transformers import is_wandb_available

from ..models import DDPOStableDiffusionPipeline
from .alignprop_config import AlignPropConfig
from .utils import generate_model_card, get_comet_experiment_url


if is_wandb_available():
    import wandb

logger = logging.get_logger(__name__)


class AlignPropTrainer(PyTorchModelHubMixin):
    """
    The AlignPropTrainer uses AlignProp (reward backpropagation through the diffusion sampling process) to optimise
    diffusion models. Note that this trainer is heavily inspired by the work at
    https://github.com/mihirp1998/AlignProp/. As of now, only Stable Diffusion based pipelines are supported.

    Attributes:
        config (`AlignPropConfig`):
            Configuration object for AlignPropTrainer. Check the documentation of `AlignPropConfig` for more details.
        reward_function (`Callable[[torch.Tensor, tuple[str], tuple[Any]], torch.Tensor]`):
            Reward function to be used.
        prompt_function (`Callable[[], tuple[str, Any]]`):
            Function to generate prompts to guide the model.
        sd_pipeline (`DDPOStableDiffusionPipeline`):
            Stable Diffusion pipeline to be used for training.
image_samples_hook (`Optional[Callable[[Any, Any, Any], Any]]`): Hook to be called to log images """ _tag_names = ["trl", "alignprop"] def __init__( self, config: AlignPropConfig, reward_function: Callable[[torch.Tensor, tuple[str], tuple[Any]], torch.Tensor], prompt_function: Callable[[], tuple[str, Any]], sd_pipeline: DDPOStableDiffusionPipeline, image_samples_hook: Optional[Callable[[Any, Any, Any], Any]] = None, ): warnings.warn( "AlignPropTrainer is deprecated and will be removed in version 0.23.0.", DeprecationWarning, ) if image_samples_hook is None: logger.warning("No image_samples_hook provided; no images will be logged") self.prompt_fn = prompt_function self.reward_fn = reward_function self.config = config self.image_samples_callback = image_samples_hook accelerator_project_config = ProjectConfiguration(**self.config.project_kwargs) if self.config.resume_from: self.config.resume_from = os.path.normpath(os.path.expanduser(self.config.resume_from)) if "checkpoint_" not in os.path.basename(self.config.resume_from): # get the most recent checkpoint in this directory checkpoints = list( filter( lambda x: "checkpoint_" in x, os.listdir(self.config.resume_from), ) ) if len(checkpoints) == 0: raise ValueError(f"No checkpoints found in {self.config.resume_from}") checkpoint_numbers = sorted([int(x.split("_")[-1]) for x in checkpoints]) self.config.resume_from = os.path.join( self.config.resume_from, f"checkpoint_{checkpoint_numbers[-1]}", ) accelerator_project_config.iteration = checkpoint_numbers[-1] + 1 self.accelerator = Accelerator( log_with=self.config.log_with, mixed_precision=self.config.mixed_precision, project_config=accelerator_project_config, # we always accumulate gradients across timesteps; we want config.train.gradient_accumulation_steps to be the # number of *samples* we accumulate across, so we need to multiply by the number of training timesteps to get # the total number of optimizer steps to accumulate across. gradient_accumulation_steps=self.config.train_gradient_accumulation_steps, **self.config.accelerator_kwargs, ) is_using_tensorboard = config.log_with is not None and config.log_with == "tensorboard" if self.accelerator.is_main_process: self.accelerator.init_trackers( self.config.tracker_project_name, config=dict(alignprop_trainer_config=config.to_dict()) if not is_using_tensorboard else config.to_dict(), init_kwargs=self.config.tracker_kwargs, ) logger.info(f"\n{config}") set_seed(self.config.seed, device_specific=True) self.sd_pipeline = sd_pipeline self.sd_pipeline.set_progress_bar_config( position=1, disable=not self.accelerator.is_local_main_process, leave=False, desc="Timestep", dynamic_ncols=True, ) # For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision # as these weights are only used for inference, keeping weights in full precision is not required. 
if self.accelerator.mixed_precision == "fp16": inference_dtype = torch.float16 elif self.accelerator.mixed_precision == "bf16": inference_dtype = torch.bfloat16 else: inference_dtype = torch.float32 self.sd_pipeline.vae.to(self.accelerator.device, dtype=inference_dtype) self.sd_pipeline.text_encoder.to(self.accelerator.device, dtype=inference_dtype) self.sd_pipeline.unet.to(self.accelerator.device, dtype=inference_dtype) trainable_layers = self.sd_pipeline.get_trainable_layers() self.accelerator.register_save_state_pre_hook(self._save_model_hook) self.accelerator.register_load_state_pre_hook(self._load_model_hook) # Enable TF32 for faster training on Ampere GPUs, # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices if self.config.allow_tf32: torch.backends.cuda.matmul.allow_tf32 = True self.optimizer = self._setup_optimizer( trainable_layers.parameters() if not isinstance(trainable_layers, list) else trainable_layers ) self.neg_prompt_embed = self.sd_pipeline.text_encoder( self.sd_pipeline.tokenizer( [""] if self.config.negative_prompts is None else self.config.negative_prompts, return_tensors="pt", padding="max_length", truncation=True, max_length=self.sd_pipeline.tokenizer.model_max_length, ).input_ids.to(self.accelerator.device) )[0] # NOTE: for some reason, autocast is necessary for non-lora training but for lora training it isn't necessary and it uses # more memory self.autocast = self.sd_pipeline.autocast or self.accelerator.autocast if hasattr(self.sd_pipeline, "use_lora") and self.sd_pipeline.use_lora: unet, self.optimizer = self.accelerator.prepare(trainable_layers, self.optimizer) self.trainable_layers = list(filter(lambda p: p.requires_grad, unet.parameters())) else: self.trainable_layers, self.optimizer = self.accelerator.prepare(trainable_layers, self.optimizer) if config.resume_from: logger.info(f"Resuming from {config.resume_from}") self.accelerator.load_state(config.resume_from) self.first_epoch = int(config.resume_from.split("_")[-1]) + 1 else: self.first_epoch = 0 def compute_rewards(self, prompt_image_pairs): reward, reward_metadata = self.reward_fn( prompt_image_pairs["images"], prompt_image_pairs["prompts"], prompt_image_pairs["prompt_metadata"] ) return reward def step(self, epoch: int, global_step: int): """ Perform a single step of training. Args: epoch (int): The current epoch. global_step (int): The current global step. Side Effects: - Model weights are updated - Logs the statistics to the accelerator trackers. - If `self.image_samples_callback` is not None, it will be called with the prompt_image_pairs, global_step, and the accelerator tracker. Returns: global_step (int): The updated global step. 
""" info = defaultdict(list) self.sd_pipeline.unet.train() for _ in range(self.config.train_gradient_accumulation_steps): with self.accelerator.accumulate(self.sd_pipeline.unet), self.autocast(), torch.enable_grad(): prompt_image_pairs = self._generate_samples( batch_size=self.config.train_batch_size, ) rewards = self.compute_rewards(prompt_image_pairs) prompt_image_pairs["rewards"] = rewards rewards_vis = self.accelerator.gather(rewards).detach().cpu().numpy() loss = self.calculate_loss(rewards) self.accelerator.backward(loss) if self.accelerator.sync_gradients: self.accelerator.clip_grad_norm_( self.trainable_layers.parameters() if not isinstance(self.trainable_layers, list) else self.trainable_layers, self.config.train_max_grad_norm, ) self.optimizer.step() self.optimizer.zero_grad() info["reward_mean"].append(rewards_vis.mean()) info["reward_std"].append(rewards_vis.std()) info["loss"].append(loss.item()) # Checks if the accelerator has performed an optimization step behind the scenes if self.accelerator.sync_gradients: # log training-related stuff info = {k: torch.mean(torch.tensor(v)) for k, v in info.items()} info = self.accelerator.reduce(info, reduction="mean") info.update({"epoch": epoch}) self.accelerator.log(info, step=global_step) global_step += 1 info = defaultdict(list) else: raise ValueError( "Optimization step should have been performed by this point. Please check calculated gradient accumulation settings." ) # Logs generated images if self.image_samples_callback is not None and global_step % self.config.log_image_freq == 0: self.image_samples_callback(prompt_image_pairs, global_step, self.accelerator.trackers[0]) if epoch != 0 and epoch % self.config.save_freq == 0 and self.accelerator.is_main_process: self.accelerator.save_state() return global_step def calculate_loss(self, rewards): """ Calculate the loss for a batch of an unpacked sample Args: rewards (torch.Tensor): Differentiable reward scalars for each generated image, shape: [batch_size] Returns: loss (torch.Tensor) (all of these are of shape (1,)) """ # Loss is specific to Aesthetic Reward function used in AlignProp (https://huggingface.co/papers/2310.03739) loss = 10.0 - (rewards).mean() return loss def loss( self, advantages: torch.Tensor, clip_range: float, ratio: torch.Tensor, ): unclipped_loss = -advantages * ratio clipped_loss = -advantages * torch.clamp( ratio, 1.0 - clip_range, 1.0 + clip_range, ) return torch.mean(torch.maximum(unclipped_loss, clipped_loss)) def _setup_optimizer(self, trainable_layers_parameters): if self.config.train_use_8bit_adam: import bitsandbytes optimizer_cls = bitsandbytes.optim.AdamW8bit else: optimizer_cls = torch.optim.AdamW return optimizer_cls( trainable_layers_parameters, lr=self.config.train_learning_rate, betas=(self.config.train_adam_beta1, self.config.train_adam_beta2), weight_decay=self.config.train_adam_weight_decay, eps=self.config.train_adam_epsilon, ) def _save_model_hook(self, models, weights, output_dir): self.sd_pipeline.save_checkpoint(models, weights, output_dir) weights.pop() # ensures that accelerate doesn't try to handle saving of the model def _load_model_hook(self, models, input_dir): self.sd_pipeline.load_checkpoint(models, input_dir) models.pop() # ensures that accelerate doesn't try to handle loading of the model def _generate_samples(self, batch_size, with_grad=True, prompts=None): """ Generate samples from the model Args: batch_size (int): Batch size to use for sampling with_grad (bool): Whether the generated RGBs should have gradients attached to it. 
Returns: prompt_image_pairs (dict[Any]) """ prompt_image_pairs = {} sample_neg_prompt_embeds = self.neg_prompt_embed.repeat(batch_size, 1, 1) if prompts is None: prompts, prompt_metadata = zip(*[self.prompt_fn() for _ in range(batch_size)]) else: prompt_metadata = [{} for _ in range(batch_size)] prompt_ids = self.sd_pipeline.tokenizer( prompts, return_tensors="pt", padding="max_length", truncation=True, max_length=self.sd_pipeline.tokenizer.model_max_length, ).input_ids.to(self.accelerator.device) prompt_embeds = self.sd_pipeline.text_encoder(prompt_ids)[0] if with_grad: sd_output = self.sd_pipeline.rgb_with_grad( prompt_embeds=prompt_embeds, negative_prompt_embeds=sample_neg_prompt_embeds, num_inference_steps=self.config.sample_num_steps, guidance_scale=self.config.sample_guidance_scale, eta=self.config.sample_eta, truncated_backprop_rand=self.config.truncated_backprop_rand, truncated_backprop_timestep=self.config.truncated_backprop_timestep, truncated_rand_backprop_minmax=self.config.truncated_rand_backprop_minmax, output_type="pt", ) else: sd_output = self.sd_pipeline( prompt_embeds=prompt_embeds, negative_prompt_embeds=sample_neg_prompt_embeds, num_inference_steps=self.config.sample_num_steps, guidance_scale=self.config.sample_guidance_scale, eta=self.config.sample_eta, output_type="pt", ) images = sd_output.images prompt_image_pairs["images"] = images prompt_image_pairs["prompts"] = prompts prompt_image_pairs["prompt_metadata"] = prompt_metadata return prompt_image_pairs def train(self, epochs: Optional[int] = None): """ Train the model for a given number of epochs """ global_step = 0 if epochs is None: epochs = self.config.num_epochs for epoch in range(self.first_epoch, epochs): global_step = self.step(epoch, global_step) def _save_pretrained(self, save_directory): self.sd_pipeline.save_pretrained(save_directory) self.create_model_card() # Ensure the model card is saved along with the checkpoint def _save_checkpoint(self, model, trial): if self.args.hub_model_id is None: model_name = Path(self.args.output_dir).name else: model_name = self.args.hub_model_id.split("/")[-1] self.create_model_card(model_name=model_name) super()._save_checkpoint(model, trial) def create_model_card( self, model_name: Optional[str] = None, dataset_name: Optional[str] = None, tags: Union[str, list[str], None] = None, ): """ Creates a draft of a model card using the information available to the `Trainer`. Args: model_name (`str` or `None`, *optional*, defaults to `None`): Name of the model. dataset_name (`str` or `None`, *optional*, defaults to `None`): Name of the dataset used for training. tags (`str`, `list[str]` or `None`, *optional*, defaults to `None`): Tags to be associated with the model card. 
""" if not self.is_world_process_zero(): return if hasattr(self.model.config, "_name_or_path") and not os.path.isdir(self.model.config._name_or_path): base_model = self.model.config._name_or_path else: base_model = None # normalize `tags` to a mutable set if tags is None: tags = set() elif isinstance(tags, str): tags = {tags} else: tags = set(tags) if hasattr(self.model.config, "unsloth_version"): tags.add("unsloth") tags.update(self._tag_names) # docstyle-ignore citation = textwrap.dedent("""\ @article{prabhudesai2024aligning, title = {{Aligning Text-to-Image Diffusion Models with Reward Backpropagation}}, author = {Mihir Prabhudesai and Anirudh Goyal and Deepak Pathak and Katerina Fragkiadaki}, year = 2024, eprint = {arXiv:2310.03739} }""") model_card = generate_model_card( base_model=base_model, model_name=model_name, hub_model_id=self.hub_model_id, dataset_name=dataset_name, tags=tags, wandb_url=wandb.run.url if is_wandb_available() and wandb.run is not None else None, comet_url=get_comet_experiment_url(), trainer_name="AlignProp", trainer_citation=citation, paper_title="Aligning Text-to-Image Diffusion Models with Reward Backpropagation", paper_id="2310.03739", ) model_card.save(os.path.join(self.args.output_dir, "README.md"))
trl/trl/trainer/alignprop_trainer.py/0
{ "file_path": "trl/trl/trainer/alignprop_trainer.py", "repo_id": "trl", "token_count": 8589 }
632
# Copyright 2020-2025 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import concurrent.futures import logging from abc import ABC, abstractmethod from typing import Optional, Union import numpy as np from accelerate import Accelerator from huggingface_hub import InferenceClient from transformers.utils import is_openai_available from ..import_utils import is_llm_blender_available if is_llm_blender_available(): import llm_blender if is_openai_available(): from openai import OpenAI DEFAULT_PAIRWISE_SYSTEM_PROMPT = '''I require a leaderboard for various large language models. I'll provide you with prompts given to these models and their corresponding outputs. Your task is to assess these responses, and select the model that produces the best output from a human perspective. ## Instruction {{ "instruction": """{prompt}""", }} ## Model Outputs Here are the unordered outputs from the models. Each output is associated with a specific model, identified by a unique model identifier. {{ {{ "model_identifier": "0", "output": """{response0}""" }}, {{ "model_identifier": "1", "output": """{response1}""" }} }} ## Task Evaluate the models on the basis of the quality and relevance of their results, and select the model that generated the best result. Reply with the identifier of the best model. Our evaluation will only take into account the first character of your answer, so make sure it contains only one of the identifiers and nothing else (no quotation marks, no spaces, no new lines, ...). ''' class BaseJudge(ABC): """ Base class for judges. The subclasses of this class should implement the `judge` method. """ @abstractmethod def judge(self, prompts: list[str], completions: list[str], shuffle_order: bool = True) -> list: raise NotImplementedError("Judge subclasses must implement the `judge` method.") class BaseRankJudge(ABC): """ Base class for LLM ranking judges. **Example**: ```python class MyRankJudge(BaseRankJudge): def judge(self, prompts, completions, shuffle_order=True): return ... # Your ranking logic here judge = MyRankJudge() judge.judge( prompts=["The capital of France is", "The capital of Germany is"], completions=[[" Paris", " Marseille", "Lyon"], [" Munich", " Berlin"]], ) # [[0, 1, 2], [1, 0]] ``` """ @abstractmethod def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[list[int]]: """ Judge the completion for the given prompts and return the ranks of each completion. Args: prompts (`list[str]`): List of prompts. completions (`list[list[str]]`): List of completions list, where each element is a list of completions for the corresponding prompt. shuffle_order (`bool`, *optional*, defaults to `True`): Whether to shuffle the order of the completions to avoid positional bias. Returns: `list[list[int]]`: List of lists of idxs, where each list contains the ranks of the completions for the corresponding prompt. 
E.g., `[1, 2, 0]` means that the second completion (`idx=1`) is the best, followed by the third, and then the first. """ raise NotImplementedError("Judge subclasses must implement the `judge` method.") class BasePairwiseJudge(BaseJudge): """ Base class for pairwise judges. """ @abstractmethod def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[int]: """ Judge the completion pairs for the given prompts. Args: prompts (`list[str]`): List of prompts. completions (`list[list[str]]`): List of completions pairs, where each element is a pair of completions for the corresponding prompt. shuffle_order (`bool`, *optional*, defaults to `True`): Whether to shuffle the order of the completions to avoid positional bias. Returns: `list[int]`: List of idxs, where each idx is the rank of the best completion for the corresponding prompt. E.g., `1` means that the second completion (`idx=1`) is the best. Note: If the judge returns `-1` for any prompt, it indicates that the inner process used to compute the preference has failed. For instance, this could occur if the underlying language model returned an invalid answer. In such cases, the caller should handle these invalid indices appropriately, possibly by implementing fallback logic or error handling. """ raise NotImplementedError("Judge subclasses must implement the `judge` method.") class BaseBinaryJudge(BaseJudge): """ Base class for binary judges. """ @abstractmethod def judge( self, prompts: list[str], completions: list[str], gold_completions: Optional[list[str]] = None, shuffle_order: bool = True, ) -> list[int]: """ Judge the completion for a given prompt. Used to assess if a completion satisfies a constraint. This base class should be used to implement binary evaluations as done in section 4.1.4 of the [CGPO paper](https://huggingface.co/papers/2409.20370). It is relevant for assessing whether a prompt-completion pair satisfies a specific constraint. Args: prompts (`list[str]`): List of prompts. completions (`list[str]`): List of completions. gold_completions (`list[str]`, `optional`): List of gold completions if it exists. shuffle_order (`bool`): Whether to shuffle the order of the completions to avoid positional bias. Returns: list[int]: A list of binary labels: - 1 indicates that the completion satisfies the evaluated constraint. - 0 indicates that the completion does not satisfy the evaluated constraint. Note: If the judge returns -1 for any prompt, it indicates that the inner process used to compute the preference has failed. For instance, this could occur if the underlying language model or rule based constraint returned an invalid answer. In such cases, the caller should handle these invalid indices appropriately, possibly by implementing fallback logic or error handling. """ raise NotImplementedError("Judge subclasses must implement the `judge` method.") class PairRMJudge(BasePairwiseJudge): """ LLM judge based on the PairRM model from AllenAI. This judge uses the PairRM model to rank pairs of completions for given prompts. It's designed for pairwise comparison of language model outputs. The PairRM model is loaded using the llm-blender library and runs on the default Accelerator device. **Attributes**: blender (`llm_blender.Blender`): An instance of the Blender class from llm-blender. 
**Example**: ```python >>> pairrm_judge = PairRMJudge() >>> prompts = ["Translate 'hello' to French", "What's the capital of Japan?"] >>> completions = [["Bonjour", "Salut"], ["Kyoto", "Tokyo"]] >>> results = pairrm_judge.judge(prompts, completions) >>> print(results) # [0, 1] (indicating the first completion is preferred for the first prompt and the second) ``` <Tip> This class requires the llm-blender library to be installed. Install it with: `pip install llm-blender`. </Tip> """ def __init__(self): if not is_llm_blender_available(): raise ValueError("llm-blender is not installed. Please install it with `pip install llm-blender`.") self.blender = llm_blender.Blender() self.blender.loadranker("llm-blender/PairRM", device=Accelerator().device) def judge( self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True, return_scores: bool = False, temperature: float = 1.0, ) -> list[Union[int, float]]: """ Judge the completion pairs for the given prompts using the PairRM model. Args: prompts (`list[str]`): List of prompts to judge. completions (`list[list[str]]`): List of completion pairs for each prompt. shuffle_order (`bool`, *optional*, defaults to `True`): Whether to shuffle the order of the completions to avoid positional bias. return_scores (`bool`, *optional*, defaults to `False`): If `True`, return probability scores of the first completion instead of ranks (i.e. a *soft-judge*). temperature (`float`, *optional*, defaults to `1.0`): Temperature for scaling logits if `return_scores` is True. Returns: `Union[list[int, float]]`: If `return_scores` is `False`, returns a list of ranks (`0` or `1`) for each prompt, indicating which completion is preferred. If `return_scores` is `True`, returns softmax probabilities for the first completion. Raises: `ValueError`: If the number of completions per prompt is not exactly 2. Note: Unlike llm-blender, ranks are 0-indexed (`0` means the first completion is preferred). """ if len(completions[0]) != 2: raise ValueError("PairRM judge requires exactly 2 completions per prompt.") # Shuffle the order of the completions to avoid positional bias if shuffle_order: flip_mask = np.random.choice([True, False], size=len(prompts)) completions = [pair[::-1] if flip else pair for flip, pair in zip(flip_mask, completions)] # Rank the completions ranks = self.blender.rank(prompts, completions, return_scores=return_scores, disable_tqdm=True) if not return_scores: ranks -= 1 # PairRM rank is 1-indexed, so we subtract 1 to make it 0-indexed else: # scale the logits by temperature ranks /= temperature # Flip back the ranks or scores to the original order if needed if shuffle_order: ranks[flip_mask] = ranks[flip_mask][:, ::-1] # Return the ranks or score probability if return_scores: logit_max = np.amax(ranks, axis=-1, keepdims=True) exp_logit_shifted = np.exp(ranks - logit_max) probs = exp_logit_shifted / np.sum(exp_logit_shifted, axis=-1, keepdims=True) return probs[:, 0].tolist() else: return ranks[:, 0].tolist() class HfPairwiseJudge(BasePairwiseJudge): """ Pairwise judge based on the Hugging Face API with chat completion. This judge is relevant for assessing the quality chat models, where the completion is a response to a given prompt. Args: model (`str`, *optional*, defaults to `"meta-llama/Meta-Llama-3-70B-Instruct"`): Model to use for the judge. token (`str`, *optional*): Hugging Face API token to use for the [`huggingface_hub.InferenceClient`]. 
system_prompt (`str` or `None`, *optional*, defaults to `None`): The system prompt to be used for the judge. If not provided, a default prompt is used. Note that the system prompt should contain the following placeholders: `{prompt}`, `{response0}`, and `{response1}`. Also, the inference is called with `max_tokens=1`, consequently the system prompt should ask for a single token response. """ def __init__( self, model="meta-llama/Meta-Llama-3-70B-Instruct", token: Optional[str] = None, system_prompt: Optional[str] = None, ): self.client = InferenceClient(model=model, token=token) self.system_prompt = system_prompt or DEFAULT_PAIRWISE_SYSTEM_PROMPT def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[int]: # Shuffle the order of the completions to avoid positional bias if shuffle_order: flip_mask = np.random.choice([True, False], size=len(prompts)) completions = [pair[::-1] if flip else pair for flip, pair in zip(flip_mask, completions)] # Define a function to get the rank for a single prompt, will be called concurrently def get_rank(prompt, candidates): content = self.system_prompt.format(prompt=prompt, response0=candidates[0], response1=candidates[1]) completion = self.client.chat_completion(messages=[{"role": "user", "content": content}], max_tokens=1) response = completion.choices[0].message.content if response in ["0", "1"]: return int(response) else: logging.debug(f"Invalid response from the judge model: '{response}'. Returning -1.") return -1 # Call the completions concurrently with concurrent.futures.ThreadPoolExecutor() as executor: ranks = list(executor.map(get_rank, prompts, completions)) # Flip back the ranks to the original order if needed if shuffle_order: ranks = [ranks[i] if not flip else 1 - ranks[i] for i, flip in enumerate(flip_mask)] # Return the ranks return ranks class OpenAIPairwiseJudge(BasePairwiseJudge): """ Judge based on the OpenAI API. This judge is relevant for assessing the quality chat models, where the completion is a response to a given prompt. Args: model (`str`, *optional*, defaults to `"gpt-4-turbo-preview"`): Model to use for the judge. system_prompt (`str` or `None`, *optional*, defaults to `None`): System prompt to be used for the judge. If not provided, a default prompt is used. Note that the system prompt should contain the following placeholders: `{prompt}`, `{response0}`, and `{response1}`. Also, the inference is called with `max_tokens=1`, consequently the system prompt should ask for a single token response. max_requests (`int` or `None`, *optional*, defaults to `1000`): Maximum number of requests to make to the OpenAI API. If set to `None`, there is no limit. """ def __init__( self, model="gpt-4-turbo-preview", system_prompt: Optional[str] = None, max_requests: Union[int, None] = 1_000 ): if not is_openai_available(): raise ValueError("OpenAI client is not installed. Please install it with 'pip install openai'.") self.client = OpenAI() self.model = model self.system_prompt = system_prompt or DEFAULT_PAIRWISE_SYSTEM_PROMPT self.max_requests = max_requests self.num_requests = 0 self._warned = False def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[int]: # Check if the limit of requests is reached, if so, use random choice instead if self.max_requests is not None and self.num_requests >= self.max_requests: if not self._warned: # Print the warning only once logging.warning( f"Reached the maximum number of requests ({self.max_requests}). 
From now on, returning -1 instead. " " To increase the limit, set `max_requests` to a higher value, or to `None` for no limit." ) self._warned = True return [-1] * len(prompts) # Shuffle the order of the completions to avoid positional bias if shuffle_order: flip_mask = np.random.choice([True, False], size=len(prompts)) completions = [pair[::-1] if flip else pair for flip, pair in zip(flip_mask, completions)] # Define a function to get the rank for a single prompt, will be called concurrently def get_rank(prompt, candidates): content = self.system_prompt.format(prompt=prompt, response0=candidates[0], response1=candidates[1]) messages = [{"role": "user", "content": content}] completion = self.client.chat.completions.create(model=self.model, messages=messages, max_tokens=1) response = completion.choices[0].message.content if response in ["0", "1"]: return int(response) else: logging.debug(f"Invalid response from the judge model: '{response}'. Returning -1.") return -1 # Call the completions concurrently with concurrent.futures.ThreadPoolExecutor() as executor: ranks = list(executor.map(get_rank, prompts, completions)) # Flip back the ranks to the original order if needed if shuffle_order: ranks = [ranks[i] if not flip else 1 - ranks[i] for i, flip in enumerate(flip_mask)] # Update the number of requests self.num_requests += len(prompts) # Return the ranks return ranks class AllTrueJudge(BaseBinaryJudge): """ Unify the decision of multiple [`BaseBinaryJudge`] instances. Returns `1` only if all inner binary judges return `1`. If any judge returns `0`, it returns `0`. If any judge returns `-1`, indicating a failure in its process, this judge will also return `-1`. Implements the Mixture of Judges as described in the [CGPO paper](https://huggingface.co/papers/2409.20370). Args: judges (`list[BaseBinaryJudge]`): A list of [`BaseBinaryJudge`] instances whose decisions will be unified. """ def __init__(self, judges: list[BaseBinaryJudge]): self.judges = judges def judge( self, prompts: list[str], completions: list[str], gold_completions: Optional[list[str]] = None, shuffle_order: bool = True, ) -> list[int]: all_binary_judgments = [ judge.judge(prompts, completions, gold_completions, shuffle_order) for judge in self.judges ] output = [] for binary_judgments in zip(*all_binary_judgments): # Check that all values are in {0, 1, -1} if any(binary_judgment not in {0, 1, -1} for binary_judgment in binary_judgments): raise ValueError( f"Invalid binary judgment: {binary_judgments}, expected list of values in {{0, 1, -1}}." ) # Unify the decision if -1 in binary_judgments: output.append(-1) elif all(binary_judgment == 1 for binary_judgment in binary_judgments): output.append(1) else: output.append(0) return output
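

# Usage sketch: `AllTrueJudge` composes several binary judges into a single constraint check, as its
# docstring describes. The judge below is a made-up, rule-based constraint (a completion-length cap)
# used only to illustrate the expected 0/1/-1 protocol of `BaseBinaryJudge`.
if __name__ == "__main__":

    class MaxLengthBinaryJudge(BaseBinaryJudge):
        """Toy binary judge: returns 1 if the completion is at most `max_chars` characters, else 0."""

        def __init__(self, max_chars: int = 200):
            self.max_chars = max_chars

        def judge(self, prompts, completions, gold_completions=None, shuffle_order=True):
            return [1 if len(completion) <= self.max_chars else 0 for completion in completions]

    judge = AllTrueJudge([MaxLengthBinaryJudge(max_chars=40), MaxLengthBinaryJudge(max_chars=200)])
    labels = judge.judge(prompts=["What is the capital of France?"], completions=["Paris."])
    print(labels)  # [1] -- both inner judges accept the completion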
trl/trl/trainer/judges.py/0
{ "file_path": "trl/trl/trainer/judges.py", "repo_id": "trl", "token_count": 7504 }
633
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
from dataclasses import dataclass, field

from ..trainer.utils import OnPolicyConfig


@dataclass
class RLOOConfig(OnPolicyConfig):
    r"""
    Configuration class for the [`RLOOTrainer`].

    This class includes only the parameters that are specific to RLOO training. For a full list of training
    arguments, please refer to the [`~transformers.TrainingArguments`] and [`OnPolicyConfig`] documentation. Note
    that default values in this class may differ from those in [`~transformers.TrainingArguments`].

    Using [`~transformers.HfArgumentParser`] we can turn this class into
    [argparse](https://docs.python.org/3/library/argparse#module-argparse) arguments that can be specified on the
    command line.

    Parameters:
        exp_name (`str`, *optional*, defaults to `os.path.basename(__file__)[: -len(".py")]`):
            Name of this experiment.
        reward_model_path (`str`, *optional*, defaults to `"EleutherAI/pythia-160m"`):
            Path to the reward model.
        num_ppo_epochs (`int`, *optional*, defaults to `4`):
            Number of epochs to train.
        whiten_rewards (`bool`, *optional*, defaults to `False`):
            Whether to whiten the rewards.
        kl_coef (`float`, *optional*, defaults to `0.05`):
            KL coefficient.
        cliprange (`float`, *optional*, defaults to `0.2`):
            Clip range.
        rloo_k (`int`, *optional*, defaults to `2`):
            REINFORCE Leave-One-Out (RLOO) number of online samples per prompt.
        normalize_reward (`bool`, *optional*, defaults to `False`):
            Whether to normalize rewards.
        reward_clip_range (`float`, *optional*, defaults to `10.0`):
            Clip range for rewards.
        normalize_advantage (`bool`, *optional*, defaults to `False`):
            Whether to normalize advantages.
        token_level_kl (`bool`, *optional*, defaults to `False`):
            Whether to use token-level KL penalty or sequence-level KL penalty.
        ds3_gather_for_generation (`bool`, *optional*, defaults to `True`):
            This setting applies to DeepSpeed ZeRO-3. If enabled, the policy model weights are gathered for
            generation, improving generation speed. However, disabling this option allows training models that
            exceed the VRAM capacity of a single GPU, albeit at the cost of slower generation.
""" exp_name: str = field( default=os.path.basename(__file__)[:-3], metadata={"help": "Name of this experiment."}, ) reward_model_path: str = field( default="EleutherAI/pythia-160m", metadata={"help": "Path to the reward model."}, ) num_ppo_epochs: int = field( default=4, metadata={"help": "Number of epochs to train."}, ) whiten_rewards: bool = field( default=False, metadata={"help": "Whether to whiten the rewards."}, ) kl_coef: float = field( default=0.05, metadata={"help": "KL coefficient."}, ) cliprange: float = field( default=0.2, metadata={"help": "Clip range."}, ) rloo_k: int = field( default=2, metadata={"help": "REINFORCE Leave-One-Out (RLOO) number of online samples per prompt."}, ) normalize_reward: bool = field( default=False, metadata={"help": "Whether to normalize rewards"}, ) reward_clip_range: float = field( default=10.0, metadata={"help": "Clip range for rewards"}, ) normalize_advantage: bool = field( default=False, metadata={"help": "Whether to normalize advantages"}, ) token_level_kl: bool = field( default=False, metadata={"help": "Whether to use token-level KL penalty or sequence-level KL penalty"}, ) ds3_gather_for_generation: bool = field( default=True, metadata={ "help": "This setting applies to DeepSpeed ZeRO-3. If enabled, the policy model weights are gathered for " "generation, improving generation speed. However, disabling this option allows training models that " "exceed the VRAM capacity of a single GPU, albeit at the cost of slower generation." }, )
trl/trl/trainer/rloo_config.py/0
{ "file_path": "trl/trl/trainer/rloo_config.py", "repo_id": "trl", "token_count": 1802 }
634
# AI Agent Observability & Evaluation

![Bonus Unit 2 Thumbnail](https://langfuse.com/images/cookbook/huggingface-agent-course/agent-observability-and-evaluation.png)

Welcome to **Bonus Unit 2**! In this chapter, you'll explore advanced strategies for observing, evaluating, and ultimately improving the performance of your agents.

---

## 📚 When Should I Do This Bonus Unit?

This bonus unit is perfect if you:

- **Develop and Deploy AI Agents:** You want to ensure that your agents are performing reliably in production.
- **Need Detailed Insights:** You're looking to diagnose issues, optimize performance, or understand the inner workings of your agent.
- **Aim to Reduce Operational Overhead:** By monitoring agent costs, latency, and execution details, you can efficiently manage resources.
- **Seek Continuous Improvement:** You’re interested in integrating both real-time user feedback and automated evaluation into your AI applications.

In short, for everyone who wants to bring their agents in front of users!

---

## 🤓 What You’ll Learn

In this unit, you'll learn:

- **Instrument Your Agent:** Integrate observability tools via OpenTelemetry with the *smolagents* framework.
- **Monitor Metrics:** Track performance indicators such as token usage (costs), latency, and error traces.
- **Evaluate in Real-Time:** Understand techniques for live evaluation, including gathering user feedback and leveraging an LLM-as-a-judge.
- **Offline Analysis:** Use benchmark datasets (e.g., GSM8K) to test and compare agent performance.

---

## 🚀 Ready to Get Started?

In the next section, you'll learn the basics of Agent Observability and Evaluation. After that, it's time to see it in action!
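As a first taste of what instrumentation looks like, here is a minimal, framework-agnostic sketch using the OpenTelemetry SDK (assumed installed via `pip install opentelemetry-sdk`). The `run_agent` function and the attribute names are placeholders rather than the smolagents or Langfuse API; the unit walks through the real integration step by step.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print spans to the console for this sketch; a real setup would export to an
# OpenTelemetry-compatible backend (Langfuse, Phoenix, ...) instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("agents-course.bonus-unit2")


def run_agent(question: str) -> str:
    # Stand-in for a real agent.run(...) call from your framework of choice
    return f"Answer to: {question}"


# Wrap one agent run in a span: latency is captured by the span itself,
# and anything else (token counts, errors) can be attached as attributes.
with tracer.start_as_current_span("agent-run") as span:
    question = "What's the weather in New York?"
    span.set_attribute("input.question", question)
    answer = run_agent(question)
    span.set_attribute("output.answer", answer)
```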
agents-course/units/en/bonus-unit2/introduction.mdx/0
{ "file_path": "agents-course/units/en/bonus-unit2/introduction.mdx", "repo_id": "agents-course", "token_count": 426 }
0
# Understanding AI Agents through the Thought-Action-Observation Cycle

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-3.jpg" alt="Unit 1 planning"/>

In the previous sections, we learned:

- **How tools are made available to the agent in the system prompt**.
- **How AI agents are systems that can 'reason', plan, and interact with their environment**.

In this section, **we’ll explore the complete AI Agent Workflow**, a cycle we defined as Thought-Action-Observation.

And then, we’ll dive deeper into each of these steps.

## The Core Components

Agents work in a continuous cycle of **thinking (Thought) → acting (Act) and observing (Observe)**.

Let’s break down these actions together:

1. **Thought:** The LLM part of the Agent decides what the next step should be.
2. **Action:** The agent takes an action by calling the tools with the associated arguments.
3. **Observation:** The model reflects on the response from the tool.

## The Thought-Action-Observation Cycle

The three components work together in a continuous loop. To use an analogy from programming, the agent uses a **while loop**: the loop continues until the objective of the agent has been fulfilled.

Visually, it looks like this:

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AgentCycle.gif" alt="Think, Act, Observe cycle"/>

In many Agent frameworks, **the rules and guidelines are embedded directly into the system prompt**, ensuring that every cycle adheres to a defined logic.

In a simplified version, our system prompt may look like this:

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/system_prompt_cycle.png" alt="Think, Act, Observe cycle"/>

We see here that in the System Message we defined:

- The *Agent's behavior*.
- The *Tools our Agent has access to*, as we described in the previous section.
- The *Thought-Action-Observation Cycle*, that we bake into the LLM instructions.

Let’s take a small example to understand the process before going deeper into each step.

## Alfred, the weather Agent

We created Alfred, the Weather Agent.

A user asks Alfred: “What’s the current weather in New York?”

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent.jpg" alt="Alfred Agent"/>

Alfred’s job is to answer this query using a weather API tool.

Here’s how the cycle unfolds:

### Thought

**Internal Reasoning:**

Upon receiving the query, Alfred’s internal dialogue might be:

*"The user needs current weather information for New York. I have access to a tool that fetches weather data. First, I need to call the weather API to get up-to-date details."*

This step shows the agent breaking the problem into steps: first, gathering the necessary data.

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-1.jpg" alt="Alfred Agent"/>

### Action

**Tool Usage:**

Based on its reasoning and the fact that Alfred knows about a `get_weather` tool, Alfred prepares a JSON-formatted command that calls the weather API tool. For example, its first action could be:

Thought: I need to check the current weather for New York.

```
{
  "action": "get_weather",
  "action_input": {
    "location": "New York"
  }
}
```

Here, the action clearly specifies which tool to call (e.g., get_weather) and what parameter to pass (the "location": "New York").
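To make the while-loop analogy concrete, here is a minimal sketch of the cycle in plain Python. Everything in it — the `get_weather` stub, the `llm` function, and the JSON action format — is a simplified stand-in for what a real agent framework handles for you, not an actual implementation of one.

```python
import json


def get_weather(location: str) -> str:
    # Stand-in for a real weather API call (the source of the Observation)
    return f"Current weather in {location}: partly cloudy, 15°C, 60% humidity."


TOOLS = {"get_weather": get_weather}


def llm(prompt: str) -> str:
    # Stand-in for the LLM: produces a Thought and either a JSON action or a final answer
    if "Observation:" not in prompt:
        return json.dumps({"action": "get_weather", "action_input": {"location": "New York"}})
    return "Final answer: The current weather in New York is partly cloudy, 15°C, 60% humidity."


prompt = "What's the current weather in New York?"
while True:  # the agent's while loop
    output = llm(prompt)                       # Thought: decide the next step
    if output.startswith("Final answer:"):     # objective fulfilled, exit the loop
        print(output)
        break
    action = json.loads(output)                # Action: parse the JSON command...
    observation = TOOLS[action["action"]](**action["action_input"])  # ...and call the tool
    prompt += f"\nObservation: {observation}"  # Observation: fed back in for the next Thought
```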
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-2.jpg" alt="Alfred Agent"/> ### Observation **Feedback from the Environment:** After the tool call, Alfred receives an observation. This might be the raw weather data from the API such as: *"Current weather in New York: partly cloudy, 15°C, 60% humidity."* <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-3.jpg" alt="Alfred Agent"/> This observation is then added to the prompt as additional context. It functions as real-world feedback, confirming whether the action succeeded and providing the needed details. ### Updated thought **Reflecting:** With the observation in hand, Alfred updates its internal reasoning: *"Now that I have the weather data for New York, I can compile an answer for the user."* <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-4.jpg" alt="Alfred Agent"/> ### Final Action Alfred then generates a final response formatted as we told it to: Thought: I have the weather data now. The current weather in New York is partly cloudy with a temperature of 15°C and 60% humidity." Final answer : The current weather in New York is partly cloudy with a temperature of 15°C and 60% humidity. This final action sends the answer back to the user, closing the loop. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/alfred-agent-5.jpg" alt="Alfred Agent"/> What we see in this example: - **Agents iterate through a loop until the objective is fulfilled:** **Alfred’s process is cyclical**. It starts with a thought, then acts by calling a tool, and finally observes the outcome. If the observation had indicated an error or incomplete data, Alfred could have re-entered the cycle to correct its approach. - **Tool Integration:** The ability to call a tool (like a weather API) enables Alfred to go **beyond static knowledge and retrieve real-time data**, an essential aspect of many AI Agents. - **Dynamic Adaptation:** Each cycle allows the agent to incorporate fresh information (observations) into its reasoning (thought), ensuring that the final answer is well-informed and accurate. This example showcases the core concept behind the *ReAct cycle* (a concept we're going to develop in the next section): **the interplay of Thought, Action, and Observation empowers AI agents to solve complex tasks iteratively**. By understanding and applying these principles, you can design agents that not only reason about their tasks but also **effectively utilize external tools to complete them**, all while continuously refining their output based on environmental feedback. --- Let’s now dive deeper into the Thought, Action, Observation as the individual steps of the process.
agents-course/units/en/unit1/agent-steps-and-structure.mdx/0
{ "file_path": "agents-course/units/en/unit1/agent-steps-and-structure.mdx", "repo_id": "agents-course", "token_count": 1797 }
1
# Conclusion

Congratulations on finishing the `LangGraph` module of this second Unit! 🥳

You've now mastered the fundamentals of building structured workflows with LangGraph, which you will be able to take to production.

This module is just the beginning of your journey with LangGraph. For more advanced topics, we recommend:

- Exploring the [official LangGraph documentation](https://github.com/langchain-ai/langgraph)
- Taking the comprehensive [Introduction to LangGraph](https://academy.langchain.com/courses/intro-to-langgraph) course from LangChain Academy
- Building something yourself!

In the next Unit, you'll explore real use cases. It's time to leave theory behind and get into real action!

We would greatly appreciate **your thoughts on the course and suggestions for improvement**. If you have feedback, please 👉 [fill this form](https://docs.google.com/forms/d/e/1FAIpQLSe9VaONn0eglax0uTwi29rIn4tM7H2sYmmybmG5jJNlE5v0xA/viewform?usp=dialog)

### Keep Learning, Stay Awesome! 🤗

Good Sir/Madam! 🎩🦇

-Alfred-
agents-course/units/en/unit2/langgraph/conclusion.mdx/0
{ "file_path": "agents-course/units/en/unit2/langgraph/conclusion.mdx", "repo_id": "agents-course", "token_count": 299 }
2