Dataset schema (8 columns):

  column     type         value range
  ---------  -----------  ----------------
  code       string       66 – 870k chars
  docstring  string       19 – 26.7k chars
  func_name  string       1 – 138 chars
  language   categorical  1 value
  repo       string       7 – 68 chars
  path       string       5 – 324 chars
  url        string       46 – 389 chars
  license    categorical  7 values
def get_current_gpu_memory_use():
    """returns a list of VRAM allocations per GPU in MBs"""
    per_device_memory = []
    for id in range(backend_device_count(torch_device)):
        with backend_torch_accelerator_module(torch_device).device(id):
            per_device_memory...
returns a list of VRAM allocations per GPU in MBs
get_current_gpu_memory_use
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_attn_implementation_composite_models(self):
    """
    Tests if composite models can receive a dict object as attn_implementation,
    where each key should be one of the sub-configs from the model's config.
    """
    if not self.has_attentions:
        self.skipTest(reason="Model ar...
Tests if composite models can receive a dict object as attn_implementation, where each key should be one of the sub-configs from the model's config.
test_attn_implementation_composite_models
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_sdpa_can_dispatch_non_composite_models(self):
    """
    Tests if non-composite models dispatch correctly on SDPA/eager when so
    requested at load time. This tests only by looking at layer names, as
    SDPA layers are usually called "SDPAAttention".
    """
    if not self.has_at...
Tests if non-composite models dispatch correctly on SDPA/eager when so requested at load time. This tests only by looking at layer names, as SDPA layers are usually called "SDPAAttention".
test_sdpa_can_dispatch_non_composite_models
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_sdpa_can_dispatch_composite_models(self):
    """
    Tests if composite models dispatch correctly on SDPA/eager when so
    requested at load time. This tests only by looking at layer names, as
    SDPA layers are usually called "SDPAAttention". In contrast to the above
    test, this on...
Tests if composite models dispatch correctly on SDPA/eager when so requested at load time. This tests only by looking at layer names, as SDPA layers are usually called "SDPAAttention". In contrast to the above test, this one checks if "config._attn_implementation" is a dict after ...
test_sdpa_can_dispatch_composite_models
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_flash_attn_2_can_dispatch_composite_models(self):
    """
    Tests if composite models can dispatch on FA2 if the sub-models support
    FA2. The test is needed as we handle composite models differently and
    cannot check them with the tests above. If any of the sub-models does
    not support F...
Tests if composite models can dispatch on FA2 if the sub-models support FA2. The test is needed as we handle composite models differently and cannot check them with the tests above. If any of the sub-models does not support FA2, we'll raise an error when dispatching that particular sub-...
test_flash_attn_2_can_dispatch_composite_models
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_sliding_window_mask(self): """Tests that we can control the sliding window attention behavior of a model.""" config, inputs = self.model_tester.prepare_config_and_inputs_for_common() if not self.has_attentions: self.skipTest(reason="Model does not support output_attentions"...
Tests that we can control the sliding window attention behavior of a model.
test_sliding_window_mask
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_torch_export(self, config=None, inputs_dict=None, tolerance=1e-4): """ Test if model can be exported with torch.export.export() Args: config (PretrainedConfig): Config to use for the model, if None, use default config from model_tester inputs_dic...
Test if model can be exported with torch.export.export() Args: config (PretrainedConfig): Config to use for the model, if None, use default config from model_tester inputs_dict (dict): Inputs to use for the model, if None, use default inputs from...
test_torch_export
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def test_generation_tester_mixin_inheritance(self): """ Ensures that we have the generation tester mixin if the model can generate. The test will fail otherwise, forcing the mixin to be added -- and ensuring proper test coverage """ if len(self.all_generative_model_classes) > 0: ...
Ensures that we have the generation tester mixin if the model can generate. The test will fail otherwise, forcing the mixin to be added -- and ensuring proper test coverage
test_generation_tester_mixin_inheritance
python
huggingface/transformers
tests/test_modeling_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_common.py
Apache-2.0
def ids_tensor(shape, vocab_size, rng=None):
    """Creates a random int32 tensor of the shape within the vocab size."""
    if rng is None:
        rng = random.Random()
    total_dims = 1
    for dim in shape:
        total_dims *= dim
    values = []
    for _ in range(total_dims):
        values.append(rng.randin...
Creates a random int32 tensor of the shape within the vocab size.
ids_tensor
python
huggingface/transformers
tests/test_modeling_flax_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_flax_common.py
Apache-2.0
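The truncated `ids_tensor` helper above can be sketched end to end; this framework-neutral version uses NumPy for the final tensor (the actual flax/torch variants differ only in the constructor at the end):

```python
import random

import numpy as np


def ids_tensor(shape, vocab_size, rng=None):
    """Create a random int32 array of the given shape, values drawn from [0, vocab_size)."""
    if rng is None:
        rng = random.Random()

    # One draw per element, then reshape to the requested shape.
    total_dims = 1
    for dim in shape:
        total_dims *= dim

    values = [rng.randint(0, vocab_size - 1) for _ in range(total_dims)]
    return np.array(values, dtype=np.int32).reshape(shape)
```

Passing an explicit `random.Random(seed)` makes the fake input ids reproducible across test runs.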
def get_params(params, from_head_prefix=None):
    """Extracts relevant parameters from the model params into a flattened dict and appends batch-normalization statistics if present"""
    # If both parameters and batch normalization statistics are present
    if "batch_stats" in params:
        # Extract only parame...
Extracts relevant parameters from the model params into a flattened dict and appends batch-normalization statistics if present
get_params
python
huggingface/transformers
tests/test_modeling_flax_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_flax_common.py
Apache-2.0
def _make_attention_mask_non_null(self, inputs_dict): """Make sure no sequence has all zeros as attention mask""" for k in ["attention_mask", "encoder_attention_mask", "decoder_attention_mask"]: if k in inputs_dict: attention_mask = inputs_dict[k] # Make sur...
Make sure no sequence has all zeros as attention mask
_make_attention_mask_non_null
python
huggingface/transformers
tests/test_modeling_tf_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py
Apache-2.0
def _postprocessing_to_ignore_test_cases(self, tf_outputs, pt_outputs, model_class): """For temporarily ignoring some failed test cases (issues to be fixed)""" tf_keys = {k for k, v in tf_outputs.items() if v is not None} pt_keys = {k for k, v in pt_outputs.items() if v is not None} ke...
For temporarily ignoring some failed test cases (issues to be fixed)
_postprocessing_to_ignore_test_cases
python
huggingface/transformers
tests/test_modeling_tf_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py
Apache-2.0
def ids_tensor(shape, vocab_size, rng=None, name=None, dtype=None):
    """Creates a random int32 tensor of the shape within the vocab size."""
    if rng is None:
        rng = random.Random()
    total_dims = 1
    for dim in shape:
        total_dims *= dim
    values = []
    for _ in range(total_dims):
        v...
Creates a random int32 tensor of the shape within the vocab size.
ids_tensor
python
huggingface/transformers
tests/test_modeling_tf_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py
Apache-2.0
def run_task_tests(self, task, torch_dtype="float32"): """Run pipeline tests for a specific `task` Args: task (`str`): A task name. This should be a key in the mapping `pipeline_test_mapping`. torch_dtype (`str`, `optional`, defaults to `'float32'`): ...
Run pipeline tests for a specific `task` Args: task (`str`): A task name. This should be a key in the mapping `pipeline_test_mapping`. torch_dtype (`str`, `optional`, defaults to `'float32'`): The torch dtype to use for the model. Can be used for FP16/oth...
run_task_tests
python
huggingface/transformers
tests/test_pipeline_mixin.py
https://github.com/huggingface/transformers/blob/master/tests/test_pipeline_mixin.py
Apache-2.0
def is_pipeline_test_to_skip( self, pipeline_test_case_name, config_class, model_architecture, tokenizer_name, image_processor_name, feature_extractor_name, processor_name, ): """Skip some tests based on the classes or their names without the i...
Skip some tests based on the classes or their names without the instantiated objects. This is to avoid calling `from_pretrained` (so reducing the runtime) if we already know the tests will fail.
is_pipeline_test_to_skip
python
huggingface/transformers
tests/test_pipeline_mixin.py
https://github.com/huggingface/transformers/blob/master/tests/test_pipeline_mixin.py
Apache-2.0
def is_pipeline_test_to_skip_more( self, pipeline_test_case_name, config, model, tokenizer, image_processor=None, feature_extractor=None, processor=None, ): # noqa """Skip some more tests based on the information from the instantiated objects....
Skip some more tests based on the information from the instantiated objects.
is_pipeline_test_to_skip_more
python
huggingface/transformers
tests/test_pipeline_mixin.py
https://github.com/huggingface/transformers/blob/master/tests/test_pipeline_mixin.py
Apache-2.0
def compare_pipeline_args_to_hub_spec(pipeline_class, hub_spec): """ Compares the docstring of a pipeline class to the fields of the matching Hub input signature class to ensure that they match. This guarantees that Transformers pipelines can be used in inference without needing to manually refactor or ...
Compares the docstring of a pipeline class to the fields of the matching Hub input signature class to ensure that they match. This guarantees that Transformers pipelines can be used in inference without needing to manually refactor or rename inputs.
compare_pipeline_args_to_hub_spec
python
huggingface/transformers
tests/test_pipeline_mixin.py
https://github.com/huggingface/transformers/blob/master/tests/test_pipeline_mixin.py
Apache-2.0
def prepare_image_inputs():
    """This function prepares a list of PIL images"""
    image_inputs = [np.random.randint(255, size=(3, 30, 400), dtype=np.uint8)]
    image_inputs = [Image.fromarray(np.moveaxis(x, 0, -1)) for x in image_inputs]
    return image_inputs
This function prepares a list of PIL images
prepare_image_inputs
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
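The `prepare_image_inputs` helper above uses `np.moveaxis` because `Image.fromarray` expects a channel-last (H, W, C) array while the helper generates a channel-first (C, H, W) one; the axis shuffle can be checked in isolation without Pillow:

```python
import numpy as np

# A channel-first (C, H, W) random image, as produced in the helper above.
arr = np.random.randint(255, size=(3, 30, 400), dtype=np.uint8)

# PIL's Image.fromarray expects channel-last (H, W, C), so the helper moves
# the channel axis to the end before converting.
hwc = np.moveaxis(arr, 0, -1)
print(hwc.shape)  # (30, 400, 3)
```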
def prepare_image_inputs(self, batch_size: Optional[int] = None):
    """This function prepares a list of PIL images for testing"""
    if batch_size is None:
        return prepare_image_inputs()[0]
    if batch_size < 1:
        raise ValueError("batch_size must be greater than 0")
    return ...
This function prepares a list of PIL images for testing
prepare_image_inputs
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
def prepare_video_inputs(self, batch_size: Optional[int] = None):
    """This function prepares a list of numpy videos."""
    video_input = [np.random.randint(255, size=(3, 30, 400), dtype=np.uint8)] * 8
    if batch_size is None:
        return video_input
    return [video_input] * batch_size
This function prepares a list of numpy videos.
prepare_video_inputs
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
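The nesting produced by `prepare_video_inputs` above — a batch is a list of videos, each video a list of frames — can be demonstrated directly:

```python
import numpy as np

# One video is a list of 8 frames; a batch is a list of such videos
# (mirroring the prepare_video_inputs helper above).
frame = np.random.randint(255, size=(3, 30, 400), dtype=np.uint8)
video = [frame] * 8
batch = [video] * 2

print(len(batch), len(batch[0]), batch[0][0].shape)  # 2 8 (3, 30, 400)
```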
def test_image_processor_defaults_preserved_by_image_kwargs(self): """ We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the or...
We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the original pixel_values are in [0, 255], this is a good indicator that the rescale...
test_image_processor_defaults_preserved_by_image_kwargs
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
def test_video_processor_defaults_preserved_by_video_kwargs(self): """ We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the or...
We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the original pixel_values are in [0, 255], this is a good indicator that the rescale...
test_video_processor_defaults_preserved_by_video_kwargs
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
def test_overlapping_text_audio_kwargs_handling(self):
    """
    Checks that `padding`, or any other arg that overlaps between the audio
    feature extractor and the tokenizer, is passed only to text and ignored
    for audio, for BC purposes
    """
    if "feature_extractor" not in self.processor_class.attributes:
Checks that `padding`, or any other arg that overlaps between the audio feature extractor and the tokenizer, is passed only to text and ignored for audio, for BC purposes
test_overlapping_text_audio_kwargs_handling
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
def test_apply_chat_template_video_special_processing(self): """ Tests that models can use their own preprocessing to preprocess conversations. """ processor = self.get_processor() if processor.chat_template is None: self.skipTest("Processor has no chat template") ...
Tests that models can use their own preprocessing to preprocess conversations.
test_apply_chat_template_video_special_processing
python
huggingface/transformers
tests/test_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_processing_common.py
Apache-2.0
def check_subword_sampling( tokenizer: PreTrainedTokenizer, text: Optional[str] = None, test_sentencepiece_ignore_case: bool = True, ) -> None: """ Check if the tokenizer generates different results when subword regularization is enabled. Subword regularization augments training data with subwo...
Check if the tokenizer generates different results when subword regularization is enabled. Subword regularization augments training data with subword sampling. This has a random component. Args: tokenizer: The tokenizer to check. text: The text to use for the checks. test_sent...
check_subword_sampling
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def tokenizer_integration_test_util( self, expected_encoding: dict, model_name: str, revision: Optional[str] = None, sequences: Optional[list[str]] = None, decode_kwargs: Optional[dict[str, Any]] = None, padding: bool = True, ): """ Util for in...
Util for integration test. Text is tokenized and then reverted back to text. Both results are then checked. Args: expected_encoding: The expected result of the tokenizer output. model_name: The model name of the tokenizer to load and use...
tokenizer_integration_test_util
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def test_pickle_tokenizer(self): """Google pickle __getstate__ __setstate__ if you are struggling with this.""" tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): self.assertIsNotNone(tokenizer) ...
Google pickle __getstate__ __setstate__ if you are struggling with this.
test_pickle_tokenizer
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def test_continue_final_message_with_trim(self): """Regression test for chat templates with trimming: https://github.com/huggingface/transformers/pull/34214""" dummy_template = """ {%- for message in messages %} {{- "<|im_start|>" + message['role'] + "\n" + message['content'] | trim...
Regression test for chat templates with trimming: https://github.com/huggingface/transformers/pull/34214
test_continue_final_message_with_trim
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def test_continue_final_message_with_decoy_earlier_message(self): """Regression test for chat templates where an earlier message has similar content to the final message https://github.com/huggingface/transformers/issues/35433""" dummy_template = """ {%- for message in messages %} ...
Regression test for chat templates where an earlier message has similar content to the final message https://github.com/huggingface/transformers/issues/35433
test_continue_final_message_with_decoy_earlier_message
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def test_encode_plus_with_padding(self, use_padding_as_call_kwarg: bool): """ This test checks that padding works as expected when tokenizing a sequence. Padding is expected to have no effect when the input is a single sequence and the padding-strategy is not `max_length`. Otherwise it p...
This test checks that padding works as expected when tokenizing a sequence. Padding is expected to have no effect when the input is a single sequence and the padding-strategy is not `max_length`. Otherwise it pads to the specified max-length using tokenizer classes `padding_side` attrib...
test_encode_plus_with_padding
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def test_batch_encode_dynamic_overflowing(self):
    """
    When calling batch_encode with multiple sequences, it can return a
    different number of overflowing encodings for each sequence:
    [
        Sequence 1: [Encoding 1, Encoding 2],
        Sequence 2: [Encoding 1],
        Sequence 3: [En...
When calling batch_encode with multiple sequences, it can return a different number of overflowing encodings for each sequence: [ Sequence 1: [Encoding 1, Encoding 2], Sequence 2: [Encoding 1], Sequence 3: [Encoding 1, Encoding 2, ... Encoding N] ] This...
test_batch_encode_dynamic_overflowing
python
huggingface/transformers
tests/test_tokenization_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py
Apache-2.0
def test_custom_output_dir(self):
    """Test that output_dir is respected when specified."""
    with tempfile.TemporaryDirectory() as tmp_dir:
        args = TrainingArguments(output_dir=tmp_dir)
        self.assertEqual(args.output_dir, tmp_dir)
Test that output_dir is respected when specified.
test_custom_output_dir
python
huggingface/transformers
tests/test_training_args.py
https://github.com/huggingface/transformers/blob/master/tests/test_training_args.py
Apache-2.0
def test_output_dir_creation(self): """Test that output_dir is created only when needed.""" with tempfile.TemporaryDirectory() as tmp_dir: output_dir = os.path.join(tmp_dir, "test_output") # Directory should not exist before creating args self.assertFalse(os.path.exi...
Test that output_dir is created only when needed.
test_output_dir_creation
python
huggingface/transformers
tests/test_training_args.py
https://github.com/huggingface/transformers/blob/master/tests/test_training_args.py
Apache-2.0
def test_torch_empty_cache_steps_requirements(self):
    """Test that torch_empty_cache_steps is a positive integer or None."""
    # None is acceptable (feature is disabled):
    args = TrainingArguments(torch_empty_cache_steps=None)
    self.assertIsNone(args.torch_empty_cache_steps)
    # non-i...
Test that torch_empty_cache_steps is a positive integer or None.
test_torch_empty_cache_steps_requirements
python
huggingface/transformers
tests/test_training_args.py
https://github.com/huggingface/transformers/blob/master/tests/test_training_args.py
Apache-2.0
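The constraint exercised by `test_torch_empty_cache_steps_requirements` — the value must be a positive integer or `None` — can be sketched as a standalone validator; the function name below is illustrative, not the Transformers implementation:

```python
def validate_torch_empty_cache_steps(value):
    """Accept None (feature disabled) or a positive int; reject everything else."""
    if value is None:
        return value
    # bool is a subclass of int, so exclude it explicitly.
    if not isinstance(value, int) or isinstance(value, bool) or value <= 0:
        raise ValueError("torch_empty_cache_steps must be a positive integer or None")
    return value
```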
def prepare_video(num_frames, num_channels, width=10, height=10, return_tensors="pil"):
    """This function prepares a video as a list of PIL images/NumPy arrays/PyTorch tensors."""
    video = []
    for i in range(num_frames):
        video.append(np.random.randint(255, size=(width, height, num_channels), dtype=np....
This function prepares a video as a list of PIL images/NumPy arrays/PyTorch tensors.
prepare_video
python
huggingface/transformers
tests/test_video_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_video_processing_common.py
Apache-2.0
def prepare_video_inputs( batch_size, num_frames, num_channels, min_resolution, max_resolution, equal_resolution=False, return_tensors="pil", ): """This function prepares a batch of videos: a list of list of PIL images, or a list of list of numpy arrays if one specifies return_tensor...
This function prepares a batch of videos: a list of list of PIL images, or a list of list of numpy arrays if one specifies return_tensors="np", or a list of list of PyTorch tensors if one specifies return_tensors="torch". One can specify whether the videos are of the same resolution or not.
prepare_video_inputs
python
huggingface/transformers
tests/test_video_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_video_processing_common.py
Apache-2.0
def test_nested_input(self): """Tests that the processor can work with nested list where each video is a list of arrays""" for video_processing_class in self.video_processor_list: video_processing = video_processing_class(**self.video_processor_dict) video_inputs = self.video_pro...
Tests that the processor can work with nested list where each video is a list of arrays
test_nested_input
python
huggingface/transformers
tests/test_video_processing_common.py
https://github.com/huggingface/transformers/blob/master/tests/test_video_processing_common.py
Apache-2.0
def test_transform_and_reverse(self): r""" Classic tests to simply check if the conversion has been successful. """ model_id = "hf-internal-testing/tiny-random-t5" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSeq2SeqLM.from_pretrained(model_id) ...
Classic tests to simply check if the conversion has been successful.
test_transform_and_reverse
python
huggingface/transformers
tests/bettertransformer/test_integration.py
https://github.com/huggingface/transformers/blob/master/tests/bettertransformer/test_integration.py
Apache-2.0
def test_error_save_pretrained(self): r""" The save_pretrained method should raise a ValueError if the model is in BetterTransformer mode. All should be good if the model is reversed. """ model_id = "hf-internal-testing/tiny-random-t5" model = AutoModelForSeq2SeqLM.from_p...
The save_pretrained method should raise a ValueError if the model is in BetterTransformer mode. All should be good if the model is reversed.
test_error_save_pretrained
python
huggingface/transformers
tests/bettertransformer/test_integration.py
https://github.com/huggingface/transformers/blob/master/tests/bettertransformer/test_integration.py
Apache-2.0
def get_master_port(real_launcher=False): """ When using a single gpu launcher emulation (i.e. not deepspeed or python -m torch.distributed) the issue is that once the port is tied it can't be used anywhere else outside of this process, since torch.dist doesn't free the port until the process exits. The...
When using a single gpu launcher emulation (i.e. not deepspeed or python -m torch.distributed) the issue is that once the port is tied it can't be used anywhere else outside of this process, since torch.dist doesn't free the port until the process exits. Therefore for the sake of being able to run both...
get_master_port
python
huggingface/transformers
tests/deepspeed/test_deepspeed.py
https://github.com/huggingface/transformers/blob/master/tests/deepspeed/test_deepspeed.py
Apache-2.0
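The port-offset trick described in `get_master_port`'s docstring can be sketched as follows; the environment variable name and base port here are assumptions for illustration, not necessarily what the deepspeed/fsdp test suites use:

```python
import os


def get_master_port(real_launcher=False):
    """Return distinct master ports for emulated vs. real distributed launchers.

    Once torch.distributed binds a port it holds it until the process exits, so
    an emulated single-GPU run and a real launcher run must not share a port.
    The base port and env var name below are illustrative assumptions.
    """
    master_port_base = int(os.environ.get("DS_TEST_PORT", "10999"))
    if not real_launcher:
        master_port_base += 1  # offset the emulated launcher onto its own port
    return str(master_port_base)
```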
def require_deepspeed_aio(test_case):
    """
    Decorator marking a test that requires deepspeed aio (nvme)
    """
    if not is_deepspeed_available():
        return unittest.skip(reason="test requires deepspeed")(test_case)
    import deepspeed
    from deepspeed.ops.aio import AsyncIOBuilder
    if not deepspee...
Decorator marking a test that requires deepspeed aio (nvme)
require_deepspeed_aio
python
huggingface/transformers
tests/deepspeed/test_deepspeed.py
https://github.com/huggingface/transformers/blob/master/tests/deepspeed/test_deepspeed.py
Apache-2.0
def get_master_port(real_launcher=False): """ When using a single gpu launcher emulation (i.e. not deepspeed or python -m torch.distributed) the issue is that once the port is tied it can't be used anywhere else outside of this process, since torch.dist doesn't free the port until the process exits. The...
When using a single gpu launcher emulation (i.e. not deepspeed or python -m torch.distributed) the issue is that once the port is tied it can't be used anywhere else outside of this process, since torch.dist doesn't free the port until the process exits. Therefore for the sake of being able to run both...
get_master_port
python
huggingface/transformers
tests/fsdp/test_fsdp.py
https://github.com/huggingface/transformers/blob/master/tests/fsdp/test_fsdp.py
Apache-2.0
def test_get_assistant_to_target_input_ids(self): """Test the mapping from assistant tokens to target tokens.""" expected_mapping = [0, 1, 2, self.translator.SUPPRESS_TOKEN_ID, self.translator.SUPPRESS_TOKEN_ID] actual_mapping = self.translator._assistant_to_target_input_ids.tolist() sel...
Test the mapping from assistant tokens to target tokens.
test_get_assistant_to_target_input_ids
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_get_suppress_input_ids(self): """Test the suppression of assistant input IDs not present in the target vocabulary.""" expected_suppress_ids = [3, 4] actual_suppress_ids = self.translator._get_suppress_input_ids().tolist() self.assertEqual(actual_suppress_ids, expected_suppress_i...
Test the suppression of assistant input IDs not present in the target vocabulary.
test_get_suppress_input_ids
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_get_target_ids(self): """Test the translation of assistant candidate IDs to target candidate IDs.""" assistant_input_ids = torch.LongTensor([[0, 1, 2]]).to( self.assistant_model.device ) # 'hello world foo' in assistant tokenizer target_input_ids = torch.LongTensor(...
Test the translation of assistant candidate IDs to target candidate IDs.
test_get_target_ids
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_get_target_logits(self): """Test the conversion of assistant logits to target logits.""" # Assistant logits for IDs 0, 1, 2 assistant_logits = torch.FloatTensor([[[0.1, 0.2, 0.3, 0.4, self.translator.FILTER_VALUE]]]).to( self.assistant_model.device ) # Shape (1, 1, ...
Test the conversion of assistant logits to target logits.
test_get_target_logits
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_same_instance_for_same_tokenizers(self): """Test that the same translator is returned for the same tokenizers.""" translator1 = AssistantVocabTranslatorCache.get_translator( self.target_tokenizer, self.assistant_tokenizer, target_vocab_size=self.target_vocab_...
Test that the same translator is returned for the same tokenizers.
test_same_instance_for_same_tokenizers
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_different_instances_for_different_tokenizers(self): """Test that different tokenizers produce different translators.""" translator1 = AssistantVocabTranslatorCache.get_translator( self.target_tokenizer, self.assistant_tokenizer, target_vocab_size=self.target_...
Test that different tokenizers produce different translators.
test_different_instances_for_different_tokenizers
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_cache_with_weakref_key(self): """Ensure that the cache uses weak references as keys.""" initial_cache_size = len(AssistantVocabTranslatorCache._cache) target_tokenizer = MockTokenizer({"hello": 0}) assistant_tokenizer = MockTokenizer({"hello": 0}) # Store translator in ...
Ensure that the cache uses weak references as keys.
test_cache_with_weakref_key
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_weakref_cache_cleanup(self): """Test that the cache cleans up translators when tokenizers are garbage collected.""" def create_translator(): target_tokenizer = MockTokenizer({"hello": 0}) assistant_tokenizer = MockTokenizer({"hello": 0}) translator = Assista...
Test that the cache cleans up translators when tokenizers are garbage collected.
test_weakref_cache_cleanup
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_mismatched_vocabularies(self): """Test handling of mismatched vocabularies between models""" # Create input with tokens present in main but not assistant vocab # Find a token that is not in the assistant tokenizer but in # the main tokenizer. missing_token = next( ...
Test handling of mismatched vocabularies between models
test_mismatched_vocabularies
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_device_consistency(self):
    """Test handling of inputs on different devices"""
    input_ids = torch.tensor([[1, 2, 3]]).to(torch_device)
    self.generator.input_ids = input_ids
    candidates, _ = self.generator.get_candidates(input_ids)
    self.assertEqual(candidates.device, input_ids...
Test handling of inputs on different devices
test_device_consistency
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_usd_vs_vanilla_sampling(cls): """Test that USD matches vanilla sampling with temperature set to nearly 0""" prompt = "Test text" pipe_vanilla = pipeline( "text-generation", model=cls.target_name, ) pipe_vanilla_output = pipe_vanilla(prompt, max_n...
Test that USD matches vanilla sampling with temperature set to nearly 0
test_usd_vs_vanilla_sampling
python
huggingface/transformers
tests/generation/test_candidate_generator.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_candidate_generator.py
Apache-2.0
def test_kwarg_init(self): """Tests that we can overwrite attributes at `from_pretrained` time.""" default_config = GenerationConfig() self.assertEqual(default_config.temperature, 1.0) self.assertEqual(default_config.do_sample, False) self.assertEqual(default_config.num_beams, 1)...
Tests that we can overwrite attributes at `from_pretrained` time.
test_kwarg_init
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_validate(self): """ Tests that the `validate` method is working as expected. Note that `validate` is called at initialization time """ logger = transformers_logging.get_logger("transformers.generation.configuration_utils") # A correct configuration will not throw any wa...
Tests that the `validate` method is working as expected. Note that `validate` is called at initialization time.
test_validate
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_refuse_to_save(self): """Tests that we refuse to save a generation config that fails validation.""" # setting the temperature alone is invalid, as we also need to set do_sample to True -> throws a warning that # is caught, doesn't save, and raises an exception config = Generati...
Tests that we refuse to save a generation config that fails validation.
test_refuse_to_save
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_generation_mode(self): """Tests that the `get_generation_mode` method is working as expected.""" config = GenerationConfig() self.assertEqual(config.get_generation_mode(), GenerationMode.GREEDY_SEARCH) config = GenerationConfig(do_sample=True) self.assertEqual(config.ge...
Tests that the `get_generation_mode` method is working as expected.
test_generation_mode
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
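The generation-mode test checks how config flags map to a decoding strategy. A hypothetical re-implementation of the core of that mapping (the real `GenerationConfig.get_generation_mode` covers many more cases, e.g. assisted and constrained decoding):

```python
def get_generation_mode(do_sample=False, num_beams=1, num_beam_groups=1):
    """Sketch of the flag-to-mode mapping exercised by the test."""
    if num_beams == 1:
        return "sample" if do_sample else "greedy_search"
    if num_beam_groups > 1:
        return "group_beam_search"
    return "beam_sample" if do_sample else "beam_search"


m1 = get_generation_mode()                                 # greedy_search
m2 = get_generation_mode(do_sample=True)                   # sample
m3 = get_generation_mode(num_beams=4)                      # beam_search
m4 = get_generation_mode(num_beams=4, num_beam_groups=2)   # group_beam_search
```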
def test_static_cache_without_cache_config(self): """Regression test for #35026 -- static cache should work without a cache config.""" config = GenerationConfig(cache_implementation="static") self.assertEqual(config.cache_implementation, "static") self.assertEqual(config.cache_config, No...
Regression test for #35026 -- static cache should work without a cache config.
test_static_cache_without_cache_config
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_sequence_bias(self): """Tests that GenerationConfig is serialized and SequenceBiasLogitsProcessor is initialized with sequence_bias parameter""" generation_config = GenerationConfig() sequence_bias = [[[45, 67], -0.6], [[89], 1.2]] generation_config.sequence...
Tests that GenerationConfig is serialized and SequenceBiasLogitsProcessor is initialized with sequence_bias parameter
test_serialize_generation_sequence_bias
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
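The sequence-bias entries have the shape `[[token_ids], bias]`. A pure-Python sketch of the rule SequenceBiasLogitsProcessor applies (the real processor works on batched torch tensors; token ids here are made up for a tiny vocab):

```python
def apply_sequence_bias(logits, prev_tokens, sequence_bias):
    """Sketch: when the generated tokens end with all but the last token of a
    biased sequence, add the bias to the logit of that final token.
    Single-token sequences always apply."""
    out = list(logits)
    for seq, bias in sequence_bias:
        prefix, last = seq[:-1], seq[-1]
        if not prefix or tuple(prev_tokens[-len(prefix):]) == tuple(prefix):
            out[last] += bias
    return out


biased = apply_sequence_bias(
    [0.0, 0.0, 0.0],
    prev_tokens=[2, 1],
    sequence_bias=[([1, 0], -0.6), ([2], 1.2)],
)
# [1, 0] matches: prev ends with 1, so token 0 gets -0.6; [2] always applies.
```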
def test_serialize_generation_min_length_eos_token(self): """Tests that GenerationConfig is serialized and MinLengthLogitsProcessor is initialized with min_length and eos_token_id""" eos_token_id = 0 min_length = 10 generation_config = GenerationConfig(min_length=min_length, eos_token_i...
Tests that GenerationConfig is serialized and MinLengthLogitsProcessor is initialized with min_length and eos_token_id
test_serialize_generation_min_length_eos_token
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
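The min-length rule forbids EOS until enough tokens have been generated. A minimal sketch of what MinLengthLogitsProcessor does per step, on a plain list instead of a batched tensor:

```python
import math


def mask_eos_below_min_length(logits, cur_length, min_length, eos_token_id):
    """Sketch of MinLengthLogitsProcessor: forbid EOS until min_length."""
    out = list(logits)
    if cur_length < min_length:
        out[eos_token_id] = -math.inf
    return out


logits = [0.5, 1.0, 0.2]
short = mask_eos_below_min_length(logits, cur_length=3, min_length=10, eos_token_id=0)
long_enough = mask_eos_below_min_length(logits, cur_length=12, min_length=10, eos_token_id=0)
```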
def test_serialize_generation_min_new_tokens(self): """Tests that GenerationConfig is serialized and MinNewTokensLengthLogitsProcessor is initialized with min_new_tokens""" eos_token_id = 0 min_new_tokens = 5 prompt_length_to_skip = 2 generation_config = GenerationConfig(min_new...
Tests that GenerationConfig is serialized and MinNewTokensLengthLogitsProcessor is initialized with min_new_tokens
test_serialize_generation_min_new_tokens
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_temperature(self): """Tests that GenerationConfig is serialized and TemperatureLogitsWarper is initialized with temperature""" temperature = 2.0 generation_config = GenerationConfig(temperature=temperature, do_sample=True) with tempfile.TemporaryDirectory("...
Tests that GenerationConfig is serialized and TemperatureLogitsWarper is initialized with temperature
test_serialize_generation_temperature
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
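Temperature warping is a single division. A sketch of the TemperatureLogitsWarper behavior the test serializes:

```python
def apply_temperature(logits, temperature):
    """Sketch of TemperatureLogitsWarper: divide every logit by the temperature.
    temperature > 1 flattens the softmax distribution; temperature < 1 sharpens it."""
    return [x / temperature for x in logits]


warped = apply_temperature([2.0, 4.0], 2.0)
```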
def test_serialize_generation_repetition_penalty(self): """Tests that GenerationConfig is serialized and RepetitionPenaltyLogitsProcessor is initialized with repetition_penalty""" penalty = 2.0 generation_config = GenerationConfig(repetition_penalty=penalty) with tempfile.TemporaryDirec...
Tests that GenerationConfig is serialized and RepetitionPenaltyLogitsProcessor is initialized with repetition_penalty
test_serialize_generation_repetition_penalty
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
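RepetitionPenaltyLogitsProcessor discourages tokens that already appear in the context. A sketch of the penalty rule on plain floats (assumption: positive logits are divided by the penalty, negative ones multiplied, which is the standard CTRL-style rule):

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty):
    """Sketch of the repetition-penalty rule: make already-seen tokens less likely."""
    out = list(logits)
    for t in set(seen_token_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out


res = apply_repetition_penalty([2.0, -1.0, 3.0], seen_token_ids=[0, 1], penalty=2.0)
# token 0: 2.0 -> 1.0; token 1: -1.0 -> -2.0; token 2 untouched
```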
def test_serialize_generation_encoder_repetition_penalty(self): """Tests that GenerationConfig is serialized and EncoderRepetitionPenaltyLogitsProcessor is initialized with penalty and input_ids""" penalty = 2.0 input_ids = torch.tensor([[0, 1], [5, 0]], device=torch_device, dtype=torch.long) ...
Tests that GenerationConfig is serialized and EncoderRepetitionPenaltyLogitsProcessor is initialized with penalty and input_ids
test_serialize_generation_encoder_repetition_penalty
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_top_p(self): """Tests that GenerationConfig is serialized and TopPLogitsWarper is initialized with top_p""" top_p = 0.8 generation_config = GenerationConfig(top_p=top_p, do_sample=True) with tempfile.TemporaryDirectory("test-generation-config") as tmp_dir: ...
Tests that GenerationConfig is serialized and TopPLogitsWarper is initialized with top_p
test_serialize_generation_top_p
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
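Nucleus (top-p) filtering keeps the smallest high-probability set of tokens. A stdlib-only approximation of TopPLogitsWarper (the real warper also honors `min_tokens_to_keep` and operates on batched tensors):

```python
import math


def top_p_filter(logits, top_p):
    """Sketch of top-p: keep the smallest set of tokens whose cumulative
    softmax probability reaches top_p; mask the rest to -inf."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in order:
        keep.add(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return [x if i in keep else -math.inf for i, x in enumerate(logits)]


filtered = top_p_filter([3.0, 1.0, 0.0], top_p=0.8)
# softmax of logit 3.0 is ~0.84 > 0.8, so only that token survives
```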
def test_serialize_generation_top_k(self): """Tests that GenerationConfig is serialized and TopKLogitsWarper is initialized with top_k""" top_k = 2 generation_config = GenerationConfig(top_k=top_k, do_sample=True) with tempfile.TemporaryDirectory("test-generation-config") as tmp_dir: ...
Tests that GenerationConfig is serialized and TopKLogitsWarper is initialized with top_k
test_serialize_generation_top_k
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
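Top-k filtering is simpler still. A sketch of the TopKLogitsWarper behavior (ties at the threshold may keep more than k tokens in this simplified version):

```python
import math


def top_k_filter(logits, top_k):
    """Sketch of top-k: mask everything outside the k largest logits to -inf."""
    threshold = sorted(logits, reverse=True)[top_k - 1]
    return [x if x >= threshold else -math.inf for x in logits]


filtered = top_k_filter([0.1, 2.0, 1.5, -0.3], top_k=2)  # keeps 2.0 and 1.5
```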
def test_serialize_generation_min_p(self): """Tests that GenerationConfig is serialized and MinPLogitsWarper is initialized with min_p""" min_p = 0.8 generation_config = GenerationConfig(min_p=min_p, do_sample=True) with tempfile.TemporaryDirectory("test-generation-config") as tmp_dir: ...
Tests that GenerationConfig is serialized and MinPLogitsWarper is initialized with min_p
test_serialize_generation_min_p
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_typical_p(self): """Tests that GenerationConfig is serialized and TypicalLogitsWarper is initialized with mass""" mass = 0.8 generation_config = GenerationConfig(typical_p=mass, do_sample=True) with tempfile.TemporaryDirectory("test-generation-config") as t...
Tests that GenerationConfig is serialized and TypicalLogitsWarper is initialized with mass
test_serialize_generation_typical_p
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_epsilon_cutoff(self): """Tests that GenerationConfig is serialized and EpsilonLogitsWarper is initialized with epsilon""" epsilon = 0.8 generation_config = GenerationConfig(epsilon_cutoff=epsilon, do_sample=True) with tempfile.TemporaryDirectory("test-gener...
Tests that GenerationConfig is serialized and EpsilonLogitsWarper is initialized with epsilon
test_serialize_generation_epsilon_cutoff
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_eta_cutoff(self): """Tests that GenerationConfig is serialized and EtaLogitsWarper is initialized with epsilon""" epsilon = 0.8 generation_config = GenerationConfig(eta_cutoff=epsilon, do_sample=True) with tempfile.TemporaryDirectory("test-generation-config...
Tests that GenerationConfig is serialized and EtaLogitsWarper is initialized with epsilon
test_serialize_generation_eta_cutoff
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_ngram_size(self): """Tests that GenerationConfig is serialized and NoRepeatNGramLogitsProcessor is initialized with ngram_size""" ngram_size = 2 generation_config = GenerationConfig(no_repeat_ngram_size=ngram_size, do_sample=True) with tempfile.TemporaryDir...
Tests that GenerationConfig is serialized and NoRepeatNGramLogitsProcessor is initialized with ngram_size
test_serialize_generation_ngram_size
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
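The no-repeat-ngram rule bans any token that would complete an n-gram already present in the generated sequence. A sketch of the banned-token computation for a single sequence:

```python
def banned_ngram_tokens(prev_tokens, ngram_size):
    """Sketch of the no-repeat-ngram rule: return the tokens that, if
    generated next, would repeat an n-gram already in prev_tokens."""
    if len(prev_tokens) < ngram_size - 1:
        return set()
    generated = {}
    for i in range(len(prev_tokens) - ngram_size + 1):
        prefix = tuple(prev_tokens[i:i + ngram_size - 1])
        generated.setdefault(prefix, set()).add(prev_tokens[i + ngram_size - 1])
    cur_prefix = tuple(prev_tokens[len(prev_tokens) - ngram_size + 1:])
    return generated.get(cur_prefix, set())


banned = banned_ngram_tokens([1, 2, 3, 1, 2], ngram_size=3)
# the trigram (1, 2, 3) already occurred, so 3 is banned after ... 1, 2
```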
def test_serialize_generation_encoder_ngram_size(self): """Tests that GenerationConfig is serialized and EncoderNoRepeatNGramLogitsProcessor is initialized with ngram_size""" ngram_size = 2 input_ids = torch.tensor([[0, 1], [5, 0]], device=torch_device, dtype=torch.long) generation_conf...
Tests that GenerationConfig is serialized and EncoderNoRepeatNGramLogitsProcessor is initialized with ngram_size
test_serialize_generation_encoder_ngram_size
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_bad_words_ids(self): """Tests that GenerationConfig is serialized and NoBadWordsLogitsProcessor is initialized with bad_words_ids""" bad_word_tokens = [[1], [4], [1, 0], [0, 1, 2], [1, 3, 1, 3]] generation_config = GenerationConfig(bad_words_ids=bad_word_tokens) ...
Tests that GenerationConfig is serialized and NoBadWordsLogitsProcessor is initialized with bad_words_ids
test_serialize_generation_bad_words_ids
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_num_beams(self): """Tests that GenerationConfig is serialized and PrefixConstrainedLogitsProcessor is initialized with num_beams""" num_beams = 1 def prefix_allowed_tokens_fn(batch_id, inputs_ids): return [[0, 1], [2, 3]][batch_id] generation_c...
Tests that GenerationConfig is serialized and PrefixConstrainedLogitsProcessor is initialized with num_beams
test_serialize_generation_num_beams
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_diversity_penalty_and_num_bean_groups(self): """Tests that GenerationConfig is serialized and HammingDiversityLogitsProcessor is initialized with diversity_penalty_and_num_bean_groups""" num_beams = 2 num_beam_groups = 2 diversity_penalty = 1.0 gene...
Tests that GenerationConfig is serialized and HammingDiversityLogitsProcessor is initialized with diversity_penalty, num_beams and num_beam_groups
test_serialize_generation_diversity_penalty_and_num_bean_groups
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_bos_token_id(self): """Tests that GenerationConfig is serialized and ForcedBOSTokenLogitsProcessor is initialized with bos_token_id""" bos_token_id = 0 generation_config = GenerationConfig(bos_token_id=bos_token_id) with tempfile.TemporaryDirectory("test-ge...
Tests that GenerationConfig is serialized and ForcedBOSTokenLogitsProcessor is initialized with bos_token_id
test_serialize_generation_bos_token_id
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_eos_token_id(self): """Tests that GenerationConfig is serialized and ForcedEOSTokenLogitsProcessor is initialized with eos_token_id""" eos_token_id = 0 max_length = 5 generation_config = GenerationConfig(eos_token_id=eos_token_id) with tempfile.Temp...
Tests that GenerationConfig is serialized and ForcedEOSTokenLogitsProcessor is initialized with eos_token_id
test_serialize_generation_eos_token_id
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
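ForcedEOSTokenLogitsProcessor makes EOS the only possible token at the final position. A sketch on a plain list (the finite logit value assigned to EOS is arbitrary here; only its relative ordering matters):

```python
import math


def force_eos_at_max_length(logits, cur_length, max_length, eos_token_id):
    """Sketch: at the last generation step, only the EOS token keeps a
    finite logit, so it is guaranteed to be selected."""
    if cur_length < max_length - 1:
        return list(logits)
    return [0.0 if i == eos_token_id else -math.inf for i in range(len(logits))]


mid = force_eos_at_max_length([1.0, 2.0], cur_length=2, max_length=5, eos_token_id=0)
last = force_eos_at_max_length([1.0, 2.0], cur_length=4, max_length=5, eos_token_id=0)
```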
def test_serialize_generation_exponential_decay_length_penalty(self): """Tests that GenerationConfig is serialized and ExponentialDecayLengthPenalty is initialized with regulation_start and regulation_factor""" eos_token_id = 0 penalty_start = 5 penalty_factor = 1.1 input_ids_seq...
Tests that GenerationConfig is serialized and ExponentialDecayLengthPenalty is initialized with regulation_start and regulation_factor
test_serialize_generation_exponential_decay_length_penalty
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
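ExponentialDecayLengthPenalty boosts the EOS logit more and more once generation passes `regulation_start`. A rough sketch of that growth (an approximation; the exact formula in transformers may differ in how the boost is scaled):

```python
def eos_decay_boost(eos_logit, cur_length, regulation_start, regulation_factor):
    """Sketch: past regulation_start, the EOS logit gets an exponentially
    growing boost of abs(eos_logit) * (factor**steps - 1) to encourage stopping."""
    if cur_length <= regulation_start:
        return eos_logit
    steps = cur_length - regulation_start
    return eos_logit + abs(eos_logit) * (regulation_factor ** steps - 1)


before = eos_decay_boost(1.0, cur_length=3, regulation_start=5, regulation_factor=1.1)
after = eos_decay_boost(1.0, cur_length=7, regulation_start=5, regulation_factor=1.1)
```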
def test_serialize_generation_begin_suppress_tokens(self): """Tests that GenerationConfig is serialized and SuppressTokensAtBeginLogitsProcessor is initialized with begin_suppress_token and begin_index""" begin_suppress_tokens = [220, 50256] begin_index = 0 generation_config = Generatio...
Tests that GenerationConfig is serialized and SuppressTokensAtBeginLogitsProcessor is initialized with begin_suppress_tokens and begin_index
test_serialize_generation_begin_suppress_tokens
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_suppress_tokens(self): """Tests that GenerationConfig is serialized and SuppressTokensLogitsProcessor is initialized with suppress_token""" suppress_tokens = [220, 50256] generation_config = GenerationConfig(suppress_tokens=suppress_tokens) with tempfile.Te...
Tests that GenerationConfig is serialized and SuppressTokensLogitsProcessor is initialized with suppress_tokens
test_serialize_generation_suppress_tokens
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_guidance_scale(self): """Tests that GenerationConfig is serialized and ClassifierFreeGuidanceLogitsProcessor is initialized with guidance_scale""" guidance_scale = 2.0 generation_config = GenerationConfig(guidance_scale=guidance_scale) with tempfile.Temporar...
Tests that GenerationConfig is serialized and ClassifierFreeGuidanceLogitsProcessor is initialized with guidance_scale
test_serialize_generation_guidance_scale
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_guidance_scale_unbatched(self): """Tests that GenerationConfig is serialized and UnbatchedClassifierFreeGuidanceLogitsProcessor is initialized with guidance_scale""" guidance_scale = 2.0 input_ids = torch.LongTensor([[0]]) generation_config = GenerationCon...
Tests that GenerationConfig is serialized and UnbatchedClassifierFreeGuidanceLogitsProcessor is initialized with guidance_scale
test_serialize_generation_guidance_scale_unbatched
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def test_serialize_generation_watermarking_config(self): """Tests that GenerationConfig is serialized and WatermarkLogitsProcessor is initialized with WatermarkingConfig parameters""" vocab_size = 20 bias = 2.0 greenlist_ratio = 0.5 hashing_key = 10 seeding_scheme = "lef...
Tests that GenerationConfig is serialized and WatermarkLogitsProcessor is initialized with WatermarkingConfig parameters
test_serialize_generation_watermarking_config
python
huggingface/transformers
tests/generation/test_configuration_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_configuration_utils.py
Apache-2.0
def manage_process_group(func: Callable[..., Any]) -> Callable[..., Any]: """Manage the creation and destruction of the distributed process group for the wrapped function.""" def wrapped(*args: Any, **kwargs: Any) -> Any: device_count = backend_device_count(torch_device) torch.d...
Manage the creation and destruction of the distributed process group for the wrapped function.
manage_process_group
python
huggingface/transformers
tests/generation/test_fsdp.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_fsdp.py
Apache-2.0
def test_synthidtext_watermark_processor_distributional_convergence(self, vocab_size, logits_type): """Check if watermarked distribution converges to unwatermarked logits distribution.""" batch_size = 1500 num_keys = 1000 updated_softmaxes = 0 np.random.seed(0) torch.man...
Check if watermarked distribution converges to unwatermarked logits distribution.
test_synthidtext_watermark_processor_distributional_convergence
python
huggingface/transformers
tests/generation/test_logits_process.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_logits_process.py
Apache-2.0
def test_synthidtext_watermark_processor_bias_test(self, vocab_size, ngram_len, num_layers, atol): """Test SynthID watermarking bias matches theoretical value.""" batch_size = 20000 generator = torch.Generator(device=torch_device).manual_seed(0) np.random.seed(0) keys = [np.rand...
Test SynthID watermarking bias matches theoretical value.
test_synthidtext_watermark_processor_bias_test
python
huggingface/transformers
tests/generation/test_logits_process.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_logits_process.py
Apache-2.0
def test_stop_string_criteria_vocab_size_mismatch(self): """Test that StopStringCriteria handles tokens above len(tokenizer) correctly.""" tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") # Create input_ids with tokens above len(tokenizer) input_ids = torch.tensor([[le...
Test that StopStringCriteria handles tokens above len(tokenizer) correctly.
test_stop_string_criteria_vocab_size_mismatch
python
huggingface/transformers
tests/generation/test_stopping_criteria.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_stopping_criteria.py
Apache-2.0
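The core idea behind StopStringCriteria is that a stop string can span token boundaries, so the check must look at the decoded text rather than individual tokens. A deliberately naive sketch (the real criteria precompute token-level tables for efficiency instead of re-decoding):

```python
def matches_stop_string(token_strings, stop_string):
    """Sketch of the stop-string check: generation stops once the decoded
    text ends with the stop string, even when it spans token boundaries."""
    return "".join(token_strings).endswith(stop_string)


hit = matches_stop_string(["Hel", "lo wor", "ld"], "world")
miss = matches_stop_string(["Hel", "lo"], "world")
```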
def _check_similar_generate_outputs(self, output_1, output_2, atol=1e-5, rtol=1e-5): """ Checks whether a pair of generate outputs are similar. Two `generate` call outputs are considered similar in the following situations: 1. The sequences are the same 2. The sequences are diffe...
Checks whether a pair of generate outputs are similar. Two `generate` call outputs are considered similar in the following situations: 1. The sequences are the same 2. The sequences are different, but the scores up to (and including) the first mismatch are nearly identical
_check_similar_generate_outputs
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_past_key_values_format(self, custom_all_cache_shapes=None): """ Test that the KV cache is formatted correctly. Exceptions need to explicitly overwrite this test, or pass the expected cache shapes. Having a standard KV cache format is important for a consistent API (and for advan...
Test that the KV cache is formatted correctly. Exceptions need to explicitly overwrite this test, or pass the expected cache shapes. Having a standard KV cache format is important for a consistent API (and for advanced generation methods).
test_past_key_values_format
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_from_inputs_embeds(self, _, num_beams): """Tests that we can generate from `inputs_embeds` instead of `input_ids` in LLMs, VLMs, etc""" # When supported, tests that the decoder model can generate from `inputs_embeds` instead of `input_ids` # if fails, you should probably update...
Tests that we can generate from `inputs_embeds` instead of `input_ids` in LLMs, VLMs, etc
test_generate_from_inputs_embeds
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_from_inputs_embeds_with_static_cache(self): """ Test that StaticCache can generate from inputs_embeds and calculates max_cache_length correctly in `generate()`. We force the model to not stop generation until max-length is reached to verify that the cache length is inde...
Test that StaticCache can generate from inputs_embeds and calculates max_cache_length correctly in `generate()`. We force the model to not stop generation until max-length is reached to verify that the cache length is indeed set correctly and we don't run out of index when slicing the cache. ...
test_generate_from_inputs_embeds_with_static_cache
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_continue_from_inputs_embeds(self): """Tests that we can continue generation from `inputs_embeds` and past key values returned from a previous `generate` call.""" for model_class in self.all_generative_model_classes: if any(model_name in model_class.__name__.lower() for mode...
Tests that we can continue generation from `inputs_embeds` and past key values returned from a previous `generate` call.
test_generate_continue_from_inputs_embeds
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_with_static_cache(self): """ Tests that generating with static cache give almost same results as with dynamic cache, and the output cache has the expected shapes """ set_model_tester_for_less_flaky_test(self) for model_class in self.all_generative_model_...
Tests that generating with a static cache gives almost the same results as with a dynamic cache, and that the output cache has the expected shapes
test_generate_with_static_cache
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_compilation_all_outputs(self): """ Tests that all optional outputs are behaving as expected when compilation is triggered. In essence, it's the same as `test_greedy_generate_dict_outputs`, but with automatic compilation triggered. """ for model_class in self.all...
Tests that all optional outputs are behaving as expected when compilation is triggered. In essence, it's the same as `test_greedy_generate_dict_outputs`, but with automatic compilation triggered.
test_generate_compilation_all_outputs
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_inherits_generation_mixin(self): """ Tests that the model class directly inherits `GenerationMixin`, as opposed to relying on `PreTrainedModel` to inherit it. """ for model_class in self.all_generative_model_classes: self.assertTrue("GenerationMixin" in str(m...
Tests that the model class directly inherits `GenerationMixin`, as opposed to relying on `PreTrainedModel` to inherit it.
test_inherits_generation_mixin
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def _test_attention_implementation(self, attn_implementation): """ Compares the output of generate with the eager attention implementation against other implementations. NOTE: despite the test logic being the same, different implementations actually need different decorators, hence this ...
Compares the output of generate with the eager attention implementation against other implementations. NOTE: despite the test logic being the same, different implementations actually need different decorators, hence this separate function.
_test_attention_implementation
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_speculative_sampling_target_distribution(self): """ Asserts that the target distribution is preserved. Should help with catching issues like #32867. """ # assume vocab size 10, input length 5 + 3 generated candidates candidate_input_ids = torch.tensor([[8, 0, 3, ...
Asserts that the target distribution is preserved. Should help with catching issues like #32867.
test_speculative_sampling_target_distribution
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
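The distribution-preservation property that test asserts comes from the speculative-sampling acceptance rule: a drafted token with draft probability q and target probability p is accepted with probability min(1, p/q), and on rejection a token is resampled from the normalized residual max(0, p - q). A sketch of both pieces:

```python
def speculative_accept_prob(p_target, q_draft):
    """Acceptance probability for one drafted token: min(1, p / q)."""
    return min(1.0, p_target / q_draft)


def rejection_resample_dist(p_target, q_draft):
    """On rejection, resample from the normalized residual max(0, p - q);
    together with the acceptance rule this preserves the target distribution."""
    residual = [max(0.0, p - q) for p, q in zip(p_target, q_draft)]
    z = sum(residual)
    return [r / z for r in residual]


accept = speculative_accept_prob(0.5, 0.25)              # clipped to 1.0
resample = rejection_resample_dist([0.6, 0.4], [0.9, 0.1])
```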
def test_generate_with_static_cache_multi_accelerator(self): """ Tests if the static cache has been set correctly and if generate works correctly when we are using multi-acceleratorss. """ # need to split manually as auto doesn't work well with unbalanced model device_map = {"mod...
Tests that the static cache has been set correctly and that generate works correctly when we are using multi-accelerators.
test_generate_with_static_cache_multi_accelerator
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_multi_accelerator_causal_mask(self): """ Tests that cache position device doesn't clash with causal mask device when we are using multi-accelerators. In real life happens only when multimodal encoder size is big, so `embed_tokens` gets allocated to the next device. The ...
Tests that the cache position device doesn't clash with the causal mask device when we are using multi-accelerators. In real life this happens only when the multimodal encoder is big, so `embed_tokens` gets allocated to the next device. The error will be triggered whenever a batched input is used, so that ...
test_generate_multi_accelerator_causal_mask
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_init_static_cache_multi_accelerator(self): """ Tests if the static cache has been set correctly when we initialize it manually in a multi-accelerator setup. """ # need to split manually as auto doesn't work well with unbalanced model device_map = {"model.embed_tokens": 0...
Tests if the static cache has been set correctly when we initialize it manually in a multi-accelerator setup.
test_init_static_cache_multi_accelerator
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0