Dataset schema (column statistics from the viewer):
- code: string, length 66 to 870k
- docstring: string, length 19 to 26.7k
- func_name: string, length 1 to 138
- language: string, 1 distinct value
- repo: string, length 7 to 68
- path: string, length 5 to 324
- url: string, length 46 to 389
- license: string, 7 distinct values
def test_prepare_inputs_for_generation_decoder_llm(self): """Tests GenerationMixin.prepare_inputs_for_generation against expected usage with decoder-only LLMs.""" config = AutoConfig.from_pretrained("hf-internal-testing/tiny-random-LlamaForCausalLM") model = AutoModelForCausalLM.from_pretrained...
Tests GenerationMixin.prepare_inputs_for_generation against expected usage with decoder-only LLMs.
test_prepare_inputs_for_generation_decoder_llm
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_prepare_inputs_for_generation_encoder_decoder_llm(self): """ Same as `test_prepare_inputs_for_generation_decoder_llm` but for encoder-decoder models. Main difference: we should look for `decoder_input_ids`, instead of `input_ids`. """ model = AutoModelForSeq2SeqLM.from_p...
Same as `test_prepare_inputs_for_generation_decoder_llm` but for encoder-decoder models. Main difference: we should look for `decoder_input_ids`, instead of `input_ids`.
test_prepare_inputs_for_generation_encoder_decoder_llm
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_compile_fullgraph_tiny(self): """ Tests that we can call end-to-end generation with a tiny model (i.e. doesn't crash) NOTE: this test is quite slow (~20s on a consumer desktop), but it is important that we keep it as part of the non-slow tests to prevent regressions! ...
Tests that we can call end-to-end generation with a tiny model (i.e. doesn't crash) NOTE: this test is quite slow (~20s on a consumer desktop), but it is important that we keep it as part of the non-slow tests to prevent regressions!
test_generate_compile_fullgraph_tiny
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_assisted_generation_early_exit(self): """ Tests that assisted generation with early exit works as expected. Under the hood, this has complex cache manipulation, which will cause the test to fail if something goes wrong there. """ expected_output = "Alice and Bob are play...
Tests that assisted generation with early exit works as expected. Under the hood, this has complex cache manipulation, which will cause the test to fail if something goes wrong there.
test_assisted_generation_early_exit
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_beam_search_advanced_stopping_criteria(self): """ Tests that beam search works with a stopping criteria that is not max length or EOS token. Prior to the beam search vectorization PR (#35802), beam search was not accepting other stopping criteria. Test inspired by the original i...
Tests that beam search works with a stopping criterion that is not max length or the EOS token. Prior to the beam search vectorization PR (#35802), beam search did not accept other stopping criteria. Test inspired by the original issue (#34843).
test_beam_search_advanced_stopping_criteria
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_validate_generation_inputs(self): """Tests validation of inputs to `generate`""" tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-t5") model = AutoModelForSeq2SeqLM.from_pretrained("hf-internal-testing/tiny-random-t5") encoder_input_str = "Hello world"...
Tests validation of inputs to `generate`
test_validate_generation_inputs
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_custom_logits_processor(self): """Tests that custom logits processors can be used in `generate`, and that redundant arguments are caught.""" bart_tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-bart") article = """Justin Timberlake and Jessica Biel, welcome to...
Tests that custom logits processors can be used in `generate`, and that redundant arguments are caught.
test_custom_logits_processor
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_transition_scores_greedy_search(self): """Test that `compute_transition_scores` is working as expected with greedy search""" articles = ["Justin Timberlake", "Michael Phelps"] tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2", padding_side="left") tokenizer.pad_t...
Test that `compute_transition_scores` is working as expected with greedy search
test_transition_scores_greedy_search
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_transition_scores_greedy_search_normalized(self): """ Test that `compute_transition_scores` is working as expected with greedy search, with `normalize_logits=True` """ articles = ["Justin Timberlake", "Michael Phelps"] tokenizer = AutoTokenizer.from_pretrained("distilber...
Test that `compute_transition_scores` is working as expected with greedy search, with `normalize_logits=True`
test_transition_scores_greedy_search_normalized
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_transition_scores_beam_search_encoder_decoder(self): """ Test that `compute_transition_scores` is working as expected with beam search and encoder-decoder models """ articles = [ "Justin Timberlake and Jessica Biel, welcome to parenthood.", "Michael Phelp...
Test that `compute_transition_scores` is working as expected with beam search and encoder-decoder models
test_transition_scores_beam_search_encoder_decoder
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_transition_scores_beam_search_encoder_decoder_with_eos(self): """ Test that `compute_transition_scores` is working as expected with beam search and encoder-decoder models, when an EOS token is defined """ articles = [ "Justin Timberlake and Jessica Biel, welc...
Test that `compute_transition_scores` is working as expected with beam search and encoder-decoder models, when an EOS token is defined
test_transition_scores_beam_search_encoder_decoder_with_eos
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_transition_scores_beam_search_decoder_only(self): """ Test that `compute_transition_scores` is working as expected with beam search and decoder-only models """ articles = [ "Justin Timberlake", "Michael Phelps", ] tokenizer = AutoTokenizer...
Test that `compute_transition_scores` is working as expected with beam search and decoder-only models
test_transition_scores_beam_search_decoder_only
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_transition_scores_early_stopping(self): """ Test that `compute_transition_scores` is working as expected with beam search and early stopping This is an aggressive test that makes sure that `beam_search's` transition scores are computed correctly for varying `num_return_sequence...
Test that `compute_transition_scores` is working as expected with beam search and early stopping. This is an aggressive test that makes sure that `beam_search`'s transition scores are computed correctly for varying `num_return_sequences`, `num_beams` and `batch_size > 1` 2 x input_ids f...
test_transition_scores_early_stopping
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_encoder_decoder_generate_attention_mask(self): """ Test that `generate` automagically creates the correct `attention_mask` for encoder-decoder models (which has a different keyword) """ articles = ["Timberlake", "Jessica Biel, welcome to parenthood among other things"] ...
Test that `generate` automagically creates the correct `attention_mask` for encoder-decoder models (which has a different keyword)
test_encoder_decoder_generate_attention_mask
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_input_ids_as_kwarg(self): """Test that `input_ids` work equally as a positional and keyword argument in decoder-only models""" article = "I need input_ids to generate" tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gpt2") model = AutoModelFor...
Test that `input_ids` work equally as a positional and keyword argument in decoder-only models
test_generate_input_ids_as_kwarg
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_input_ids_as_encoder_kwarg(self): """Test that `input_ids` work equally as a positional and keyword argument in encoder-decoder models""" article = "Justin Timberlake and Jessica Biel, welcome to parenthood." tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-r...
Test that `input_ids` work equally as a positional and keyword argument in encoder-decoder models
test_generate_input_ids_as_encoder_kwarg
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_inputs_and_encoder_kwargs(self): """ Test that an exception is thrown if the main tensor (`input_ids` in LLMs) is passed as both a positional and keyword argument """ article = "I need input_ids to generate" tokenizer = AutoTokenizer.from_pretrained("hf-...
Test that an exception is thrown if the main tensor (`input_ids` in LLMs) is passed as both a positional and keyword argument
test_generate_inputs_and_encoder_kwargs
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_too_many_encoder_kwargs(self): """Test that passing redundant inputs results in an exception (`input_ids` and `inputs_embeds` in LLMs)""" article = "I need input_ids to generate" tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-bart") model = A...
Test that passing redundant inputs results in an exception (`input_ids` and `inputs_embeds` in LLMs)
test_generate_too_many_encoder_kwargs
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_input_features_as_encoder_kwarg(self): """Test that non-`input_ids` main model inputs are correctly handled as positional arguments""" input_features = floats_tensor((3, 80, 60)) model = AutoModelForSpeechSeq2Seq.from_pretrained( "hf-internal-testing/tiny-random-Whi...
Test that non-`input_ids` main model inputs are correctly handled as positional arguments
test_generate_input_features_as_encoder_kwarg
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_encoder_outputs_attention_mask(self): """Test that `generate` can handle attention masks when the encoder outputs are passed""" input_features = floats_tensor((3, 80, 60)) attention_mask = torch.randint(0, 2, input_features.shape).to(torch_device) model = AutoModelForSp...
Test that `generate` can handle attention masks when the encoder outputs are passed
test_generate_encoder_outputs_attention_mask
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_eos_token_id_int_and_list_greedy_search(self): """Test that `generate` can handle multiple EOS tokens""" generation_kwargs = { "do_sample": False, "num_beams": 1, } expectation = 13 tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/t...
Test that `generate` can handle multiple EOS tokens
test_eos_token_id_int_and_list_greedy_search
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_generate_vision2text_conditioning(self): """Test that `decoder_input_ids` can be used to condition the generation in vision-to-text models""" pixel_values = floats_tensor((2, 3, 30, 30)) conditioning_input = torch.tensor([[10], [10]]) # this should be the 2nd output token, after the BO...
Test that `decoder_input_ids` can be used to condition the generation in vision-to-text models
test_generate_vision2text_conditioning
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_cache_device_map_with_vision_layer_device_map(self): """ Test that the cache device map is correctly set when the vision layer has a device map. Regression test for #36942 """ # gemma 3 uses hybrid cache, which can be compiled -> needs a device map at allocation time ...
Test that the cache device map is correctly set when the vision layer has a device map. Regression test for #36942
test_cache_device_map_with_vision_layer_device_map
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_cpu_offload_doesnt_compile(self): """Test that CPU offload doesn't trigger compilation""" tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-MistralForCausalLM") tokenized_inputs = tokenizer(["Hello world"], return_tensors="pt") generate_kwargs = {"max_ne...
Test that CPU offload doesn't trigger compilation
test_cpu_offload_doesnt_compile
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_custom_generate_from_argument_in_generate(self): """Tests that the `custom_generate` argument is used when passed to `generate`""" model = AutoModelForCausalLM.from_pretrained( "hf-internal-testing/tiny-random-MistralForCausalLM", device_map="auto" ) tokenizer = Auto...
Tests that the `custom_generate` argument is used when passed to `generate`
test_custom_generate_from_argument_in_generate
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_custom_generate_from_model_repo_with_custom_generate_code(self): """ Tests that models from model repos containing custom generation code override `generate` with the custom code """ model = AutoModelForCausalLM.from_pretrained( "transformers-community/custom_generat...
Tests that models from model repos containing custom generation code override `generate` with the custom code
test_custom_generate_from_model_repo_with_custom_generate_code
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_custom_generate_bad_requirements(self): """Tests that we check the `requirements.txt` file from custom generation repos""" model = AutoModelForCausalLM.from_pretrained( "hf-internal-testing/tiny-random-MistralForCausalLM", device_map="auto" ) tokenizer = AutoTokenize...
Tests that we check the `requirements.txt` file from custom generation repos
test_custom_generate_bad_requirements
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def test_custom_generate_requires_trust_remote_code(self): """Tests that `trust_remote_code` is required when using `custom_generate`""" # Case 1: A model from a repo containing custom generation code must be loaded with `trust_remote_code` with self.assertRaises(ValueError): AutoMod...
Tests that `trust_remote_code` is required when using `custom_generate`
test_custom_generate_requires_trust_remote_code
python
huggingface/transformers
tests/generation/test_utils.py
https://github.com/huggingface/transformers/blob/master/tests/generation/test_utils.py
Apache-2.0
def prepare_image_inputs( self, batch_size=None, min_resolution=None, max_resolution=None, num_channels=None, num_images=None, size_divisor=None, equal_resolution=False, numpify=False, torchify=False, ): """This function prepare...
This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True, or a list of PyTorch tensors if one specifies torchify=True. One can specify whether the images are of the same resolution or not.
prepare_image_inputs
python
huggingface/transformers
tests/models/aria/test_image_processing_aria.py
https://github.com/huggingface/transformers/blob/master/tests/models/aria/test_image_processing_aria.py
Apache-2.0
def test_special_mm_token_truncation(self): """Tests that special vision tokens do not get truncated when `truncation=True` is set.""" processor = self.get_processor() input_str = self.prepare_text_inputs(batch_size=2, modality="image") image_input = self.prepare_image_inputs(batch_siz...
Tests that special vision tokens do not get truncated when `truncation=True` is set.
test_special_mm_token_truncation
python
huggingface/transformers
tests/models/aria/test_processor_aria.py
https://github.com/huggingface/transformers/blob/master/tests/models/aria/test_processor_aria.py
Apache-2.0
def test_custom_model_patched_generation_inheritance(self): """ Tests that our inheritance patching for generate-compatible models works as expected. Without this feature, old Hub models lose the ability to call `generate`. """ model = AutoModelForCausalLM.from_pretrained( ...
Tests that our inheritance patching for generate-compatible models works as expected. Without this feature, old Hub models lose the ability to call `generate`.
test_custom_model_patched_generation_inheritance
python
huggingface/transformers
tests/models/auto/test_modeling_auto.py
https://github.com/huggingface/transformers/blob/master/tests/models/auto/test_modeling_auto.py
Apache-2.0
def _update_layer_configs(self): """Configures hidden layers and attn layer indices if they are not set.""" # Fix for SDPA tests, force at least 4 layers if self.num_hidden_layers < 4: self.num_hidden_layers = 4 if self.attn_layer_indices is None: d = [x for x in...
Configures hidden layers and attn layer indices if they are not set.
_update_layer_configs
python
huggingface/transformers
tests/models/bamba/test_modeling_bamba.py
https://github.com/huggingface/transformers/blob/master/tests/models/bamba/test_modeling_bamba.py
Apache-2.0
def test_initialization(self): r""" Overriding the test_initialization test as the A_log and D params of the Bamba mixer are initialized differently """ config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() configs_no_init = _config_zero_init(config) ...
Overriding the test_initialization test as the A_log and D params of the Bamba mixer are initialized differently
test_initialization
python
huggingface/transformers
tests/models/bamba/test_modeling_bamba.py
https://github.com/huggingface/transformers/blob/master/tests/models/bamba/test_modeling_bamba.py
Apache-2.0
def test_attention_outputs(self): r""" Overriding the test_attention_outputs test as the Bamba model outputs attention only for its attention layers """ config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = get...
Overriding the test_attention_outputs test as the Bamba model outputs attention only for its attention layers
test_attention_outputs
python
huggingface/transformers
tests/models/bamba/test_modeling_bamba.py
https://github.com/huggingface/transformers/blob/master/tests/models/bamba/test_modeling_bamba.py
Apache-2.0
def assert_tensors_close(a, b, atol=1e-12, prefix=""): """If tensors have different shapes or values, or a and b are not both tensors, raise a nice AssertionError.""" if a is None and b is None: return True try: if torch.allclose(a, b, atol=atol): return True rais...
If tensors have different shapes or values, or a and b are not both tensors, raise a nice AssertionError.
assert_tensors_close
python
huggingface/transformers
tests/models/bart/test_modeling_bart.py
https://github.com/huggingface/transformers/blob/master/tests/models/bart/test_modeling_bart.py
Apache-2.0
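The `assert_tensors_close` record above is truncated; its docstring describes a tolerance-based comparison that raises an informative AssertionError. As an illustrative pure-Python analogue (the repo's helper operates on torch tensors via `torch.allclose`; the name and exact messages here are hypothetical stand-ins), the same idea on plain lists of floats looks like:

```python
import math

def assert_values_close(a, b, atol=1e-12, prefix=""):
    """Raise an informative AssertionError when a and b differ beyond atol.

    Pure-Python analogue of the torch-based helper in the record above:
    accepts None pairs, checks lengths, and reports how many elements differ.
    """
    if a is None and b is None:
        return True
    if a is None or b is None or len(a) != len(b):
        raise AssertionError(f"{prefix}shape/None mismatch: {a!r} vs {b!r}")
    bad = sum(1 for x, y in zip(a, b) if not math.isclose(x, y, abs_tol=atol))
    if bad:
        pct = 100.0 * bad / len(a)
        raise AssertionError(
            f"{prefix}{bad} / {len(a)} ({pct:.1f}%) elements differ by more than {atol}"
        )
    return True

print(assert_values_close([1.0, 2.0], [1.0, 2.0]))  # True
```

The `prefix` argument mirrors the original helper's pattern of tagging the error message with the name of the tensor being compared.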
def test_block_sparse_attention_probs(self): """ Asserting if outputted attention matrix is similar to hard coded attention matrix """ if not self.test_attention_probs: self.skipTest("test_attention_probs is set to False") model = BigBirdModel.from_pretrained( ...
Asserting if outputted attention matrix is similar to hard coded attention matrix
test_block_sparse_attention_probs
python
huggingface/transformers
tests/models/big_bird/test_modeling_big_bird.py
https://github.com/huggingface/transformers/blob/master/tests/models/big_bird/test_modeling_big_bird.py
Apache-2.0
def test_special_tokens(self): """ To reproduce: $ wget https://github.com/google-research/bigbird/blob/master/bigbird/vocab/gpt2.model?raw=true $ mv gpt2.model?raw=true gpt2.model ``` import tensorflow_text as tft import tensorflow as tf vocab_model_fi...
To reproduce:
$ wget https://github.com/google-research/bigbird/blob/master/bigbird/vocab/gpt2.model?raw=true
$ mv gpt2.model?raw=true gpt2.model
import tensorflow_text as tft
import tensorflow as tf
vocab_model_file = "./gpt2.model"
tokenizer = tf...
test_special_tokens
python
huggingface/transformers
tests/models/big_bird/test_tokenization_big_bird.py
https://github.com/huggingface/transformers/blob/master/tests/models/big_bird/test_tokenization_big_bird.py
Apache-2.0
def test_full_tokenizer(self): """Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt""" tokenizer = BioGptTokenizer(self.vocab_file, self.merges_file) text = "lower" bpe_tokens = ["low", "er</w>"] tokens = tokenizer.tokenize(text) self.assertL...
Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt
test_full_tokenizer
python
huggingface/transformers
tests/models/biogpt/test_tokenization_biogpt.py
https://github.com/huggingface/transformers/blob/master/tests/models/biogpt/test_tokenization_biogpt.py
Apache-2.0
def assert_tensors_close(a, b, atol=1e-12, prefix=""): """If tensors have different shapes or values, or a and b are not both tensors, raise a nice AssertionError.""" if a is None and b is None: return True try: if torch.allclose(a, b, atol=atol): return True rais...
If tensors have different shapes or values, or a and b are not both tensors, raise a nice AssertionError.
assert_tensors_close
python
huggingface/transformers
tests/models/blenderbot/test_modeling_blenderbot.py
https://github.com/huggingface/transformers/blob/master/tests/models/blenderbot/test_modeling_blenderbot.py
Apache-2.0
def assert_tensors_close(a, b, atol=1e-12, prefix=""): """If tensors have different shapes or values, or a and b are not both tensors, raise a nice AssertionError.""" if a is None and b is None: return True try: if torch.allclose(a, b, atol=atol): return True rais...
If tensors have different shapes or values, or a and b are not both tensors, raise a nice AssertionError.
assert_tensors_close
python
huggingface/transformers
tests/models/blenderbot_small/test_modeling_blenderbot_small.py
https://github.com/huggingface/transformers/blob/master/tests/models/blenderbot_small/test_modeling_blenderbot_small.py
Apache-2.0
def test_class_name_consistency(self): """ Tests that all VQA models have a class name that ends with "ForQuestionAnswering" """ for model_class in self.all_model_classes: model = model_class(self.model_tester.get_config()) self.assertTrue( model._...
Tests that all VQA models have a class name that ends with "ForQuestionAnswering"
test_class_name_consistency
python
huggingface/transformers
tests/models/blip/test_modeling_blip.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip/test_modeling_blip.py
Apache-2.0
def test_training(self): """ Tests that all VQA models can be trained on a single batch """ for model_class in self.all_model_classes: model = model_class(self.model_tester.get_config()).to(torch_device) model.train() loss = model(**self.model_tester.p...
Tests that all VQA models can be trained on a single batch
test_training
python
huggingface/transformers
tests/models/blip/test_modeling_blip.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip/test_modeling_blip.py
Apache-2.0
def test_forward_signature(self): """ Test if the forward function has the expected arguments. """ for model_class in self.all_model_classes: model = model_class(self.model_tester.get_config()) signature = inspect.signature(model.forward) # signature.p...
Test if the forward function has the expected arguments.
test_forward_signature
python
huggingface/transformers
tests/models/blip/test_modeling_blip.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip/test_modeling_blip.py
Apache-2.0
def test_class_name_consistency(self): """ Tests that all VQA models have a class name that ends with "ForQuestionAnswering" """ for model_class in self.all_model_classes: model = model_class(self.model_tester.get_config()) self.assertTrue( model._...
Tests that all VQA models have a class name that ends with "ForQuestionAnswering"
test_class_name_consistency
python
huggingface/transformers
tests/models/blip/test_modeling_tf_blip.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip/test_modeling_tf_blip.py
Apache-2.0
def test_training(self): """ Tests that all VQA models can be trained on a single batch """ for model_class in self.all_model_classes: model = model_class(self.model_tester.get_config()) loss = model(**self.model_tester.prepare_config_and_inputs_for_common()[1], t...
Tests that all VQA models can be trained on a single batch
test_training
python
huggingface/transformers
tests/models/blip/test_modeling_tf_blip.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip/test_modeling_tf_blip.py
Apache-2.0
def test_sdpa_can_dispatch_composite_models(self): """ Tests if composite models dispatch correctly on SDPA/eager when requested to do so when loading the model. This test only looks at layer names, as SDPA layers are usually called "SDPAAttention". In contrast to the above test, this on...
Tests if composite models dispatch correctly on SDPA/eager when requested to do so when loading the model. This test only looks at layer names, as SDPA layers are usually called "SDPAAttention". In contrast to the above test, this one checks if "config._attn_implementation" is a dict after ...
test_sdpa_can_dispatch_composite_models
python
huggingface/transformers
tests/models/blip_2/test_modeling_blip_2.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip_2/test_modeling_blip_2.py
Apache-2.0
def test_sdpa_can_dispatch_composite_models(self): """ Tests if composite models dispatch correctly on SDPA/eager when requested to do so when loading the model. This test only looks at layer names, as SDPA layers are usually called "SDPAAttention". In contrast to the above test, this on...
Tests if composite models dispatch correctly on SDPA/eager when requested to do so when loading the model. This test only looks at layer names, as SDPA layers are usually called "SDPAAttention". In contrast to the above test, this one checks if "config._attn_implementation" is a dict after ...
test_sdpa_can_dispatch_composite_models
python
huggingface/transformers
tests/models/blip_2/test_modeling_blip_2.py
https://github.com/huggingface/transformers/blob/master/tests/models/blip_2/test_modeling_blip_2.py
Apache-2.0
def test_encodings_from_sample_data(self): """ Assert that the created tokens are the same as the hard-coded ones """ tokenizer = self.get_rust_tokenizer() INPUT_SENTENCES = ["The quick brown fox</s>", "jumps over the lazy dog</s>"] TARGET_TOKENS = [[2175, 23714, 73173...
Assert that the created tokens are the same as the hard-coded ones
test_encodings_from_sample_data
python
huggingface/transformers
tests/models/bloom/test_tokenization_bloom.py
https://github.com/huggingface/transformers/blob/master/tests/models/bloom/test_tokenization_bloom.py
Apache-2.0
def test_encodings_from_xnli_dataset(self): """ Tests the tokenizer downloaded from here: - https://huggingface.co/bigscience/tokenizer/ """ tokenizer = self.get_rust_tokenizer() ds = load_dataset("facebook/xnli", "all_languages", split="test", streaming=True) ...
Tests the tokenizer downloaded from here: - https://huggingface.co/bigscience/tokenizer/
test_encodings_from_xnli_dataset
python
huggingface/transformers
tests/models/bloom/test_tokenization_bloom.py
https://github.com/huggingface/transformers/blob/master/tests/models/bloom/test_tokenization_bloom.py
Apache-2.0
def test_mismatching_num_image_tokens(self): """ Tests that VLMs throw an error with an explicit message saying what is wrong when the number of images doesn't match the number of image tokens in the text. Also we need to test multi-image cases when one prompt has multiple image tokens. """...
Tests that VLMs throw an error with an explicit message saying what is wrong when the number of images doesn't match the number of image tokens in the text. Also we need to test multi-image cases when one prompt has multiple image tokens.
test_mismatching_num_image_tokens
python
huggingface/transformers
tests/models/chameleon/test_modeling_chameleon.py
https://github.com/huggingface/transformers/blob/master/tests/models/chameleon/test_modeling_chameleon.py
Apache-2.0
def test_special_mm_token_truncation(self): """Tests that special vision tokens do not get truncated when `truncation=True` is set.""" processor = self.get_processor() input_str = self.prepare_text_inputs(batch_size=2, modality="image") image_input = self.prepare_image_inputs(batch_siz...
Tests that special vision tokens do not get truncated when `truncation=True` is set.
test_special_mm_token_truncation
python
huggingface/transformers
tests/models/chameleon/test_processor_chameleon.py
https://github.com/huggingface/transformers/blob/master/tests/models/chameleon/test_processor_chameleon.py
Apache-2.0
def test_encodings_from_sample_data(self): """ Assert that the created tokens are the same as the hard-coded ones """ tokenizer = self.get_rust_tokenizer() INPUT_SENTENCES = ["The quick brown fox<|END_OF_TURN_TOKEN|>", "jumps over the lazy dog<|END_OF_TURN_TOKEN|>"] TA...
Assert that the created tokens are the same as the hard-coded ones
test_encodings_from_sample_data
python
huggingface/transformers
tests/models/cohere/test_tokenization_cohere.py
https://github.com/huggingface/transformers/blob/master/tests/models/cohere/test_tokenization_cohere.py
Apache-2.0
def test_generation_beyond_sliding_window(self, attn_implementation: str): """Test that we can correctly generate beyond the sliding window. This is non-trivial as we need to correctly slice the attention mask in all cases (because we use a HybridCache). Outputs for every attention function sho...
Test that we can correctly generate beyond the sliding window. This is non trivial as we need to correctly slice the attention mask in all cases (because we use a HybridCache). Outputs for every attention functions should be coherent and identical.
test_generation_beyond_sliding_window
python
huggingface/transformers
tests/models/cohere2/test_modeling_cohere2.py
https://github.com/huggingface/transformers/blob/master/tests/models/cohere2/test_modeling_cohere2.py
Apache-2.0
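The sliding-window tests above (Cohere2, and Gemma2 later in this section) exercise the masking rule that position i may only attend to causal positions within the last `window` tokens. A minimal sketch of that rule, using nested lists instead of the 4D float masks the real models build:

```python
def sliding_window_causal_mask(seq_len, window):
    # Position i may attend to position j iff j is causal (j <= i) and
    # falls within the window: i - window < j <= i. Correctly slicing
    # this mask during generation is what the tests above verify.
    return [
        [1 if i - window < j <= i else 0 for j in range(seq_len)]
        for i in range(seq_len)
    ]

mask = sliding_window_causal_mask(seq_len=5, window=3)
for row in mask:
    print(row)
```

With `window=3`, the last row is `[0, 0, 1, 1, 1]`: the final position sees only itself and the two preceding tokens, which is why generating beyond the window requires re-slicing the mask as the cache rolls.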
def test_model_integration_test(self): """ Test if the model is able to retrieve the correct pages for a small and easy dataset. """ model = ColPaliForRetrieval.from_pretrained( self.model_name, torch_dtype=torch.bfloat16, device_map=torch_device, ...
Test if the model is able to retrieve the correct pages for a small and easy dataset.
test_model_integration_test
python
huggingface/transformers
tests/models/colpali/test_modeling_colpali.py
https://github.com/huggingface/transformers/blob/master/tests/models/colpali/test_modeling_colpali.py
Apache-2.0
def test_image_processor_defaults_preserved_by_image_kwargs(self): """ We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the or...
We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the original pixel_values are in [0, 255], this is a good indicator that the rescale...
test_image_processor_defaults_preserved_by_image_kwargs
python
huggingface/transformers
tests/models/colpali/test_processing_colpali.py
https://github.com/huggingface/transformers/blob/master/tests/models/colpali/test_processing_colpali.py
Apache-2.0
def test_model_integration_test(self): """ Test if the model is able to retrieve the correct pages for a small and easy dataset. """ model = ColQwen2ForRetrieval.from_pretrained( self.model_name, torch_dtype=torch.float16, load_in_8bit=True, )....
Test if the model is able to retrieve the correct pages for a small and easy dataset.
test_model_integration_test
python
huggingface/transformers
tests/models/colqwen2/test_modeling_colqwen2.py
https://github.com/huggingface/transformers/blob/master/tests/models/colqwen2/test_modeling_colqwen2.py
Apache-2.0
def test_image_processor_defaults_preserved_by_image_kwargs(self): """ We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the or...
We use do_rescale=True, rescale_factor=-1 to ensure that image_processor kwargs are preserved in the processor. We then check that the mean of the pixel_values is less than or equal to 0 after processing. Since the original pixel_values are in [0, 255], this is a good indicator that the rescale...
test_image_processor_defaults_preserved_by_image_kwargs
python
huggingface/transformers
tests/models/colqwen2/test_processing_colqwen2.py
https://github.com/huggingface/transformers/blob/master/tests/models/colqwen2/test_processing_colqwen2.py
Apache-2.0
def get_expected_values(self, image_inputs, batched=False): """ This function computes the expected height and width when providing images to ConditionalDetrImageProcessor, assuming do_resize is set to True with a scalar size. """ if not batched: image = image_inputs[...
This function computes the expected height and width when providing images to ConditionalDetrImageProcessor, assuming do_resize is set to True with a scalar size.
get_expected_values
python
huggingface/transformers
tests/models/conditional_detr/test_image_processing_conditional_detr.py
https://github.com/huggingface/transformers/blob/master/tests/models/conditional_detr/test_image_processing_conditional_detr.py
Apache-2.0
def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): """ Overrides [ModelTesterMixin._prepare_for_class] to handle third input_ids dimension. """ inputs_dict = copy.deepcopy(inputs_dict) if return_labels: inputs_dict["labels"] = torch.zeros( ...
Overrides [ModelTesterMixin._prepare_for_class] to handle third input_ids dimension.
_prepare_for_class
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def _get_logits_processor_kwargs(self, do_sample=False, config=None): """ Overrides [GenerationTesterMixin._get_logits_processor_kwargs] to restrict to top_k, top_p, and temperature sampling. """ logits_processor_kwargs = {} if do_sample: logits_processor_kwargs.updat...
Overrides [GenerationTesterMixin._get_logits_processor_kwargs] to restrict to top_k, top_p, and temperature sampling.
_get_logits_processor_kwargs
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_initialization(self): """ Overrides [ModelTesterMixin.test_initialization] because of specificities of the Mimi codec model. See https://github.com/huggingface/transformers/blob/1077603410cd73ba71d64a522033574d66d64b55/tests/models/mimi/test_modeling_mimi.py#L384-L397 """ co...
Overrides [ModelTesterMixin.test_initialization] because of specificities of the Mimi codec model. See https://github.com/huggingface/transformers/blob/1077603410cd73ba71d64a522033574d66d64b55/tests/models/mimi/test_modeling_mimi.py#L384-L397
test_initialization
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def _check_similar_generate_outputs(self, output_1, output_2, atol=1e-5, rtol=1e-5): """ Overrides [GenerationTesterMixin._check_similar_generate_outputs] to handle third input_ids dimension. Here we only look at the first codebook (index 0 on last dimension of the generated sequences) since retu...
Overrides [GenerationTesterMixin._check_similar_generate_outputs] to handle third input_ids dimension. Here we only look at the first codebook (index 0 on last dimension of the generated sequences) since returned scores are for this token.
_check_similar_generate_outputs
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_tied_weights_keys(self): """ Overrides [ModelTesterMixin.test_tied_weights_keys] to not test for text config (not applicable to CSM). """ config, _ = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model_tie...
Overrides [ModelTesterMixin.test_tied_weights_keys] to not test for text config (not applicable to CSM).
test_tied_weights_keys
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def _get_custom_4d_mask_test_data(self): """ Overrides [ModelTesterMixin._get_custom_4d_mask_test_data] to handle third input_ids dimension. """ # Sequence in which all but the last token is the same input_ids = torch.tensor([[0, 1, 2, 3], [0, 1, 2, 4], [0, 1, 2, 5]], device=torc...
Overrides [ModelTesterMixin._get_custom_4d_mask_test_data] to handle third input_ids dimension.
_get_custom_4d_mask_test_data
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_1b_model_integration_generate(self): """ Tests the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/d25577a357ddcf8f4a8cd0d00baca551, which is a script that infers the original model. """...
Tests the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/d25577a357ddcf8f4a8cd0d00baca551, which is a script that infers the original model.
test_1b_model_integration_generate
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_1b_model_integration_generate_no_audio(self): """ Tests the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/aed822f765e928b9612e01b0d8836d69, which is a script that infers the original model. ...
Tests the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/aed822f765e928b9612e01b0d8836d69, which is a script that infers the original model.
test_1b_model_integration_generate_no_audio
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_1b_model_integration_generate_multiple_audio(self): """ Test the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/0c94de002e1325abb61d32217f74c0f8, which is a script that infers the original mode...
Test the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/0c94de002e1325abb61d32217f74c0f8, which is a script that infers the original model.
test_1b_model_integration_generate_multiple_audio
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_1b_model_integration_generate_batched(self): """ Test the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/bcc532b53161bc31da3d66cb07ae193f, which is a script that infers the original model. ...
Test the generated tokens match the ones from the original model implementation. Such tokens are to be retrieved using https://gist.github.com/eustlb/bcc532b53161bc31da3d66cb07ae193f, which is a script that infers the original model.
test_1b_model_integration_generate_batched
python
huggingface/transformers
tests/models/csm/test_modeling_csm.py
https://github.com/huggingface/transformers/blob/master/tests/models/csm/test_modeling_csm.py
Apache-2.0
def test_batching_equivalence(self): """ Tests that the model supports batching and that the output is nearly the same for the same input in different batch sizes. (Why "nearly the same" not "exactly the same"? Batching uses different matmul shapes, which often leads to diffe...
Tests that the model supports batching and that the output is nearly the same for the same input in different batch sizes. (Why "nearly the same" not "exactly the same"? Batching uses different matmul shapes, which often leads to different results: https://github.com/huggingface/tra...
test_batching_equivalence
python
huggingface/transformers
tests/models/dab_detr/test_modeling_dab_detr.py
https://github.com/huggingface/transformers/blob/master/tests/models/dab_detr/test_modeling_dab_detr.py
Apache-2.0
def test_create_position_ids_respects_padding_index(self): """This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is Data2VecT...
This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is Data2VecTextForTextEmbeddings.padding_idx + 1
test_create_position_ids_respects_padding_index
python
huggingface/transformers
tests/models/data2vec/test_modeling_data2vec_text.py
https://github.com/huggingface/transformers/blob/master/tests/models/data2vec/test_modeling_data2vec_text.py
Apache-2.0
def test_create_position_ids_from_inputs_embeds(self): """This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is Data2VecTextF...
This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is Data2VecTextForTextEmbeddings.padding_idx + 1
test_create_position_ids_from_inputs_embeds
python
huggingface/transformers
tests/models/data2vec/test_modeling_data2vec_text.py
https://github.com/huggingface/transformers/blob/master/tests/models/data2vec/test_modeling_data2vec_text.py
Apache-2.0
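Both Data2VecText regression tests above pin the same numbering rule: non-padding tokens get positions counted up from `padding_idx + 1`, while padding slots keep `padding_idx` itself. A minimal pure-Python sketch of that rule (the real implementation does this on torch tensors via a masked `cumsum`):

```python
def create_position_ids_from_input_ids(input_ids, padding_idx):
    # Non-pad tokens get positions padding_idx + 1, padding_idx + 2, ...
    # Pad tokens keep padding_idx itself. This mirrors the fairseq-style
    # numbering that issue #1761 regressed on.
    position_ids = []
    for row in input_ids:
        count = 0
        row_ids = []
        for tok in row:
            if tok == padding_idx:
                row_ids.append(padding_idx)
            else:
                count += 1
                row_ids.append(padding_idx + count)
        position_ids.append(row_ids)
    return position_ids

# With padding_idx=1, real tokens start at position 2; pads stay at 1.
print(create_position_ids_from_input_ids([[0, 5, 7, 1, 1]], padding_idx=1))
# → [[2, 3, 4, 1, 1]]
```

The second test in the pair checks the equivalent path for `inputs_embeds`, where no token ids exist and all positions are assumed non-padding.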
def test_autoregressive_prediction(self): """ An integration test that performs autoregressive prediction of state, action and return from a sequence of states, actions and returns. Test is performed over two timesteps. """ NUM_STEPS = 2 # number of steps of autoregressive pred...
An integration test that performs autoregressive prediction of state, action and return from a sequence of states, actions and returns. Test is performed over two timesteps.
test_autoregressive_prediction
python
huggingface/transformers
tests/models/decision_transformer/test_modeling_decision_transformer.py
https://github.com/huggingface/transformers/blob/master/tests/models/decision_transformer/test_modeling_decision_transformer.py
Apache-2.0
def test_past_key_values_format(self): """ Overwriting to pass the expected cache shapes (Deepseek-V3 uses MLA so the cache shapes are non-standard) """ config, inputs = self.model_tester.prepare_config_and_inputs_for_common() batch_size, seq_length = inputs["input_ids"].shape ...
Overwriting to pass the expected cache shapes (Deepseek-V3 uses MLA so the cache shapes are non-standard)
test_past_key_values_format
python
huggingface/transformers
tests/models/deepseek_v3/test_modeling_deepseek_v3.py
https://github.com/huggingface/transformers/blob/master/tests/models/deepseek_v3/test_modeling_deepseek_v3.py
Apache-2.0
def test_eager_matches_sdpa_generate(self): """ Overwriting the common test as the test is flaky on tiny models """ max_new_tokens = 30 tokenizer = AutoTokenizer.from_pretrained("bzantium/tiny-deepseek-v3") model_sdpa = DeepseekV3ForCausalLM.from_pretrained( ...
Overwriting the common test as the test is flaky on tiny models
test_eager_matches_sdpa_generate
python
huggingface/transformers
tests/models/deepseek_v3/test_modeling_deepseek_v3.py
https://github.com/huggingface/transformers/blob/master/tests/models/deepseek_v3/test_modeling_deepseek_v3.py
Apache-2.0
def test_flex_attention_with_grads(self): """ Overwriting as the namings/functionality on the attention part are different; for now it's more of a unique model. Original issue is also due to dimensionalities, here specifically due to dims not being a multiple of 2. """ for model_...
Overwriting as the namings/functionality on the attention part are different; for now it's more of a unique model. Original issue is also due to dimensionalities, here specifically due to dims not being a multiple of 2.
test_flex_attention_with_grads
python
huggingface/transformers
tests/models/deepseek_v3/test_modeling_deepseek_v3.py
https://github.com/huggingface/transformers/blob/master/tests/models/deepseek_v3/test_modeling_deepseek_v3.py
Apache-2.0
def get_expected_values(self, image_inputs, batched=False): """ This function computes the expected height and width when providing images to DeformableDetrImageProcessor, assuming do_resize is set to True with a scalar size. """ if not batched: image = image_inputs[0...
This function computes the expected height and width when providing images to DeformableDetrImageProcessor, assuming do_resize is set to True with a scalar size.
get_expected_values
python
huggingface/transformers
tests/models/deformable_detr/test_image_processing_deformable_detr.py
https://github.com/huggingface/transformers/blob/master/tests/models/deformable_detr/test_image_processing_deformable_detr.py
Apache-2.0
def test_inference_fp16(self): r""" A small test to make sure that inference works in half precision without any problem. """ model = DeiTModel.from_pretrained( "facebook/deit-base-distilled-patch16-224", torch_dtype=torch.float16, device_map="auto" ) image_pro...
A small test to make sure that inference works in half precision without any problem.
test_inference_fp16
python
huggingface/transformers
tests/models/deit/test_modeling_deit.py
https://github.com/huggingface/transformers/blob/master/tests/models/deit/test_modeling_deit.py
Apache-2.0
def get_expected_values(self, image_inputs, batched=False): """ This function computes the expected height and width when providing images to DetrImageProcessor, assuming do_resize is set to True with a scalar size. """ if not batched: image = image_inputs[0] ...
This function computes the expected height and width when providing images to DetrImageProcessor, assuming do_resize is set to True with a scalar size.
get_expected_values
python
huggingface/transformers
tests/models/detr/test_image_processing_detr.py
https://github.com/huggingface/transformers/blob/master/tests/models/detr/test_image_processing_detr.py
Apache-2.0
def test_flash_attn_2_generate_padding_right(self): """ Overwriting the common test as the test is flaky on tiny models """ model = DiffLlamaForCausalLM.from_pretrained( "kajuma/DiffLlama-0.3B-handcut", load_in_4bit=True, device_map={"": 0}, ) ...
Overwriting the common test as the test is flaky on tiny models
test_flash_attn_2_generate_padding_right
python
huggingface/transformers
tests/models/diffllama/test_modeling_diffllama.py
https://github.com/huggingface/transformers/blob/master/tests/models/diffllama/test_modeling_diffllama.py
Apache-2.0
def test_use_flash_attention_2_true(self): """ NOTE: this is the only test testing that the legacy `use_flash_attention=2` argument still works as intended. """ config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classe...
NOTE: this is the only test testing that the legacy `use_flash_attention=2` argument still works as intended.
test_use_flash_attention_2_true
python
huggingface/transformers
tests/models/diffllama/test_modeling_diffllama.py
https://github.com/huggingface/transformers/blob/master/tests/models/diffllama/test_modeling_diffllama.py
Apache-2.0
def test_eager_matches_sdpa_generate(self): """ Overwriting the common test as the test is flaky on tiny models """ max_new_tokens = 30 tokenizer = AutoTokenizer.from_pretrained("kajuma/DiffLlama-0.3B-handcut") model_sdpa = DiffLlamaForCausalLM.from_pretrained( ...
Overwriting the common test as the test is flaky on tiny models
test_eager_matches_sdpa_generate
python
huggingface/transformers
tests/models/diffllama/test_modeling_diffllama.py
https://github.com/huggingface/transformers/blob/master/tests/models/diffllama/test_modeling_diffllama.py
Apache-2.0
def test_stacked_causal_mask_static_cache(self): """same as above but with StaticCache""" ( input_ids, position_ids, input_ids_shared_prefix, mask_shared_prefix, position_ids_shared_prefix, ) = self.get_test_data() # regular ba...
same as above but with StaticCache
test_stacked_causal_mask_static_cache
python
huggingface/transformers
tests/models/diffllama/test_modeling_diffllama.py
https://github.com/huggingface/transformers/blob/master/tests/models/diffllama/test_modeling_diffllama.py
Apache-2.0
def test_create_position_ids_respects_padding_index(self): """This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is EsmEmbedd...
This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is EsmEmbeddings.padding_idx + 1
test_create_position_ids_respects_padding_index
python
huggingface/transformers
tests/models/esm/test_modeling_esm.py
https://github.com/huggingface/transformers/blob/master/tests/models/esm/test_modeling_esm.py
Apache-2.0
def test_create_position_ids_from_inputs_embeds(self): """This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is EsmEmbeddings...
This is a regression test for https://github.com/huggingface/transformers/issues/1761 The position ids should be masked with the embedding object's padding index. Therefore, the first available non-padding position index is EsmEmbeddings.padding_idx + 1
test_create_position_ids_from_inputs_embeds
python
huggingface/transformers
tests/models/esm/test_modeling_esm.py
https://github.com/huggingface/transformers/blob/master/tests/models/esm/test_modeling_esm.py
Apache-2.0
def test_attention_outputs(self): r""" Overriding the test_attention_outputs test as the FalconH1 model outputs attention only for its attention layers """ config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True seq_len = ...
Overriding the test_attention_outputs test as the FalconH1 model outputs attention only for its attention layers
test_attention_outputs
python
huggingface/transformers
tests/models/falcon_h1/test_modeling_falcon_h1.py
https://github.com/huggingface/transformers/blob/master/tests/models/falcon_h1/test_modeling_falcon_h1.py
Apache-2.0
def assertInterval(self, member, container, msg=None): r""" Simple utility function to check if a member is inside an interval. """ if isinstance(member, torch.Tensor): max_value, min_value = member.max().item(), member.min().item() elif isinstance(member, list) or is...
Simple utility function to check if a member is inside an interval.
assertInterval
python
huggingface/transformers
tests/models/falcon_mamba/test_modeling_falcon_mamba.py
https://github.com/huggingface/transformers/blob/master/tests/models/falcon_mamba/test_modeling_falcon_mamba.py
Apache-2.0
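The FalconMamba `assertInterval` helper above reduces a tensor to its min/max and checks that both fall inside a target interval. The shape of that check can be sketched with plain lists (a simplification; the real helper also accepts torch tensors and reports a message on failure):

```python
def in_interval(values, low, high):
    # True when every element lies inside [low, high]. Like the helper
    # above, we only need the extremes: if min and max are inside the
    # interval, every element is.
    vmin, vmax = min(values), max(values)
    return low <= vmin and vmax <= high

print(in_interval([0.1, 0.4, 0.9], 0.0, 1.0))  # → True
print(in_interval([0.1, 1.4], 0.0, 1.0))       # → False
```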
def test_attention_outputs(self): """ Custom `test_attention_outputs` since FastSpeech2Conformer does not output cross attentions, has variable decoder attention shape, and uniquely outputs energy, pitch, and durations. """ config, inputs_dict = self.model_tester.prepare_config_a...
Custom `test_attention_outputs` since FastSpeech2Conformer does not output cross attentions, has variable decoder attention shape, and uniquely outputs energy, pitch, and durations.
test_attention_outputs
python
huggingface/transformers
tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
https://github.com/huggingface/transformers/blob/master/tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
Apache-2.0
def test_attention_outputs(self): """ Custom `test_attention_outputs` since FastSpeech2Conformer does not output cross attentions, has variable decoder attention shape, and uniquely outputs energy, pitch, and durations. """ config, inputs_dict = self.model_tester.prepare_config_a...
Custom `test_attention_outputs` since FastSpeech2Conformer does not output cross attentions, has variable decoder attention shape, and uniquely outputs energy, pitch, and durations.
test_attention_outputs
python
huggingface/transformers
tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
https://github.com/huggingface/transformers/blob/master/tests/models/fastspeech2_conformer/test_modeling_fastspeech2_conformer.py
Apache-2.0
def test_inference_for_masked_lm(self): """ For comparison: 1. Modify the pre-training model `__call__` to skip computing metrics and return masked_lm_output like so: ``` ... sequence_output, pooled_output = EncoderModel( self.config, random_seed=s...
For comparison: 1. Modify the pre-training model `__call__` to skip computing metrics and return masked_lm_output like so: ``` ... sequence_output, pooled_output = EncoderModel( self.config, random_seed=self.random_seed, name="encoder")( i...
test_inference_for_masked_lm
python
huggingface/transformers
tests/models/fnet/test_modeling_fnet.py
https://github.com/huggingface/transformers/blob/master/tests/models/fnet/test_modeling_fnet.py
Apache-2.0
def _assert_tensors_equal(a, b, atol=1e-12, prefix=""): """If tensors not close, or a and b aren't both tensors, raise a nice Assertion error.""" if a is None and b is None: return True try: if torch.allclose(a, b, atol=atol): return True raise except Exception: ...
If tensors not close, or a and b aren't both tensors, raise a nice Assertion error.
_assert_tensors_equal
python
huggingface/transformers
tests/models/fsmt/test_modeling_fsmt.py
https://github.com/huggingface/transformers/blob/master/tests/models/fsmt/test_modeling_fsmt.py
Apache-2.0
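The fsmt `_assert_tensors_equal` helper above wraps a tolerant comparison so failures carry a readable message instead of a bare `raise`. The same contract can be sketched over flat lists of floats (the real helper operates on torch tensors via `torch.allclose`):

```python
def assert_close(a, b, atol=1e-12, prefix=""):
    # Elementwise absolute-tolerance comparison that raises an informative
    # AssertionError on mismatch -- same role as _assert_tensors_equal,
    # minus the tensor machinery.
    if len(a) != len(b):
        raise AssertionError(f"{prefix}length mismatch: {len(a)} vs {len(b)}")
    for i, (x, y) in enumerate(zip(a, b)):
        if abs(x - y) > atol:
            raise AssertionError(f"{prefix}element {i}: {x} != {y} (atol={atol})")

assert_close([1.0, 2.0], [1.0, 2.0], atol=1e-12)  # passes silently
```

The `prefix` argument mirrors the original helper's, letting the caller tag which tensor pair failed in a multi-comparison test.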
def test_online_tokenizer_config(self): """this just tests that the online tokenizer files get correctly fetched and loaded via its tokenizer_config.json, and it's not slow, so it's run by normal CI """ tokenizer = FSMTTokenizer.from_pretrained(FSMT_TINY2) self.assertListEqual([tok...
this just tests that the online tokenizer files get correctly fetched and loaded via its tokenizer_config.json, and it's not slow, so it's run by normal CI
test_online_tokenizer_config
python
huggingface/transformers
tests/models/fsmt/test_tokenization_fsmt.py
https://github.com/huggingface/transformers/blob/master/tests/models/fsmt/test_tokenization_fsmt.py
Apache-2.0
def test_full_tokenizer(self): """Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt""" tokenizer = FSMTTokenizer(self.langs, self.src_vocab_file, self.tgt_vocab_file, self.merges_file) text = "lower" bpe_tokens = ["low", "er</w>"] tokens = tokenizer....
Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt
test_full_tokenizer
python
huggingface/transformers
tests/models/fsmt/test_tokenization_fsmt.py
https://github.com/huggingface/transformers/blob/master/tests/models/fsmt/test_tokenization_fsmt.py
Apache-2.0
def test_fuyu_processing(self): """ Test to ensure that the standard processing on a gold example matches adept's code. """ # fmt: off EXPECTED_IMAGE_PATCH_INPUTS = torch.Tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, -1, 22, 23, 24, 25, 26...
Test to ensure that the standard processing on a gold example matches adept's code.
test_fuyu_processing
python
huggingface/transformers
tests/models/fuyu/test_processor_fuyu.py
https://github.com/huggingface/transformers/blob/master/tests/models/fuyu/test_processor_fuyu.py
Apache-2.0
def test_fuyu_processing_no_image(self): """ Test to check processor works with just text input """ processor_outputs = self.get_processor()(text=self.text_prompt) tokenizer_outputs = self.get_tokenizer()(self.text_prompt) self.assertEqual(processor_outputs["input_ids"], ...
Test to check processor works with just text input
test_fuyu_processing_no_image
python
huggingface/transformers
tests/models/fuyu/test_processor_fuyu.py
https://github.com/huggingface/transformers/blob/master/tests/models/fuyu/test_processor_fuyu.py
Apache-2.0
def test_fuyu_processing_no_text(self): """ Test to check processor works with just image input """ # fmt: off EXPECTED_IMAGE_PATCH_INPUTS = torch.Tensor([ [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...
Test to check processor works with just image input
test_fuyu_processing_no_text
python
huggingface/transformers
tests/models/fuyu/test_processor_fuyu.py
https://github.com/huggingface/transformers/blob/master/tests/models/fuyu/test_processor_fuyu.py
Apache-2.0
def test_fuyu_processing_multiple_image_sample(self): """ Test to check processor works with multiple image inputs for a single text input """ # fmt: off SINGLE_IMAGE_PATCH_INPUTS = torch.Tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, -1, 2...
Test to check processor works with multiple image inputs for a single text input
test_fuyu_processing_multiple_image_sample
python
huggingface/transformers
tests/models/fuyu/test_processor_fuyu.py
https://github.com/huggingface/transformers/blob/master/tests/models/fuyu/test_processor_fuyu.py
Apache-2.0
def setUp(self): """ Adding a mix of present and absent images. """ self.image_input = torch.randn([1, 1, 3, 64, 64]) self.image_present = torch.tensor([[1]]) self.image_unpadded_h = torch.tensor([[45]]) # Adjusted for subsequence of 1 self.image_unpadded_w = to...
Adding a mix of present and absent images.
setUp
python
huggingface/transformers
tests/models/fuyu/test_processor_fuyu.py
https://github.com/huggingface/transformers/blob/master/tests/models/fuyu/test_processor_fuyu.py
Apache-2.0
def test_generation_beyond_sliding_window(self, attn_implementation: str): """Test that we can correctly generate beyond the sliding window. This is non-trivial as we need to correctly slice the attention mask in all cases (because we use a HybridCache). Outputs for every attention function sho...
Test that we can correctly generate beyond the sliding window. This is non-trivial as we need to correctly slice the attention mask in all cases (because we use a HybridCache). Outputs for every attention function should be coherent and identical.
test_generation_beyond_sliding_window
python
huggingface/transformers
tests/models/gemma2/test_modeling_gemma2.py
https://github.com/huggingface/transformers/blob/master/tests/models/gemma2/test_modeling_gemma2.py
Apache-2.0
def test_pan_and_scan(self): """ Enables the Pan and Scan path by choosing the correct input image resolution. If you are changing image processor attributes for PaS, please update this test. """ for image_processing_class in self.image_processor_list: # Initialize image_...
Enables the Pan and Scan path by choosing the correct input image resolution. If you are changing image processor attributes for PaS, please update this test.
test_pan_and_scan
python
huggingface/transformers
tests/models/gemma3/test_image_processing_gemma3.py
https://github.com/huggingface/transformers/blob/master/tests/models/gemma3/test_image_processing_gemma3.py
Apache-2.0
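The Gemma3 Pan-and-Scan test above depends on the input resolution deciding whether the PaS path activates at all. A hypothetical sketch of that gating (parameter names, thresholds, and the crop-count formula here are illustrative only, not Gemma3's actual image processor attributes):

```python
def pan_and_scan_num_crops(width, height, min_ratio_to_activate=1.2, max_num_crops=4):
    # Hypothetical gating: only images whose aspect ratio exceeds a
    # threshold are split into extra crops along the long side, capped at
    # max_num_crops. Near-square images skip PaS entirely, which is why
    # the test must choose its input resolution carefully.
    long_side, short_side = max(width, height), min(width, height)
    ratio = long_side / short_side
    if ratio < min_ratio_to_activate:
        return 0  # PaS path disabled; only the full image is used
    return min(max_num_crops, int(ratio))

print(pan_and_scan_num_crops(200, 100))  # wide image → extra crops
print(pan_and_scan_num_crops(110, 100))  # near-square → no crops
```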
def test_automodelforcausallm(self): """ Regression test for #36741/#36917 -- make sure `AutoModelForCausalLM` works with a Gemma3 config, i.e. that `AutoModelForCausalLM.from_pretrained` pulls the text config before loading the model """ config = self.model_tester.get_config() ...
Regression test for #36741/#36917 -- make sure `AutoModelForCausalLM` works with a Gemma3 config, i.e. that `AutoModelForCausalLM.from_pretrained` pulls the text config before loading the model
test_automodelforcausallm
python
huggingface/transformers
tests/models/gemma3/test_modeling_gemma3.py
https://github.com/huggingface/transformers/blob/master/tests/models/gemma3/test_modeling_gemma3.py
Apache-2.0