| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def test_inference_fp16(self):
r"""
A small test to make sure that inference works in half precision without any problem.
"""
model = PvtForImageClassification.from_pretrained("Zetatech/pvt-tiny-224", torch_dtype=torch.float16)
model.to(torch_device)
image_processor = PvtI... |
A small test to make sure that inference works in half precision without any problem.
| test_inference_fp16 | python | huggingface/transformers | tests/models/pvt/test_modeling_pvt.py | https://github.com/huggingface/transformers/blob/master/tests/models/pvt/test_modeling_pvt.py | Apache-2.0 |
def test_inference_fp16(self):
r"""
A small test to make sure that inference works in half precision without any problem.
"""
model = PvtV2ForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0", torch_dtype=torch.float16)
model.to(torch_device)
image_processor = Auto... |
A small test to make sure that inference works in half precision without any problem.
| test_inference_fp16 | python | huggingface/transformers | tests/models/pvt_v2/test_modeling_pvt_v2.py | https://github.com/huggingface/transformers/blob/master/tests/models/pvt_v2/test_modeling_pvt_v2.py | Apache-2.0 |
def test_apply_chat_template_video_special_processing(self):
"""
Tests that models can use their own preprocessing to preprocess conversations.
"""
processor = self.get_processor()
if processor.chat_template is None:
self.skipTest("Processor has no chat template")
... |
Tests that models can use their own preprocessing to preprocess conversations.
| test_apply_chat_template_video_special_processing | python | huggingface/transformers | tests/models/qwen2_5_omni/test_processor_qwen2_5_omni.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_5_omni/test_processor_qwen2_5_omni.py | Apache-2.0 |
def test_mismatching_num_image_tokens(self):
"""
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
"""... |
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
| test_mismatching_num_image_tokens | python | huggingface/transformers | tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_5_vl/test_modeling_qwen2_5_vl.py | Apache-2.0 |
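The kind of count check exercised by the test above can be sketched in plain Python. Everything here (the `validate_image_tokens` name and the `<image>` placeholder string) is an illustrative assumption, not the actual transformers implementation:

```python
def validate_image_tokens(text: str, num_images: int, image_token: str = "<image>") -> None:
    """Raise a ValueError with an explicit message when the number of image
    placeholders in `text` does not match the number of images provided."""
    num_tokens = text.count(image_token)
    if num_tokens != num_images:
        raise ValueError(
            f"Got {num_images} image(s) but the text contains {num_tokens} "
            f"'{image_token}' token(s); they must match."
        )
```

The same check naturally covers the multi-image case, since `str.count` tallies every occurrence of the placeholder in one prompt.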
def test_apply_chat_template_video_special_processing(self):
"""
Tests that models can use their own preprocessing to preprocess conversations.
"""
processor = self.get_processor()
if processor.chat_template is None:
self.skipTest("Processor has no chat template")
... |
Tests that models can use their own preprocessing to preprocess conversations.
| test_apply_chat_template_video_special_processing | python | huggingface/transformers | tests/models/qwen2_5_vl/test_processor_qwen2_5_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_5_vl/test_processor_qwen2_5_vl.py | Apache-2.0 |
def test_load_balancing_loss(self):
r"""
Let's make sure we can actually compute the loss and do a backward on it.
"""
config, input_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.num_labels = 3
config.num_experts = 8
config.expert_interval... |
Let's make sure we can actually compute the loss and do a backward on it.
| test_load_balancing_loss | python | huggingface/transformers | tests/models/qwen2_moe/test_modeling_qwen2_moe.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_moe/test_modeling_qwen2_moe.py | Apache-2.0 |
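The auxiliary loss this test computes and backpropagates through can be sketched as the Switch-Transformer-style balancing term: `num_experts * Σ_i f_i · P_i`, where `f_i` is the fraction of tokens routed to expert `i` and `P_i` the mean router probability for it. This is a minimal pure-Python sketch, not the library's tensorized implementation:

```python
def load_balancing_loss(router_probs, expert_indices, num_experts):
    """Switch-Transformer-style auxiliary load balancing loss.

    router_probs: per-token lists of router probabilities (len == num_experts each)
    expert_indices: the expert index chosen for each token
    Returns num_experts * sum_i(fraction_routed_i * mean_prob_i); uniform
    routing over uniform probabilities yields exactly 1.0.
    """
    num_tokens = len(router_probs)
    loss = 0.0
    for e in range(num_experts):
        fraction = sum(1 for idx in expert_indices if idx == e) / num_tokens
        mean_prob = sum(p[e] for p in router_probs) / num_tokens
        loss += fraction * mean_prob
    return num_experts * loss
```

Because the balanced case evaluates to 1.0 and skewed routing scores higher, the loss pushes the router toward an even token split.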
def test_mismatching_num_image_tokens(self):
"""
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
"""... |
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
| test_mismatching_num_image_tokens | python | huggingface/transformers | tests/models/qwen2_vl/test_modeling_qwen2_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_vl/test_modeling_qwen2_vl.py | Apache-2.0 |
def test_forward_with_rope_deltas_cached(self):
"""
Tests that Qwen2-VL computes new rope deltas every forward pass with new set of inputs.
Rope deltas are cached when we generate and re-used for the decoding phase, but are not reset
automatically after generation ends. See https://github.co... |
Tests that Qwen2-VL computes new rope deltas every forward pass with new set of inputs.
Rope deltas are cached when we generate and re-used for the decoding phase, but are not reset
automatically after generation ends. See https://github.com/huggingface/transformers/pull/36013 for more
| test_forward_with_rope_deltas_cached | python | huggingface/transformers | tests/models/qwen2_vl/test_modeling_qwen2_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_vl/test_modeling_qwen2_vl.py | Apache-2.0 |
def test_apply_chat_template_video_special_processing(self):
"""
Tests that models can use their own preprocessing to preprocess conversations.
"""
processor = self.get_processor()
if processor.chat_template is None:
self.skipTest("Processor has no chat template")
... |
Tests that models can use their own preprocessing to preprocess conversations.
| test_apply_chat_template_video_special_processing | python | huggingface/transformers | tests/models/qwen2_vl/test_processor_qwen2_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_vl/test_processor_qwen2_vl.py | Apache-2.0 |
def test_special_mm_token_truncation(self):
"""Tests that special vision tokens do not get truncated when `truncation=True` is set."""
processor = self.get_processor()
input_str = self.prepare_text_inputs(batch_size=2, modality="image")
image_input = self.prepare_image_inputs(batch_siz... | Tests that special vision tokens do not get truncated when `truncation=True` is set. | test_special_mm_token_truncation | python | huggingface/transformers | tests/models/qwen2_vl/test_processor_qwen2_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_vl/test_processor_qwen2_vl.py | Apache-2.0 |
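The truncation behaviour the test above checks can be sketched as a token-level filter that always keeps special (e.g. vision) tokens and drops ordinary tokens until the sequence fits. `truncate_keep_special` is a hypothetical illustration of the property, not the tokenizer's actual algorithm:

```python
def truncate_keep_special(token_ids, max_length, special_ids):
    """Truncate to max_length while keeping every id in special_ids.

    Ordinary ids are kept left-to-right until the budget left over after
    reserving room for all special ids is exhausted.
    """
    n_special = sum(1 for t in token_ids if t in special_ids)
    if max_length < n_special:
        raise ValueError("max_length too small to keep all special tokens")
    n_regular_keep = max_length - n_special
    out = []
    for t in token_ids:
        if t in special_ids:
            out.append(t)
        elif n_regular_keep > 0:
            out.append(t)
            n_regular_keep -= 1
    return out
```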
def test_nested_input(self):
"""Tests that the processor can work with nested list where each video is a list of arrays"""
for video_processing_class in self.video_processor_list:
video_processing = video_processing_class(**self.video_processor_dict)
video_inputs = self.video_pro... | Tests that the processor can work with nested list where each video is a list of arrays | test_nested_input | python | huggingface/transformers | tests/models/qwen2_vl/test_video_processing_qwen2_vl.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen2_vl/test_video_processing_qwen2_vl.py | Apache-2.0 |
def test_load_balancing_loss(self):
r"""
Let's make sure we can actually compute the loss and do a backward on it.
"""
config, input_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.num_labels = 3
config.num_experts = 8
config.expert_interval... |
Let's make sure we can actually compute the loss and do a backward on it.
| test_load_balancing_loss | python | huggingface/transformers | tests/models/qwen3_moe/test_modeling_qwen3_moe.py | https://github.com/huggingface/transformers/blob/master/tests/models/qwen3_moe/test_modeling_qwen3_moe.py | Apache-2.0 |
def _assert_tensors_equal(a, b, atol=1e-12, prefix=""):
"""If tensors not close, or a and b aren't both tensors, raise a nice Assertion error."""
if a is None and b is None:
return True
try:
if torch.allclose(a, b, atol=atol):
return True
raise
except Exception:
... | If tensors not close, or a and b aren't both tensors, raise a nice Assertion error. | _assert_tensors_equal | python | huggingface/transformers | tests/models/rag/test_modeling_rag.py | https://github.com/huggingface/transformers/blob/master/tests/models/rag/test_modeling_rag.py | Apache-2.0 |
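The helper above wraps `torch.allclose` so a failure names the offending comparison. A torch-free sketch of the same pattern, with `assert_lists_close` as an illustrative name, looks like this:

```python
def assert_lists_close(a, b, atol=1e-12, prefix=""):
    """Raise a descriptive AssertionError when two equal-length lists of
    floats differ by more than atol; None compared to None passes."""
    if a is None and b is None:
        return True
    assert len(a) == len(b), f"{prefix}: length mismatch {len(a)} vs {len(b)}"
    worst = max(abs(x - y) for x, y in zip(a, b))
    assert worst <= atol, f"{prefix}: max abs diff {worst} exceeds atol {atol}"
    return True
```

Reporting the worst absolute difference in the message is what makes a tolerance failure debuggable at a glance.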
def require_retrieval(test_case):
"""
Decorator marking a test that requires a set of dependencies necessary to perform retrieval with
[`RagRetriever`].
These tests are skipped when respective libraries are not installed.
"""
if not (is_torch_available() and is_datasets_available() and is_fai... |
Decorator marking a test that requires a set of dependencies necessary to perform retrieval with
[`RagRetriever`].
These tests are skipped when respective libraries are not installed.
| require_retrieval | python | huggingface/transformers | tests/models/rag/test_modeling_rag.py | https://github.com/huggingface/transformers/blob/master/tests/models/rag/test_modeling_rag.py | Apache-2.0 |
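The skip-decorator pattern used by `require_retrieval` (the real one checks `is_torch_available`, `is_datasets_available`, and the faiss availability helper) can be sketched with stdlib `unittest` alone; `require_libs` and its import-probe are illustrative stand-ins:

```python
import unittest


def _importable(name):
    """Return True when the module `name` can be imported."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False


def require_libs(*names):
    """Decorator that skips a test unless every library in `names` is importable."""
    missing = [n for n in names if not _importable(n)]
    return unittest.skipUnless(not missing, f"test requires {', '.join(names)}")
```

`unittest.skipUnless` returns either an identity decorator or one that marks the test as skipped, so the check runs once at class-definition time rather than inside every test body.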
def require_retrieval(test_case):
"""
Decorator marking a test that requires a set of dependencies necessary to perform retrieval with
[`RagRetriever`].
These tests are skipped when respective libraries are not installed.
"""
if not (is_tf_available() and is_datasets_available() and is_faiss_... |
Decorator marking a test that requires a set of dependencies necessary to perform retrieval with
[`RagRetriever`].
These tests are skipped when respective libraries are not installed.
| require_retrieval | python | huggingface/transformers | tests/models/rag/test_modeling_tf_rag.py | https://github.com/huggingface/transformers/blob/master/tests/models/rag/test_modeling_tf_rag.py | Apache-2.0 |
def test_create_position_ids_respects_padding_index(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaEm... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaEmbeddings.padding_idx + 1
| test_create_position_ids_respects_padding_index | python | huggingface/transformers | tests/models/roberta/test_modeling_roberta.py | https://github.com/huggingface/transformers/blob/master/tests/models/roberta/test_modeling_roberta.py | Apache-2.0 |
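The behaviour under test is the cumsum trick RoBERTa uses: mask out padding, take a running count of non-padding tokens, and offset everything by `padding_idx`, so the first real position is `padding_idx + 1` and padding positions keep `padding_idx`. A loop-based pure-Python equivalent of that tensor computation:

```python
def create_position_ids(input_ids, padding_idx):
    """Loop equivalent of `cumsum(ne(padding_idx)) * mask + padding_idx`:
    each non-padding token gets padding_idx + (count of non-padding tokens
    seen so far); padding tokens keep padding_idx itself."""
    position_ids = []
    running = 0
    for tok in input_ids:
        if tok == padding_idx:
            position_ids.append(padding_idx)
        else:
            running += 1
            position_ids.append(padding_idx + running)
    return position_ids
```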
def test_create_position_ids_from_inputs_embeds(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaEmbedd... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaEmbeddings.padding_idx + 1
| test_create_position_ids_from_inputs_embeds | python | huggingface/transformers | tests/models/roberta/test_modeling_roberta.py | https://github.com/huggingface/transformers/blob/master/tests/models/roberta/test_modeling_roberta.py | Apache-2.0 |
def test_create_position_ids_respects_padding_index(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaPr... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaPreLayerNormEmbeddings.padding_idx + 1
| test_create_position_ids_respects_padding_index | python | huggingface/transformers | tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py | https://github.com/huggingface/transformers/blob/master/tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py | Apache-2.0 |
def test_create_position_ids_from_inputs_embeds(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaPreLay... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is RobertaPreLayerNormEmbeddings.padding_idx + 1
| test_create_position_ids_from_inputs_embeds | python | huggingface/transformers | tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py | https://github.com/huggingface/transformers/blob/master/tests/models/roberta_prelayernorm/test_modeling_roberta_prelayernorm.py | Apache-2.0 |
def assertInterval(self, member, container, msg=None):
r"""
Simple utility function to check if a member is inside an interval.
"""
if isinstance(member, torch.Tensor):
max_value, min_value = member.max().item(), member.min().item()
elif isinstance(member, list) or is... |
Simple utility function to check if a member is inside an interval.
| assertInterval | python | huggingface/transformers | tests/models/rwkv/test_modeling_rwkv.py | https://github.com/huggingface/transformers/blob/master/tests/models/rwkv/test_modeling_rwkv.py | Apache-2.0 |
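The utility above reduces a tensor or (possibly nested) list to its min/max before checking the bounds. A torch-free sketch of the same idea, with `in_interval` as an illustrative name:

```python
def in_interval(values, low, high):
    """Return True when every number in `values` (a possibly nested
    list/tuple) lies inside the closed interval [low, high]."""
    flat = []
    stack = [values]
    while stack:
        v = stack.pop()
        if isinstance(v, (list, tuple)):
            stack.extend(v)
        else:
            flat.append(v)
    return min(flat) >= low and max(flat) <= high
```

Checking only `min` and `max` is enough: if both extremes are inside the interval, every element is.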
def test_attention_outputs(self):
r"""
Overriding the test_attention_outputs test, as the attention outputs of Rwkv differ from those of other models:
they have shape `batch_size, seq_len, hidden_size`.
"""
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()... |
Overriding the test_attention_outputs test, as the attention outputs of Rwkv differ from those of other models:
they have shape `batch_size, seq_len, hidden_size`.
| test_attention_outputs | python | huggingface/transformers | tests/models/rwkv/test_modeling_rwkv.py | https://github.com/huggingface/transformers/blob/master/tests/models/rwkv/test_modeling_rwkv.py | Apache-2.0 |
def test_sdpa_can_dispatch_composite_models(self):
"""
Tests if composite models dispatch correctly on SDPA/eager when requested so when loading the model.
This tests only by looking at layer names, as usually SDPA layers are called "SDPAAttention".
In contrast to the above test, this on... |
Tests if composite models dispatch correctly on SDPA/eager when requested so when loading the model.
This tests only by looking at layer names, as usually SDPA layers are called "SDPAAttention".
In contrast to the above test, this one checks if the "config._attn_implementation" is a dict after ... | test_sdpa_can_dispatch_composite_models | python | huggingface/transformers | tests/models/sam/test_modeling_sam.py | https://github.com/huggingface/transformers/blob/master/tests/models/sam/test_modeling_sam.py | Apache-2.0 |
def prepare_mask_inputs(self):
"""This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of PyTorch tensors if one specifies torchify=True.
"""
mask_inputs = [np.random.randint(255, size=(30, 400), dtype=np.uint8)]
mask_inp... | This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of PyTorch tensors if one specifies torchify=True.
| prepare_mask_inputs | python | huggingface/transformers | tests/models/sam/test_processor_sam.py | https://github.com/huggingface/transformers/blob/master/tests/models/sam/test_processor_sam.py | Apache-2.0 |
def test_rle_encoding(self):
"""
Test the run-length encoding function.
"""
# Test that a mask of all zeros returns a single run [height * width].
input_mask = torch.zeros((1, 2, 2), dtype=torch.long) # shape: 1 x 2 x 2
rle = _mask_to_rle_pytorch(input_mask)
sel... |
Test the run-length encoding function.
| test_rle_encoding | python | huggingface/transformers | tests/models/sam/test_processor_sam.py | https://github.com/huggingface/transformers/blob/master/tests/models/sam/test_processor_sam.py | Apache-2.0 |
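The encoding convention exercised above is that the first run always counts zeros, so an all-zero `height x width` mask encodes as the single run `[height * width]` and a mask starting with ones gets a leading 0-length run. A plain-Python sketch over a flattened mask (`mask_to_rle` is an illustrative stand-in for the library's `_mask_to_rle_pytorch`/`_mask_to_rle_tf`):

```python
def mask_to_rle(flat_mask):
    """Run-length encode a flat binary mask.

    By convention the first run counts zeros; runs then alternate
    zeros/ones, so the values themselves need not be stored.
    """
    runs = []
    current_value, count = 0, 0
    for pixel in flat_mask:
        if pixel == current_value:
            count += 1
        else:
            runs.append(count)
            current_value, count = pixel, 1
    runs.append(count)
    return runs
```

Because the run lengths always sum to the number of pixels, the original mask shape plus the run list is enough to decode.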
def test_rle_encoding(self):
"""
Test the run-length encoding function.
"""
# Test that a mask of all zeros returns a single run [height * width].
input_mask = tf.zeros((1, 2, 2), dtype=tf.int64) # shape: 1 x 2 x 2
rle = _mask_to_rle_tf(input_mask)
self.assertEq... |
Test the run-length encoding function.
| test_rle_encoding | python | huggingface/transformers | tests/models/sam/test_processor_sam.py | https://github.com/huggingface/transformers/blob/master/tests/models/sam/test_processor_sam.py | Apache-2.0 |
def test_sdpa_can_dispatch_composite_models(self):
"""
Tests if composite models dispatch correctly on SDPA/eager when requested so when loading the model.
This tests only by looking at layer names, as usually SDPA layers are called "SDPAAttention".
In contrast to the above test, this on... |
Tests if composite models dispatch correctly on SDPA/eager when requested so when loading the model.
This tests only by looking at layer names, as usually SDPA layers are called "SDPAAttention".
In contrast to the above test, this one checks if the "config._attn_implementation" is a dict after ... | test_sdpa_can_dispatch_composite_models | python | huggingface/transformers | tests/models/sam_hq/test_modeling_sam_hq.py | https://github.com/huggingface/transformers/blob/master/tests/models/sam_hq/test_modeling_sam_hq.py | Apache-2.0 |
def prepare_mask_inputs(self):
"""This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of PyTorch tensors if one specifies torchify=True.
"""
mask_inputs = [np.random.randint(255, size=(30, 400), dtype=np.uint8)]
mask_inp... | This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of PyTorch tensors if one specifies torchify=True.
| prepare_mask_inputs | python | huggingface/transformers | tests/models/sam_hq/test_processor_samhq.py | https://github.com/huggingface/transformers/blob/master/tests/models/sam_hq/test_processor_samhq.py | Apache-2.0 |
def prepare_image_inputs(
self,
batch_size=None,
min_resolution=None,
max_resolution=None,
num_channels=None,
num_images=None,
size_divisor=None,
equal_resolution=False,
numpify=False,
torchify=False,
):
"""This function prepare... | This function prepares a list of PIL images, or a list of numpy arrays if one specifies numpify=True,
or a list of PyTorch tensors if one specifies torchify=True.
One can specify whether the images are of the same resolution or not.
| prepare_image_inputs | python | huggingface/transformers | tests/models/smolvlm/test_image_processing_smolvlm.py | https://github.com/huggingface/transformers/blob/master/tests/models/smolvlm/test_image_processing_smolvlm.py | Apache-2.0 |
def test_text_only_inference(self):
"""Test that the processor works correctly with text-only input."""
processor_components = self.prepare_components()
processor_components["tokenizer"] = self.get_component("tokenizer", padding_side="left")
processor_kwargs = self.prepare_processor_dict... | Test that the processor works correctly with text-only input. | test_text_only_inference | python | huggingface/transformers | tests/models/smolvlm/test_processor_smolvlm.py | https://github.com/huggingface/transformers/blob/master/tests/models/smolvlm/test_processor_smolvlm.py | Apache-2.0 |
def test_missing_images_error(self):
"""Test that appropriate error is raised when images are referenced but not provided."""
processor = self.get_processor()
# Test single text with image token but no image
text = "Let me show you this image: <image> What do you think?"
with se... | Test that appropriate error is raised when images are referenced but not provided. | test_missing_images_error | python | huggingface/transformers | tests/models/smolvlm/test_processor_smolvlm.py | https://github.com/huggingface/transformers/blob/master/tests/models/smolvlm/test_processor_smolvlm.py | Apache-2.0 |
def test_special_mm_token_truncation(self):
"""Tests that special vision tokens do not get truncated when `truncation=True` is set."""
processor = self.get_processor()
input_str = self.prepare_text_inputs(batch_size=2, modality="image")
image_input = self.prepare_image_inputs(batch_siz... | Tests that special vision tokens do not get truncated when `truncation=True` is set. | test_special_mm_token_truncation | python | huggingface/transformers | tests/models/smolvlm/test_processor_smolvlm.py | https://github.com/huggingface/transformers/blob/master/tests/models/smolvlm/test_processor_smolvlm.py | Apache-2.0 |
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for stride in self.conv_stride:
input_lengths = (input_lengths // stride) - 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/speecht5/test_modeling_speecht5.py | https://github.com/huggingface/transformers/blob/master/tests/models/speecht5/test_modeling_speecht5.py | Apache-2.0 |
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for i in range(self.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/speech_to_text/test_modeling_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/tests/models/speech_to_text/test_modeling_speech_to_text.py | Apache-2.0 |
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for _ in range(self.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/speech_to_text/test_modeling_tf_speech_to_text.py | https://github.com/huggingface/transformers/blob/master/tests/models/speech_to_text/test_modeling_tf_speech_to_text.py | Apache-2.0 |
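The three `get_subsampled_output_lengths` helpers above all apply the standard 1-D convolution length formula once per layer. With kernel 3, stride 2, and padding 1 (an assumption matching the `(L - 1) // 2 + 1` recurrence in the speech_to_text snippets), a self-contained sketch looks like this:

```python
def subsampled_lengths(input_length, num_conv_layers=2, kernel=3, stride=2, padding=1):
    """Apply the conv output-length formula
    out = (in + 2*padding - kernel) // stride + 1 once per layer."""
    length = input_length
    for _ in range(num_conv_layers):
        length = (length + 2 * padding - kernel) // stride + 1
    return length
```

With these defaults the formula reduces to `(L - 1) // 2 + 1` per layer, so a 100-frame input is subsampled to 50 and then to 25.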
def test_batching_equivalence(self):
"""
Overwriting ModelTesterMixin.test_batching_equivalence since SuperGlue returns `matching_scores` tensors full of
zeros which causes the test to fail, because cosine_similarity of two zero tensors is 0.
Discussed here : https://github.com/huggingfa... |
Overwriting ModelTesterMixin.test_batching_equivalence since SuperGlue returns `matching_scores` tensors full of
zeros which causes the test to fail, because cosine_similarity of two zero tensors is 0.
Discussed here : https://github.com/huggingface/transformers/pull/29886#issuecomment-24815394... | test_batching_equivalence | python | huggingface/transformers | tests/models/superglue/test_modeling_superglue.py | https://github.com/huggingface/transformers/blob/master/tests/models/superglue/test_modeling_superglue.py | Apache-2.0 |
def create_and_check_generate_with_past_key_values(
self,
config,
input_ids,
decoder_input_ids,
attention_mask,
decoder_attention_mask,
lm_labels,
):
r"""
This test does not pass for small models due to precision errors. It is therefore only ru... |
This test does not pass for small models due to precision errors. It is therefore only run for slightly larger models.
| create_and_check_generate_with_past_key_values | python | huggingface/transformers | tests/models/switch_transformers/test_modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/tests/models/switch_transformers/test_modeling_switch_transformers.py | Apache-2.0 |
def test_equivalency_balancy_loss(self):
r"""
This test checks if the load balancing loss is correctly implemented
as in the original implementation of the Switch Transformer.
"""
router_probs = torch.Tensor(
[
[0.35490513, 0.60419905],
[0.42... |
This test checks if the load balancing loss is correctly implemented
as in the original implementation of the Switch Transformer.
| test_equivalency_balancy_loss | python | huggingface/transformers | tests/models/switch_transformers/test_modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/tests/models/switch_transformers/test_modeling_switch_transformers.py | Apache-2.0 |
def test_equivalency_router_z_loss(self):
r"""
This test checks if the router z loss is correctly implemented
as in the original implementation of the Switch Transformer.
"""
logits = torch.Tensor(
[
[
[-4.2124424, 3.891939, -3.648... |
This test checks if the router z loss is correctly implemented
as in the original implementation of the Switch Transformer.
| test_equivalency_router_z_loss | python | huggingface/transformers | tests/models/switch_transformers/test_modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/tests/models/switch_transformers/test_modeling_switch_transformers.py | Apache-2.0 |
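The router z-loss being checked is the mean over tokens of the squared log-sum-exp of the router logits, a term that penalizes large logit magnitudes to keep routing numerically stable. A pure-Python sketch (the stabilized log-sum-exp is the usual max-shift trick):

```python
import math


def router_z_loss(logits):
    """Mean over tokens of logsumexp(token_logits) ** 2.

    logits: list of per-token logit lists. Uses the max-shifted
    log-sum-exp for numerical stability.
    """
    total = 0.0
    for token_logits in logits:
        m = max(token_logits)
        lse = m + math.log(sum(math.exp(x - m) for x in token_logits))
        total += lse ** 2
    return total / len(logits)
```

For uniform zero logits over two experts the loss is `ln(2) ** 2`; pushing any logit up drives the log-sum-exp, and hence the penalty, up with it.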
def test_equivalency_token_chose_masked_router(self):
r"""
This test checks the equivalency of the `SwitchTransformersTop1Router`
with the original implementation from here: TODO: provide link
"""
input_tokens = torch.Tensor(
[
[
[0.6433... |
This test checks the equivalency of the `SwitchTransformersTop1Router`
with the original implementation from here: TODO: provide link
| test_equivalency_token_chose_masked_router | python | huggingface/transformers | tests/models/switch_transformers/test_modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/tests/models/switch_transformers/test_modeling_switch_transformers.py | Apache-2.0 |
def test_small_logits(self):
r"""
Logits testing to check implementation consistency between `t5x` implementation
and `transformers` implementation of Switch-C transformers. We only check the logits
of the first batch.
"""
model = SwitchTransformersModel.from_pretrained("... |
Logits testing to check implementation consistency between `t5x` implementation
and `transformers` implementation of Switch-C transformers. We only check the logits
of the first batch.
| test_small_logits | python | huggingface/transformers | tests/models/switch_transformers/test_modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/tests/models/switch_transformers/test_modeling_switch_transformers.py | Apache-2.0 |
def test_token_dropping(self):
r"""
This test checks if the token dropping actually drops tokens.
"""
config = SwitchTransformersConfig(expert_capacity=0) # we drop everything
moe = SwitchTransformersSparseMLP(config)
dropped_token_results = moe(torch.randn(2, 3, 768))[0... |
This test checks if the token dropping actually drops tokens.
| test_token_dropping | python | huggingface/transformers | tests/models/switch_transformers/test_modeling_switch_transformers.py | https://github.com/huggingface/transformers/blob/master/tests/models/switch_transformers/test_modeling_switch_transformers.py | Apache-2.0 |
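The dropping mechanism the test triggers with `expert_capacity=0` can be sketched as a per-expert admission counter: once an expert has admitted `expert_capacity` tokens, every later token routed to it is dropped (its MoE output becomes zero). This single-expert simplification is an illustration of the idea, not the `SwitchTransformersSparseMLP` code:

```python
def route_with_capacity(num_tokens, expert_capacity):
    """Return a keep/drop flag per token under a capacity limit for one
    expert: tokens are admitted first-come-first-served until the capacity
    is reached, after which everything is dropped."""
    kept = []
    admitted = 0
    for _ in range(num_tokens):
        if admitted < expert_capacity:
            kept.append(True)
            admitted += 1
        else:
            kept.append(False)
    return kept
```

With `expert_capacity=0` no token is ever admitted, which is exactly the "we drop everything" configuration the test asserts on.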
def test_small_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_... |
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_in>'
>>> t5_model = t5.models.MtfModel(mod... | test_small_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_flax_t5.py | Apache-2.0 |
def test_small_v1_1_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_v1_1_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path... |
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_v1_1_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_in>'
>>> t5_model = t5.models.MtfMode... | test_small_v1_1_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_flax_t5.py | Apache-2.0 |
def test_small_byt5_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.9.1
>>> path_to_byt5_small_checkpoint = '<fill_in>'
>>> t5_model = t5.models.MtfModel(model_dir=path_to_tf_checkpoint, batch_size=1, tpu=None)
>>> vocab = t5.data.ByteV... |
For comparison run:
>>> import t5 # pip install t5==0.9.1
>>> path_to_byt5_small_checkpoint = '<fill_in>'
>>> t5_model = t5.models.MtfModel(model_dir=path_to_tf_checkpoint, batch_size=1, tpu=None)
>>> vocab = t5.data.ByteVocabulary()
>>> score = t5_model.score(inputs=[... | test_small_byt5_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_flax_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_flax_t5.py | Apache-2.0 |
def test_fp16_fp32_conversion(self):
r"""
A test to check whether the argument `keep_in_fp32_modules` correctly does its job
"""
orig_import = __import__
accelerate_mock = unittest.mock.Mock()
# mock import of accelerate
def import_accelerate_mock(name, *args, **... |
A test to check whether the argument `keep_in_fp32_modules` correctly does its job
| test_fp16_fp32_conversion | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_torch_quant(self):
r"""
Test that a simple `torch.quantization.quantize_dynamic` call works on a T5 model.
"""
model_name = "google/flan-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
... |
Test that a simple `torch.quantization.quantize_dynamic` call works on a T5 model.
| test_torch_quant | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_small_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_... |
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_in>'
>>> t5_model = t5.models.MtfModel(mod... | test_small_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_small_v1_1_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_v1_1_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path... |
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_v1_1_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_in>'
>>> t5_model = t5.models.MtfMode... | test_small_v1_1_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_small_byt5_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.9.1
>>> path_to_byt5_small_checkpoint = '<fill_in>'
>>> t5_model = t5.models.MtfModel(model_dir=path_to_tf_checkpoint, batch_size=1, tpu=None)
>>> vocab = t5.data.ByteV... |
For comparison run:
>>> import t5 # pip install t5==0.9.1
>>> path_to_byt5_small_checkpoint = '<fill_in>'
>>> t5_model = t5.models.MtfModel(model_dir=path_to_tf_checkpoint, batch_size=1, tpu=None)
>>> vocab = t5.data.ByteVocabulary()
>>> score = t5_model.score(inputs=[... | test_small_byt5_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_export_encoder(self):
"""Test exporting T5EncoderModel to torch export format."""
if not is_torch_greater_or_equal_than_2_4:
self.skipTest("This test requires torch >= 2.4 to run.")
from transformers.integrations.executorch import Seq2SeqLMEncoderExportableModule
m... | Test exporting T5EncoderModel to torch export format. | test_export_encoder | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_export_decoder(self):
"""Test exporting T5 decoder with static cache to torch export format."""
if not is_torch_greater_or_equal_than_2_4:
self.skipTest("This test requires torch >= 2.4 to run.")
from transformers import AutoModelForSeq2SeqLM, T5ForConditionalGeneration
... | Test exporting T5 decoder with static cache to torch export format. | test_export_decoder | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_export_t5_summarization(self):
"""Test composing exported T5 encoder and decoder for summarization."""
if not is_torch_greater_or_equal_than_2_4:
self.skipTest("This test requires torch >= 2.4 to run.")
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, T5ForCon... | Test composing exported T5 encoder and decoder for summarization. | test_export_t5_summarization | python | huggingface/transformers | tests/models/t5/test_modeling_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_t5.py | Apache-2.0 |
def test_small_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_... |
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_in>'
>>> t5_model = t5.models.MtfModel(mod... | test_small_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_tf_t5.py | Apache-2.0 |
def test_small_v1_1_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_v1.1_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path... |
For comparison run:
>>> import t5 # pip install t5==0.7.1
>>> from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
>>> path_to_mtf_small_t5_v1.1_checkpoint = '<fill_in>'
>>> path_to_mtf_small_spm_model_path = '<fill_in>'
>>> t5_model = t5.models.MtfMode... | test_small_v1_1_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_tf_t5.py | Apache-2.0 |
def test_small_byt5_integration_test(self):
"""
For comparison run:
>>> import t5 # pip install t5==0.9.1
>>> path_to_byt5_small_checkpoint = '<fill_in>'
>>> t5_model = t5.models.MtfModel(model_dir=path_to_tf_checkpoint, batch_size=1, tpu=None)
>>> vocab = t5.data.ByteV... |
For comparison run:
>>> import t5 # pip install t5==0.9.1
>>> path_to_byt5_small_checkpoint = '<fill_in>'
>>> t5_model = t5.models.MtfModel(model_dir=path_to_tf_checkpoint, batch_size=1, tpu=None)
>>> vocab = t5.data.ByteVocabulary()
>>> score = t5_model.score(inputs=[... | test_small_byt5_integration_test | python | huggingface/transformers | tests/models/t5/test_modeling_tf_t5.py | https://github.com/huggingface/transformers/blob/master/tests/models/t5/test_modeling_tf_t5.py | Apache-2.0 |
def _prepare_tables(self):
"""Prepares two tables, both with three distinct rows.
The first table has two columns:
1.0, 2.0 | 3.0
2.0, 0.0 | 1.0
1.0, 3.0 | 4.0
The second table has three columns:
1.0 | 2.0 | 3.0
2.0 | 0.0 | 1.0
1.0 | 3.0 | 4.0
... | Prepares two tables, both with three distinct rows.
The first table has two columns:
1.0, 2.0 | 3.0
2.0, 0.0 | 1.0
1.0, 3.0 | 4.0
The second table has three columns:
1.0 | 2.0 | 3.0
2.0 | 0.0 | 1.0
1.0 | 3.0 | 4.0
Returns:
SegmentedTensors ... | _prepare_tables | python | huggingface/transformers | tests/models/tapas/test_modeling_tapas.py | https://github.com/huggingface/transformers/blob/master/tests/models/tapas/test_modeling_tapas.py | Apache-2.0 |
def _prepare_tables(self):
"""Prepares two tables, both with three distinct rows.
The first table has two columns:
1.0, 2.0 | 3.0
2.0, 0.0 | 1.0
1.0, 3.0 | 4.0
The second table has three columns:
1.0 | 2.0 | 3.0
2.0 | 0.0 | 1.0
1.0 | 3.0 | 4.0
... | Prepares two tables, both with three distinct rows.
The first table has two columns:
1.0, 2.0 | 3.0
2.0, 0.0 | 1.0
1.0, 3.0 | 4.0
The second table has three columns:
1.0 | 2.0 | 3.0
2.0 | 0.0 | 1.0
1.0 | 3.0 | 4.0
Returns:
SegmentedTensors ... | _prepare_tables | python | huggingface/transformers | tests/models/tapas/test_modeling_tf_tapas.py | https://github.com/huggingface/transformers/blob/master/tests/models/tapas/test_modeling_tf_tapas.py | Apache-2.0 |
def get_expected_values(self, image_inputs, batched=False):
"""
This function computes the expected height and width when providing images to TvpImageProcessor,
assuming do_resize is set to True with a scalar size.
"""
if not batched:
return (int(self.pad_size["height... |
This function computes the expected height and width when providing images to TvpImageProcessor,
assuming do_resize is set to True with a scalar size.
| get_expected_values | python | huggingface/transformers | tests/models/tvp/test_image_processing_tvp.py | https://github.com/huggingface/transformers/blob/master/tests/models/tvp/test_image_processing_tvp.py | Apache-2.0 |
def test_batch_encode_dynamic_overflowing(self):
"""
When calling batch_encode with multiple sequences, it can return a different number of
overflowing encodings for each sequence:
[
Sequence 1: [Encoding 1, Encoding 2],
Sequence 2: [Encoding 1],
Sequence 3: [E... |
When calling batch_encode with multiple sequences, it can return a different number of
overflowing encodings for each sequence:
[
Sequence 1: [Encoding 1, Encoding 2],
Sequence 2: [Encoding 1],
Sequence 3: [Encoding 1, Encoding 2, ... Encoding N]
]
Thi... | test_batch_encode_dynamic_overflowing | python | huggingface/transformers | tests/models/udop/test_tokenization_udop.py | https://github.com/huggingface/transformers/blob/master/tests/models/udop/test_tokenization_udop.py | Apache-2.0 |
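The variable-overflow behavior described above comes from windowing each tokenized sequence into `max_length` chunks, so longer sequences yield more encodings. A minimal, framework-free sketch of that windowing (the function name and the exact stride handling are assumptions for illustration, not the tokenizer's real implementation):

```python
def chunk_with_overflow(tokens, max_length, stride=0):
    """Split a token list into windows of max_length; consecutive
    windows overlap by `stride` tokens, mimicking return_overflowing_tokens."""
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_length])
        if start + max_length >= len(tokens):
            break
        start += max_length - stride
    return chunks

# A 10-token sequence overflows into three windows; a 2-token one into a single window.
print(chunk_with_overflow(list(range(10)), max_length=4, stride=1))
# [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
print(chunk_with_overflow([1, 2], max_length=4))
# [[1, 2]]
```

Because each input sequence produces its own list of windows, a batch of mixed-length sequences yields the ragged `[Encoding 1, ...]` structure the docstring describes, which then has to be padded to form a tensor.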
def test_mismatching_num_image_tokens(self):
"""
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
"""... |
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
| test_mismatching_num_image_tokens | python | huggingface/transformers | tests/models/video_llava/test_modeling_video_llava.py | https://github.com/huggingface/transformers/blob/master/tests/models/video_llava/test_modeling_video_llava.py | Apache-2.0 |
def test_vision_feature_layers(self, vision_feature_layer):
"""
Test that we can use either one vision feature layer, or a list of
vision feature layers.
"""
config, input_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.vision_feature_layer = vision... |
Test that we can use either one vision feature layer, or a list of
vision feature layers.
| test_vision_feature_layers | python | huggingface/transformers | tests/models/video_llava/test_modeling_video_llava.py | https://github.com/huggingface/transformers/blob/master/tests/models/video_llava/test_modeling_video_llava.py | Apache-2.0 |
def get_expected_values(self, image_inputs, batched=False):
"""
This function computes the expected height and width when providing images to ViltImageProcessor,
assuming do_resize is set to True with a scalar size and size_divisor.
"""
if not batched:
size = self.siz... |
This function computes the expected height and width when providing images to ViltImageProcessor,
assuming do_resize is set to True with a scalar size and size_divisor.
| get_expected_values | python | huggingface/transformers | tests/models/vilt/test_image_processing_vilt.py | https://github.com/huggingface/transformers/blob/master/tests/models/vilt/test_image_processing_vilt.py | Apache-2.0 |
def test_mismatching_num_image_tokens(self):
"""
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
"... |
Tests that VLMs throw an error with an explicit message saying what is wrong
when the number of images doesn't match the number of image tokens in the text.
Also we need to test multi-image cases when one prompt has multiple image tokens.
| test_mismatching_num_image_tokens | python | huggingface/transformers | tests/models/vipllava/test_modeling_vipllava.py | https://github.com/huggingface/transformers/blob/master/tests/models/vipllava/test_modeling_vipllava.py | Apache-2.0 |
def test_vision_feature_layers(self, vision_feature_layers):
"""
Test that we can use either one vision feature layer, or a list of
vision feature layers.
"""
# NOTE: vipllava uses vision_feature_layers instead of vision_feature_layer as the
# config key. The reason is th... |
Test that we can use either one vision feature layer, or a list of
vision feature layers.
| test_vision_feature_layers | python | huggingface/transformers | tests/models/vipllava/test_modeling_vipllava.py | https://github.com/huggingface/transformers/blob/master/tests/models/vipllava/test_modeling_vipllava.py | Apache-2.0 |
def test_inference_fp16(self):
r"""
A small test to make sure that inference works in half precision without any problem.
"""
model = ViTModel.from_pretrained("facebook/dino-vits8", torch_dtype=torch.float16, device_map="auto")
image_processor = self.default_image_processor
... |
A small test to make sure that inference works in half precision without any problem.
| test_inference_fp16 | python | huggingface/transformers | tests/models/vit/test_modeling_vit.py | https://github.com/huggingface/transformers/blob/master/tests/models/vit/test_modeling_vit.py | Apache-2.0 |
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for i in range(self.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/whisper/test_modeling_flax_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_modeling_flax_whisper.py | Apache-2.0 |
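The helper in this row applies the standard stride-2 convolution length formula once per conv layer. A dependency-free sketch of that computation (the stride of 2 per layer is taken from the loop body; the concrete frame counts below are illustrative):

```python
def subsampled_output_length(input_length: int, num_conv_layers: int) -> int:
    # Each stride-2 conv layer maps a sequence of length L to (L - 1) // 2 + 1,
    # roughly halving it.
    for _ in range(num_conv_layers):
        input_length = (input_length - 1) // 2 + 1
    return input_length

# A 3000-frame mel spectrogram through one, then two, such layers:
print(subsampled_output_length(3000, 1))  # 1500
print(subsampled_output_length(3000, 2))  # 750
```

The same formula is what the four Whisper test suites below reuse to predict encoder sequence lengths from raw feature lengths.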
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for i in range(self.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/whisper/test_modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_modeling_tf_whisper.py | Apache-2.0 |
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for i in range(self.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/whisper/test_modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_modeling_whisper.py | Apache-2.0 |
def get_subsampled_output_lengths(self, input_lengths):
"""
Computes the output length of the convolutional layers
"""
for i in range(self.num_conv_layers):
input_lengths = (input_lengths - 1) // 2 + 1
return input_lengths |
Computes the output length of the convolutional layers
| get_subsampled_output_lengths | python | huggingface/transformers | tests/models/whisper/test_modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_modeling_whisper.py | Apache-2.0 |
def test_find_longest_common_subsequence_old(self):
"""Test using the old processing functions used in the ASR pipeline, but that serves as a BC reference."""
max_source_positions = 1500
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
previous_sequence = [[51492, 406... | Test using the old processing functions used in the ASR pipeline, but that serves as a BC reference. | test_find_longest_common_subsequence_old | python | huggingface/transformers | tests/models/whisper/test_processor_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_processor_whisper.py | Apache-2.0 |
def _fast_find_longest_common_sequence(sequence_left, sequence_right):
"""Old processing function used in the ASR pipeline."""
seq_len_left = len(sequence_left)
seq_len_right = len(sequence_right)
counter = [[0] * (seq_len_right + 1) for _ in range(seq_len_left + 1)]
longest = 0
for i in range(s... | Old processing function used in the ASR pipeline. | _fast_find_longest_common_sequence | python | huggingface/transformers | tests/models/whisper/test_processor_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_processor_whisper.py | Apache-2.0 |
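The DP table in the truncated helper above is the classic longest-common-run construction: `counter[i+1][j+1]` extends the run ending at `sequence_left[i]` and `sequence_right[j]`. A self-contained sketch of what it computes — since the original's return value is cut off, this version simply returns the length of the longest contiguous common run, which is an assumption:

```python
def longest_common_run(sequence_left, sequence_right):
    # counter[i][j] = length of the contiguous common run ending at
    # sequence_left[i-1] and sequence_right[j-1]
    seq_len_left, seq_len_right = len(sequence_left), len(sequence_right)
    counter = [[0] * (seq_len_right + 1) for _ in range(seq_len_left + 1)]
    longest = 0
    for i in range(seq_len_left):
        for j in range(seq_len_right):
            if sequence_left[i] == sequence_right[j]:
                counter[i + 1][j + 1] = counter[i][j] + 1
                longest = max(longest, counter[i + 1][j + 1])
    return longest

print(longest_common_run([1, 2, 3, 4, 5], [9, 2, 3, 4, 7]))  # 3  (the run 2, 3, 4)
```

In the ASR pipeline this kind of overlap search is what lets the end of one decoded chunk be stitched to the beginning of the next.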
def _find_timestamp_sequence(sequences, tokenizer, feature_extractor, max_source_positions):
"""
Old processing function used in the ASR pipeline.
Computes the final sequences by merging the end of the nth sequence with the beginning of the n+1th sequence. Since
`WhisperForConditionalGeneration` produc... |
Old processing function used in the ASR pipeline.
Computes the final sequences by merging the end of the nth sequence with the beginning of the n+1th sequence. Since
`WhisperForConditionalGeneration` produces the timestamps pairwise, we filter the consecutive timestamps and only
iterate over them. We ... | _find_timestamp_sequence | python | huggingface/transformers | tests/models/whisper/test_processor_whisper.py | https://github.com/huggingface/transformers/blob/master/tests/models/whisper/test_processor_whisper.py | Apache-2.0 |
def test_full_tokenizer(self):
"""Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt"""
tokenizer = XLMTokenizer(self.vocab_file, self.merges_file)
text = "lower"
bpe_tokens = ["low", "er</w>"]
tokens = tokenizer.tokenize(text)
self.assertList... | Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt | test_full_tokenizer | python | huggingface/transformers | tests/models/xlm/test_tokenization_xlm.py | https://github.com/huggingface/transformers/blob/master/tests/models/xlm/test_tokenization_xlm.py | Apache-2.0 |
def test_create_position_ids_respects_padding_index(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XLMRobert... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XLMRobertaXLEmbeddings.padding_idx + 1
| test_create_position_ids_respects_padding_index | python | huggingface/transformers | tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py | Apache-2.0 |
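The RoBERTa-style convention these regression tests check can be reproduced without any framework: real tokens are numbered starting from `padding_idx + 1`, while padding positions stay pinned at `padding_idx`. A minimal sketch (the helper name is illustrative, not the model's actual method):

```python
def create_position_ids(input_ids, padding_idx):
    # Real tokens get consecutive positions padding_idx + 1, padding_idx + 2, ...
    # Padding tokens keep position padding_idx, so the position embedding's
    # padding row is reused for them.
    position_ids, running = [], 0
    for tok in input_ids:
        if tok != padding_idx:
            running += 1
            position_ids.append(running + padding_idx)
        else:
            position_ids.append(padding_idx)
    return position_ids

print(create_position_ids([5, 7, 9, 1, 1], padding_idx=1))  # [2, 3, 4, 1, 1]
```

This is why the docstring says the first available non-padding position index is `padding_idx + 1`.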
def test_create_position_ids_from_inputs_embeds(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XLMRobertaXLE... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XLMRobertaXLEmbeddings.padding_idx + 1
| test_create_position_ids_from_inputs_embeds | python | huggingface/transformers | tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py | https://github.com/huggingface/transformers/blob/master/tests/models/xlm_roberta_xl/test_modeling_xlm_roberta_xl.py | Apache-2.0 |
def test_create_position_ids_respects_padding_index(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XmodEmbed... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XmodEmbeddings.padding_idx + 1
| test_create_position_ids_respects_padding_index | python | huggingface/transformers | tests/models/xmod/test_modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/tests/models/xmod/test_modeling_xmod.py | Apache-2.0 |
def test_create_position_ids_from_inputs_embeds(self):
"""This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XmodEmbedding... | This is a regression test for https://github.com/huggingface/transformers/issues/1761
The position ids should be masked with the embedding object's padding index. Therefore, the
first available non-padding position index is XmodEmbeddings.padding_idx + 1
| test_create_position_ids_from_inputs_embeds | python | huggingface/transformers | tests/models/xmod/test_modeling_xmod.py | https://github.com/huggingface/transformers/blob/master/tests/models/xmod/test_modeling_xmod.py | Apache-2.0 |
def get_expected_values(self, image_inputs, batched=False):
"""
This function computes the expected height and width when providing images to YolosImageProcessor,
assuming do_resize is set to True with a scalar size.
"""
if not batched:
image = image_inputs[0]
... |
This function computes the expected height and width when providing images to YolosImageProcessor,
assuming do_resize is set to True with a scalar size.
| get_expected_values | python | huggingface/transformers | tests/models/yolos/test_image_processing_yolos.py | https://github.com/huggingface/transformers/blob/master/tests/models/yolos/test_image_processing_yolos.py | Apache-2.0 |
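The DETR-family image processors these expected values mirror resize so that the shorter image edge hits a target size while preserving aspect ratio, optionally capped by a longest-edge limit. A hedged, framework-free sketch of that rule (the rounding behavior is an assumption; the real processors may round differently):

```python
def shortest_edge_resize(height, width, shortest_edge, longest_edge=None):
    # Scale so the shorter side equals shortest_edge; if that would push the
    # longer side past longest_edge, rescale to respect the cap instead.
    scale = shortest_edge / min(height, width)
    if longest_edge is not None and max(height, width) * scale > longest_edge:
        scale = longest_edge / max(height, width)
    return round(height * scale), round(width * scale)

print(shortest_edge_resize(400, 600, 800, 1333))   # (800, 1200)
print(shortest_edge_resize(400, 1000, 800, 1333))  # (533, 1333) -- capped by the long edge
```

The batched variant of the test helper just applies this rule per image and then pads every result to the largest height and width in the batch.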
def test_initialization(self):
r"""
Overriding the test_initialization test as the A_log and D params of the Mamba block are initialized differently
"""
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
configs_no_init = _config_zero_init(config)
... |
Overriding the test_initialization test as the A_log and D params of the Mamba block are initialized differently
| test_initialization | python | huggingface/transformers | tests/models/zamba/test_modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba/test_modeling_zamba.py | Apache-2.0 |
def test_attention_outputs(self):
r"""
Overriding the test_attention_outputs test as the Zamba model outputs attention only for its attention layers
"""
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.return_dict = True
seq_len = get... |
Overriding the test_attention_outputs test as the Zamba model outputs attention only for its attention layers
| test_attention_outputs | python | huggingface/transformers | tests/models/zamba/test_modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba/test_modeling_zamba.py | Apache-2.0 |
def test_left_padding_compatibility(self):
r"""
Overriding the test_left_padding_compatibility test as the mamba layers accentuate the numerical differences
caused by the left padding discussed in the issue in the note. Using a more permissive tolerance value.
"""
import inspect
... |
Overriding the test_left_padding_compatibility test as the mamba layers accentuate the numerical differences
caused by the left padding discussed in the issue in the note. Using a more permissive tolerance value.
| test_left_padding_compatibility | python | huggingface/transformers | tests/models/zamba/test_modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba/test_modeling_zamba.py | Apache-2.0 |
def test_flash_attn_2_fp32_ln(self):
r"""
Overriding the test_flash_attn_2_fp32_ln test as the Zamba model, like Mixtral, doesn't support
right padding + use cache with FA2
"""
for model_class in self.all_generative_model_classes:
config, inputs_dict = self.model_test... |
Overriding the test_flash_attn_2_fp32_ln test as the Zamba model, like Mixtral, doesn't support
right padding + use cache with FA2
| test_flash_attn_2_fp32_ln | python | huggingface/transformers | tests/models/zamba/test_modeling_zamba.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba/test_modeling_zamba.py | Apache-2.0 |
def test_past_key_values_format(self):
"""
Overwriting to pass the expected cache shapes (Zamba2 has cache shape = [batch_size, 0] for mamba layers)
"""
config, inputs = self.model_tester.prepare_config_and_inputs_for_common()
batch_size, seq_length = inputs["input_ids"].shape
... |
Overwriting to pass the expected cache shapes (Zamba2 has cache shape = [batch_size, 0] for mamba layers)
| test_past_key_values_format | python | huggingface/transformers | tests/models/zamba2/test_modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba2/test_modeling_zamba2.py | Apache-2.0 |
def test_initialization(self):
r"""
Overriding the test_initialization test as the A_log and D params of the Mamba block are initialized differently
"""
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
configs_no_init = _config_zero_init(config)
... |
Overriding the test_initialization test as the A_log and D params of the Mamba block are initialized differently
| test_initialization | python | huggingface/transformers | tests/models/zamba2/test_modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba2/test_modeling_zamba2.py | Apache-2.0 |
def test_attention_outputs(self):
r"""
Overriding the test_attention_outputs test as the Zamba2 model outputs attention only for its attention layers
"""
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.return_dict = True
seq_len = ge... |
Overriding the test_attention_outputs test as the Zamba2 model outputs attention only for its attention layers
| test_attention_outputs | python | huggingface/transformers | tests/models/zamba2/test_modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba2/test_modeling_zamba2.py | Apache-2.0 |
def test_left_padding_compatibility(self):
r"""
Overriding the test_left_padding_compatibility test as the mamba layers accentuate the numerical differences
caused by the left padding discussed in the issue in the note. Using a more permissive tolerance value.
"""
import inspect
... |
Overriding the test_left_padding_compatibility test as the mamba layers accentuate the numerical differences
caused by the left padding discussed in the issue in the note. Using a more permissive tolerance value.
| test_left_padding_compatibility | python | huggingface/transformers | tests/models/zamba2/test_modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba2/test_modeling_zamba2.py | Apache-2.0 |
def test_flash_attn_2_fp32_ln(self):
r"""
Overriding the test_flash_attn_2_fp32_ln test as the Zamba2 model, like Mixtral, doesn't support
right padding + use cache with FA2
"""
for model_class in self.all_generative_model_classes:
config, inputs_dict = self.model_tes... |
Overriding the test_flash_attn_2_fp32_ln test as the Zamba2 model, like Mixtral, doesn't support
right padding + use cache with FA2
| test_flash_attn_2_fp32_ln | python | huggingface/transformers | tests/models/zamba2/test_modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba2/test_modeling_zamba2.py | Apache-2.0 |
def test_flex_attention_with_grads(self):
"""
Overwriting as the base hidden size is big enough for compile.
Manipulation of dims causes issues due to other constraints not being satisfied anymore.
"""
for model_class in self.all_model_classes:
config, inputs_dict = s... |
Overwriting as the base hidden size is big enough for compile.
Manipulation of dims causes issues due to other constraints not being satisfied anymore.
| test_flex_attention_with_grads | python | huggingface/transformers | tests/models/zamba2/test_modeling_zamba2.py | https://github.com/huggingface/transformers/blob/master/tests/models/zamba2/test_modeling_zamba2.py | Apache-2.0 |
def _check_lora_correctly_converted(self, model):
"""
Utility method to check if the model has correctly adapters injected on it.
"""
from peft.tuners.tuners_utils import BaseTunerLayer
is_peft_loaded = False
for _, m in model.named_modules():
if isinstance(... |
Utility method to check if the model has correctly adapters injected on it.
| _check_lora_correctly_converted | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_from_pretrained(self):
"""
Simple test that tests the basic usage of PEFT model through `from_pretrained`.
This checks if we pass a remote folder that contains an adapter config and adapter weights, it
should correctly load a model that has adapters injected on it.
... |
Simple test that tests the basic usage of PEFT model through `from_pretrained`.
This checks if we pass a remote folder that contains an adapter config and adapter weights, it
should correctly load a model that has adapters injected on it.
| test_peft_from_pretrained | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_state_dict(self):
"""
Simple test that checks if the returned state dict of `get_adapter_state_dict()` method contains
the expected keys.
"""
for model_id in self.peft_test_model_ids:
for transformers_class in self.transformers_test_model_classes:
... |
Simple test that checks if the returned state dict of `get_adapter_state_dict()` method contains
the expected keys.
| test_peft_state_dict | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_save_pretrained(self):
"""
Test that checks various combinations of `save_pretrained` with a model that has adapters loaded
on it. This checks if the saved model contains the expected files (adapter weights and adapter config).
"""
for model_id in self.peft_test_mod... |
Test that checks various combinations of `save_pretrained` with a model that has adapters loaded
on it. This checks if the saved model contains the expected files (adapter weights and adapter config).
| test_peft_save_pretrained | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_enable_disable_adapters(self):
"""
A test that checks if `enable_adapters` and `disable_adapters` methods work as expected.
"""
from peft import LoraConfig
dummy_input = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]]).to(torch_device)
for model_id in self.tra... |
A test that checks if `enable_adapters` and `disable_adapters` methods work as expected.
| test_peft_enable_disable_adapters | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_add_adapter(self):
"""
Simple test that tests if `add_adapter` works as expected
"""
from peft import LoraConfig
for model_id in self.transformers_test_model_ids:
for transformers_class in self.transformers_test_model_classes:
model = tr... |
Simple test that tests if `add_adapter` works as expected
| test_peft_add_adapter | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_add_adapter_from_pretrained(self):
"""
Simple test that tests if `add_adapter` works as expected
"""
from peft import LoraConfig
for model_id in self.transformers_test_model_ids:
for transformers_class in self.transformers_test_model_classes:
... |
Simple test that tests if `add_adapter` works as expected
| test_peft_add_adapter_from_pretrained | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_add_adapter_modules_to_save(self):
"""
Simple test that tests if `add_adapter` works as expected when training with
modules to save.
"""
from peft import LoraConfig
from peft.utils import ModulesToSaveWrapper
for model_id in self.transformers_test_m... |
Simple test that tests if `add_adapter` works as expected when training with
modules to save.
| test_peft_add_adapter_modules_to_save | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
def test_peft_add_adapter_training_gradient_checkpointing(self):
"""
Simple test that checks that `add_adapter` works as expected when training with
gradient checkpointing.
"""
from peft import LoraConfig
for model_id in self.transformers_test_model_ids:
for tra... |
Simple test that checks that `add_adapter` works as expected when training with
gradient checkpointing.
| test_peft_add_adapter_training_gradient_checkpointing | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
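Gradient checkpointing itself is a torch feature; the memory trade-off behind it can be loosely sketched stdlib-only (all names here are assumptions for illustration): with checkpointing on, intermediate activations are not stored and would instead be recomputed during the backward pass.

```python
def run_layers(layers, x, gradient_checkpointing=False):
    """Toy sketch of the memory trade-off: with checkpointing on, intermediate
    activations are dropped (to be recomputed in backward) instead of stored."""
    stored = []
    for layer in layers:
        x = layer(x)
        if not gradient_checkpointing:
            stored.append(x)  # normal mode keeps every activation for backward
    return x, stored


layers = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
out_plain, stored_plain = run_layers(layers, 3)
out_ckpt, stored_ckpt = run_layers(layers, 3, gradient_checkpointing=True)
# Same output either way; checkpointing only changes what is kept in memory.
```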
def test_peft_add_multi_adapter(self):
"""
Simple test of the basic usage of a PEFT model through `from_pretrained`, checking that
`add_adapter` works as expected in a multi-adapter setting.
"""
from peft import LoraConfig
from peft.tuners.tuners_utils import BaseT... |
Simple test of the basic usage of a PEFT model through `from_pretrained`, checking that
`add_adapter` works as expected in a multi-adapter setting.
| test_peft_add_multi_adapter | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
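The multi-adapter flow (several named adapters on one model, with `set_adapter` choosing which one is active) can be sketched with a stdlib-only toy; the scalar "scale" standing in for an adapter's weights is an illustrative assumption:

```python
class ToyMultiAdapterModel:
    """Toy sketch of multi-adapter bookkeeping: several named adapters live on
    the model, and set_adapter picks which one affects the forward pass."""

    def __init__(self):
        self._adapters = {}
        self.active_adapter = None

    def add_adapter(self, scale, adapter_name):
        self._adapters[adapter_name] = scale
        self.active_adapter = adapter_name

    def set_adapter(self, adapter_name):
        if adapter_name not in self._adapters:
            raise ValueError(f"Unknown adapter {adapter_name!r}")
        self.active_adapter = adapter_name

    def forward(self, x):
        return x * self._adapters[self.active_adapter]


model = ToyMultiAdapterModel()
model.add_adapter(2.0, "adapter_1")
model.add_adapter(3.0, "adapter_2")
model.set_adapter("adapter_1")
out_1 = model.forward(5.0)  # 5.0 * 2.0 = 10.0
model.set_adapter("adapter_2")
out_2 = model.forward(5.0)  # 5.0 * 3.0 = 15.0
```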
def test_delete_adapter(self):
"""
Enhanced test for `delete_adapter` to handle multiple adapters,
edge cases, and proper error handling.
"""
from peft import LoraConfig
for model_id in self.transformers_test_model_ids:
for transformers_class in self.transfor... |
Enhanced test for `delete_adapter` to handle multiple adapters,
edge cases, and proper error handling.
| test_delete_adapter | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
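The behaviours the docstring lists for `delete_adapter` (multiple adapters, edge cases, error handling) can be sketched with a stdlib-only toy; the fallback-to-remaining-adapter rule shown here is an assumption for illustration, not necessarily PEFT's exact policy:

```python
class ToyAdapterStore:
    """Toy sketch of delete_adapter bookkeeping: removing an adapter by name,
    falling back to a remaining adapter, and rejecting unknown names."""

    def __init__(self):
        self._adapters = {}
        self.active_adapter = None

    def add_adapter(self, name, config):
        self._adapters[name] = config
        self.active_adapter = name

    def delete_adapter(self, name):
        if name not in self._adapters:
            raise ValueError(f"Adapter {name!r} not found")
        del self._adapters[name]
        if self.active_adapter == name:
            # fall back to any remaining adapter, or None if the store is empty
            self.active_adapter = next(iter(self._adapters), None)


store = ToyAdapterStore()
store.add_adapter("adapter_a", {})
store.add_adapter("adapter_b", {})
store.delete_adapter("adapter_b")   # active adapter falls back to "adapter_a"

try:
    store.delete_adapter("missing")
    unknown_rejected = False
except ValueError:
    unknown_rejected = True
```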
def test_peft_from_pretrained_kwargs(self):
"""
Simple test of the basic usage of a PEFT model through `from_pretrained` with additional kwargs,
checking that the integration behaves as expected.
"""
for model_id in self.peft_test_model_ids:
for transformers_class in s... |
Simple test of the basic usage of a PEFT model through `from_pretrained` with additional kwargs,
checking that the integration behaves as expected.
| test_peft_from_pretrained_kwargs | python | huggingface/transformers | tests/peft_integration/test_peft_integration.py | https://github.com/huggingface/transformers/blob/master/tests/peft_integration/test_peft_integration.py | Apache-2.0 |
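The kwargs pass-through that this test exercises can be sketched with a stdlib-only toy loader; the function name, defaults, and the made-up model id are all illustrative assumptions, not transformers' real `from_pretrained` signature:

```python
def toy_from_pretrained(model_id, **kwargs):
    """Toy sketch of kwargs forwarding in a from_pretrained-style loader:
    caller-supplied kwargs override the loader's defaults."""
    defaults = {"torch_dtype": "float32", "device_map": None}
    defaults.update(kwargs)
    return {"model_id": model_id, **defaults}


# "some-org/tiny-model" is a made-up identifier used only for illustration.
loaded = toy_from_pretrained("some-org/tiny-model", torch_dtype="float16")
```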