| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def test_cpu_accelerator_disk_loading_custom_device_map(self):
r"""
A test to check that dispatching a model on cpu & gpu works correctly using a custom `device_map`.
This time we also add `disk` on the device_map.
"""
device_map = {
"transformer.word_embeddings": 0,
... |
A test to check that dispatching a model on cpu & gpu works correctly using a custom `device_map`.
This time we also add `disk` on the device_map.
| test_cpu_accelerator_disk_loading_custom_device_map | python | huggingface/transformers | tests/quantization/bnb/test_mixed_int8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/bnb/test_mixed_int8.py | Apache-2.0 |
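The truncated `device_map` above mixes GPU, CPU, and disk placements. A minimal sketch of what such a map can look like (module names follow a bloom-style layout and are illustrative; `0` means GPU 0):

```python
# Hypothetical custom device_map mixing GPU (0), "cpu", and "disk" targets.
# Module names are illustrative, not taken verbatim from the test.
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": "disk",
}

# Every placement target must be a device index, "cpu", or "disk".
assert all(v in (0, "cpu", "disk") for v in device_map.values())
```

Loading with such a map requires passing `offload_folder` so the "disk" modules have somewhere to spill.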
def test_cpu_accelerator_disk_loading_custom_device_map_kwargs(self):
r"""
A test to check that dispatching a model on cpu & gpu works correctly using a custom `device_map`.
This time we also add `disk` on the device_map - using the kwargs directly instead of the quantization config
"""
... |
A test to check that dispatching a model on cpu & gpu works correctly using a custom `device_map`.
This time we also add `disk` on the device_map - using the kwargs directly instead of the quantization config
| test_cpu_accelerator_disk_loading_custom_device_map_kwargs | python | huggingface/transformers | tests/quantization/bnb/test_mixed_int8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/bnb/test_mixed_int8.py | Apache-2.0 |
def test_int8_from_pretrained(self):
r"""
Test whether loading an 8-bit model from the Hub works as expected
"""
from bitsandbytes.nn import Int8Params
model_id = "ybelkada/gpt2-xl-8bit"
model = AutoModelForCausalLM.from_pretrained(model_id)
linear = get_some_lin... |
Test whether loading an 8-bit model from the Hub works as expected
| test_int8_from_pretrained | python | huggingface/transformers | tests/quantization/bnb/test_mixed_int8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/bnb/test_mixed_int8.py | Apache-2.0 |
def test_int8_from_pretrained(self):
r"""
Test whether loading an 8-bit model from the Hub works as expected
"""
from bitsandbytes.nn import Int8Params
model_id = "Jiqing/TinyLlama-1.1B-Chat-v1.0-bnb-8bit"
model = AutoModelForCausalLM.from_pretrained(model_id)
li... |
Test whether loading an 8-bit model from the Hub works as expected
| test_int8_from_pretrained | python | huggingface/transformers | tests/quantization/bnb/test_mixed_int8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/bnb/test_mixed_int8.py | Apache-2.0 |
def test_compressed_uncompressed_model_shapes(self):
"""
Verify that the weights of an uncompressed model and its decompressed compressed counterpart match.
Note: Weights for sparsely compressed models may differ due to packing.
"""
def _has_nested_attr(obj, attr_path):
... |
Verify that the weights of an uncompressed model and its decompressed compressed counterpart match.
Note: Weights for sparsely compressed models may differ due to packing.
| test_compressed_uncompressed_model_shapes | python | huggingface/transformers | tests/quantization/compressed_tensors_integration/test_compressed_models.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/compressed_tensors_integration/test_compressed_models.py | Apache-2.0 |
def test_outputs_match(self):
"""
Ensure that the generated outputs match between the uncompressed model
and its decompressed compressed counterpart.
"""
tokenizer = AutoTokenizer.from_pretrained(self.sparse_uncompressed_model)
input_ids = tokenizer(self.prompt, return_te... |
Ensure that the generated outputs match between the uncompressed model
and its decompressed compressed counterpart.
| test_outputs_match | python | huggingface/transformers | tests/quantization/compressed_tensors_integration/test_compressed_models.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/compressed_tensors_integration/test_compressed_models.py | Apache-2.0 |
def test_no_warnings_for_all_models(self):
"""
Confirm that loading any model using compressed tensors does not trigger
warnings about missing or unexpected keys.
"""
for model_stub in self.model_stubs:
with self.subTest(model_stub=model_stub):
with wa... |
Confirm that loading any model using compressed tensors does not trigger
warnings about missing or unexpected keys.
| test_no_warnings_for_all_models | python | huggingface/transformers | tests/quantization/compressed_tensors_integration/test_compressed_models.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/compressed_tensors_integration/test_compressed_models.py | Apache-2.0 |
def test_run_compressed_outputs_match(self):
"""Check that run_compressed=True/False output are the same"""
from transformers import AutoTokenizer
from transformers.utils.quantization_config import CompressedTensorsConfig
quantization_config = CompressedTensorsConfig(run_compressed=Fal... | Check that run_compressed=True/False outputs are the same | test_run_compressed_outputs_match | python | huggingface/transformers | tests/quantization/compressed_tensors_integration/test_compressed_models.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/compressed_tensors_integration/test_compressed_models.py | Apache-2.0 |
def test_to_dict(self):
"""
Simple test that checks that converting a config to a dict yields a dict matching the config object
"""
quantization_config = EetqConfig()
config_to_dict = quantization_config.to_dict()
for key in config_to_dict:
self... |
Simple test that checks that converting a config to a dict yields a dict matching the config object
| test_to_dict | python | huggingface/transformers | tests/quantization/eetq_integration/test_eetq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/eetq_integration/test_eetq.py | Apache-2.0 |
def test_from_dict(self):
"""
Simple test that checks that building a config object from a dict yields a config matching the dict
"""
dict = {"modules_to_not_convert": ["lm_head.weight"], "quant_method": "eetq", "weights": "int8"}
quantization_config = EetqCo... |
Simple test that checks that building a config object from a dict yields a config matching the dict
| test_from_dict | python | huggingface/transformers | tests/quantization/eetq_integration/test_eetq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/eetq_integration/test_eetq.py | Apache-2.0 |
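The two tests above exercise a config ↔ dict round-trip. A stand-alone sketch of the pattern, using a minimal dataclass as a stand-in for the real `EetqConfig` (the field names mirror the dict in the test; the class itself is hypothetical, not the transformers implementation):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ToyQuantConfig:
    # Stand-in for a quantization config; field names mirror the test's dict.
    modules_to_not_convert: list = field(default_factory=lambda: ["lm_head.weight"])
    quant_method: str = "eetq"
    weights: str = "int8"

    def to_dict(self):
        return asdict(self)

    @classmethod
    def from_dict(cls, d):
        return cls(**d)

config = ToyQuantConfig()
d = config.to_dict()
# Round-trip: every key in the dict matches the attribute on the rebuilt config.
rebuilt = ToyQuantConfig.from_dict(d)
assert all(getattr(rebuilt, k) == v for k, v in d.items())
```

The real tests iterate over `config_to_dict` keys and compare each against `getattr(quantization_config, key)` in the same way.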
def test_quantized_model_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly
"""
from eetq import EetqLinear
from transformers.integrations import replace_with_eetq_linear
model_id = "facebook/opt-350m"
config = AutoC... |
Simple test that checks if the quantized model has been converted properly
| test_quantized_model_conversion | python | huggingface/transformers | tests/quantization/eetq_integration/test_eetq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/eetq_integration/test_eetq.py | Apache-2.0 |
def test_quantized_model(self):
"""
Simple test that checks if the quantized model is working properly
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens)
... |
Simple test that checks if the quantized model is working properly
| test_quantized_model | python | huggingface/transformers | tests/quantization/eetq_integration/test_eetq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/eetq_integration/test_eetq.py | Apache-2.0 |
def test_save_pretrained(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelForCausalLM... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained | python | huggingface/transformers | tests/quantization/eetq_integration/test_eetq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/eetq_integration/test_eetq.py | Apache-2.0 |
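The save/reload tests all follow one skeleton: write into a `TemporaryDirectory`, load back, and re-check behavior. A stdlib-only sketch of that skeleton (the JSON config file is a stand-in for the real serialized weights and config; the two functions are hypothetical, not the transformers API):

```python
import json
import os
import tempfile

def save_pretrained(config: dict, directory: str) -> None:
    # Stand-in for model.save_pretrained(): persist the config as JSON.
    with open(os.path.join(directory, "config.json"), "w") as f:
        json.dump(config, f)

def from_pretrained(directory: str) -> dict:
    # Stand-in for AutoModelForCausalLM.from_pretrained(): read it back.
    with open(os.path.join(directory, "config.json")) as f:
        return json.load(f)

config = {"quant_method": "eetq", "weights": "int8"}
with tempfile.TemporaryDirectory() as tmpdirname:
    save_pretrained(config, tmpdirname)
    reloaded = from_pretrained(tmpdirname)

# The reloaded artifact must match what was saved.
assert reloaded == config
```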
def test_quantized_model_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly with multiple GPUs
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
... |
Simple test that checks if the quantized model is working properly with multiple GPUs
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
| test_quantized_model_multi_gpu | python | huggingface/transformers | tests/quantization/eetq_integration/test_eetq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/eetq_integration/test_eetq.py | Apache-2.0 |
def test_to_dict(self):
"""
Simple test that checks that converting a config to a dict yields a dict matching the config object
"""
quantization_config = FbgemmFp8Config()
config_to_dict = quantization_config.to_dict()
for key in config_to_dict:
... |
Simple test that checks that converting a config to a dict yields a dict matching the config object
| test_to_dict | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_from_dict(self):
"""
Simple test that checks that building a config object from a dict yields a config matching the dict
"""
dict = {"modules_to_not_convert": ["lm_head.weight"], "quant_method": "fbgemm_fp8"}
quantization_config = FbgemmFp8Config.fro... |
Simple test that checks that building a config object from a dict yields a config matching the dict
| test_from_dict | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_quantized_model_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly
"""
from transformers.integrations import FbgemmFp8Linear, replace_with_fbgemm_fp8_linear
model_id = "facebook/opt-350m"
config = AutoConfig.from_pr... |
Simple test that checks if the quantized model has been converted properly
| test_quantized_model_conversion | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_quantized_model(self):
"""
Simple test that checks if the quantized model is working properly
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens)
... |
Simple test that checks if the quantized model is working properly
| test_quantized_model | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_save_pretrained(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelForCausalLM... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_change_loading_attributes(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
quantization_confi... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_change_loading_attributes | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_quantized_model_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly with multiple GPUs
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
... |
Simple test that checks if the quantized model is working properly with multiple GPUs
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
| test_quantized_model_multi_gpu | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_quantized_model_offload(self):
"""
Simple test that checks that loading the quantized model with cpu/disk offload raises an error
"""
quantization_config = FbgemmFp8Config()
with self.assertRaisesRegex(
ValueError, "You are attempting to load an FP8 mo... |
Simple test that checks that loading the quantized model with cpu/disk offload raises an error
| test_quantized_model_offload | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_save_pretrained_offload(self):
"""
Simple test that checks if the saved quantized model is working properly with cpu/disk offload
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
input_ids = self.tokenizer... |
Simple test that checks if the saved quantized model is working properly with cpu/disk offload
| test_save_pretrained_offload | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_save_pretrained_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelF... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained_multi_gpu | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_linear_with_diff_feature_size_preserves_shape(self):
"""
Test that FbgemmFp8Linear generates the correct shape when in_features != out_features.
"""
from transformers.integrations import FbgemmFp8Linear
with init_empty_weights(include_buffers=True):
linear =... |
Test that FbgemmFp8Linear generates the correct shape when in_features != out_features.
| test_linear_with_diff_feature_size_preserves_shape | python | huggingface/transformers | tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/fbgemm_fp8/test_fbgemm_fp8.py | Apache-2.0 |
def test_to_dict(self):
"""
Simple test that checks that converting a config to a dict yields a dict matching the config object
"""
quantization_config = FineGrainedFP8Config()
config_to_dict = quantization_config.to_dict()
for key in config_to_dict:
... |
Simple test that checks that converting a config to a dict yields a dict matching the config object
| test_to_dict | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_from_dict(self):
"""
Simple test that checks that building a config object from a dict yields a config matching the dict
"""
dict = {"modules_to_not_convert": ["lm_head.weight"], "quant_method": "fp8"}
quantization_config = FineGrainedFP8Config.from_... |
Simple test that checks that building a config object from a dict yields a config matching the dict
| test_from_dict | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_quantized_model_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly
"""
from transformers.integrations import FP8Linear, replace_with_fp8_linear
model_id = "facebook/opt-350m"
config = AutoConfig.from_pretrained(mode... |
Simple test that checks if the quantized model has been converted properly
| test_quantized_model_conversion | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_quantized_model(self):
"""
Simple test that checks if the quantized model is working properly
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(self.device_map)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens,... |
Simple test that checks if the quantized model is working properly
| test_quantized_model | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_save_pretrained(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelForCausalLM... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_weight_and_weight_scale_inv(self):
"""
Simple test that checks if the weight and weight_scale_inv are working properly
"""
weight = self.quantized_model.model.layers[0].self_attn.q_proj.weight
weight_scale_inv = self.quantized_model.model.layers[0].self_attn.q_proj.weigh... |
Simple test that checks if the weight and weight_scale_inv are working properly
| test_weight_and_weight_scale_inv | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_block_size(self):
"""
Simple test that checks if the block size is working properly
"""
self.assertEqual(self.quantized_model.config.quantization_config.weight_block_size, (128, 128))
quantization_config = FineGrainedFP8Config(weight_block_size=(32, 32))
quantize... |
Simple test that checks if the block size is working properly
| test_block_size | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
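With `weight_block_size=(128, 128)`, each weight matrix is split into 128×128 tiles with one inverse scale per tile, so `weight_scale_inv` carries one entry per block. A sketch of the block-count arithmetic (pure Python; ceil-division handles shapes that are not exact multiples of the block size):

```python
import math

def scale_grid_shape(weight_shape, block_size=(128, 128)):
    # One inverse scale per (block_rows x block_cols) tile of the weight.
    rows, cols = weight_shape
    br, bc = block_size
    return (math.ceil(rows / br), math.ceil(cols / bc))

# A 256x512 weight with 128x128 blocks carries a 2x4 grid of scales.
assert scale_grid_shape((256, 512)) == (2, 4)
# Smaller blocks mean more scales for the same weight.
assert scale_grid_shape((256, 512), block_size=(32, 32)) == (8, 16)
```

This is why shrinking the block size, as the test does with `(32, 32)`, trades memory for finer-grained scaling.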
def test_quantized_model_multi_accelerator(self):
"""
Simple test that checks if the quantized model is working properly with multiple accelerators
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs; or set ZE_AFFINITY_MASK=0,1 if you
have more than 2 XPUs.
"""
inp... |
Simple test that checks if the quantized model is working properly with multiple accelerators
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs; or set ZE_AFFINITY_MASK=0,1 if you
have more than 2 XPUs.
| test_quantized_model_multi_accelerator | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_save_pretrained_multi_accelerators(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = A... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained_multi_accelerators | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_quantized_model_offload(self):
"""
Simple test that checks that loading the quantized model with cpu/disk offload raises an error
"""
with self.assertRaisesRegex(
ValueError, "You are attempting to load an FP8 model with a device_map that contains a cpu/disk de... |
Simple test that checks that loading the quantized model with cpu/disk offload raises an error
| test_quantized_model_offload | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_save_pretrained_offload(self):
"""
Simple test that checks if the saved quantized model is working properly with cpu/disk offload
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
input_ids = self.tokenizer... |
Simple test that checks if the saved quantized model is working properly with cpu/disk offload
| test_save_pretrained_offload | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_linear_with_diff_feature_size_preserves_shape(self):
"""
Test that FP8Linear generates the correct shape when in_features != out_features.
"""
from transformers.integrations import FP8Linear
linear = FP8Linear(128, 256, block_size=(128, 128), device=self.device)
... |
Test that FP8Linear generates the correct shape when in_features != out_features.
| test_linear_with_diff_feature_size_preserves_shape | python | huggingface/transformers | tests/quantization/finegrained_fp8/test_fp8.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/finegrained_fp8/test_fp8.py | Apache-2.0 |
def test_memory_footprint(self):
r"""
A simple test to check if the model conversion has been done correctly by checking the
memory footprint of the converted model
"""
mem_quantized = self.quantized_model.get_memory_footprint()
self.assertAlmostEqual(self.mem_fp16 /... |
A simple test to check if the model conversion has been done correctly by checking the
memory footprint of the converted model
| test_memory_footprint | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_device_and_dtype_assignment(self):
r"""
Test whether trying to cast (or assigning a device to) a model after quantization will throw an error.
Also checks that other models are cast correctly.
"""
# This should work
if self.device_map in (None, "cpu"):
... |
Test whether trying to cast (or assigning a device to) a model after quantization will throw an error.
Also checks that other models are cast correctly.
| test_device_and_dtype_assignment | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
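The cast-protection test expects `.to(...)` with a new dtype to fail on a quantized model. A toy sketch of that guard (the class and its check are hypothetical stand-ins, not the transformers implementation):

```python
class ToyQuantizedModel:
    # Hypothetical model that refuses dtype casts after quantization.
    is_quantized = True

    def to(self, target):
        if self.is_quantized and target in ("float16", "float32"):
            raise ValueError(
                "Casting a quantized model to a new dtype is not supported."
            )
        # Moving between devices is allowed; return self like nn.Module.to().
        return self

model = ToyQuantizedModel()
try:
    model.to("float16")
    raised = False
except ValueError:
    raised = True
assert raised  # the dtype cast must be rejected
```

Device moves still succeed, mirroring the "This should work" branch of the real test.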
def test_original_dtype(self):
r"""
A simple test to check if the model successfully stores the original dtype
"""
self.assertTrue(hasattr(self.quantized_model.config, "_pre_quantization_dtype"))
self.assertFalse(hasattr(self.model_fp16.config, "_pre_quantization_dtype"))
... |
A simple test to check if the model successfully stores the original dtype
| test_original_dtype | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_quantized_layers_class(self):
"""
Simple test to check if the model conversion has been done correctly by checking
the class type of the linear layers of the converted models
"""
if is_gptqmodel_available():
from gptqmodel.utils.importer import hf_select_q... |
Simple test to check if the model conversion has been done correctly by checking
the class type of the linear layers of the converted models
| test_quantized_layers_class | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def check_inference_correctness(self, model):
r"""
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So w... |
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we'll generate a few tokens (5-10) and check their output.
... | check_inference_correctness | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_generate_quality(self):
"""
Simple test to check the quality of the model by comparing the generated tokens with the expected tokens
"""
if self.device_map is None:
self.check_inference_correctness(self.quantized_model.to(0))
else:
if self.device_... |
Simple test to check the quality of the model by comparing the generated tokens with the expected tokens
| test_generate_quality | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_serialization(self):
"""
Test the serialization of the model and the loading of the quantized weights works
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
if is_auto_gptq_available() and not is_gptqm... |
Test the serialization of the model and the loading of the quantized weights works
| test_serialization | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_serialization_big_model_inference(self):
"""
Test the serialization of the model and the loading of the quantized weights with big model inference
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
devic... |
Test the serialization of the model and the loading of the quantized weights with big model inference
| test_serialization_big_model_inference | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_change_loading_attributes(self):
"""
Test the serialization of the model and the loading of the quantized weights works with another config file
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
if is_a... |
Test the serialization of the model and the loading of the quantized weights works with another config file
| test_change_loading_attributes | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def check_inference_correctness(self, model):
"""
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we... |
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we'll generate a few tokens (5-10) and check their output.
... | check_inference_correctness | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_max_input_length(self):
"""
Test if max_input_length works. It modifies the maximum input length of the model running with the exllama backend.
"""
prompt = "I am in Paris and" * 1000
inp = self.tokenizer(prompt, return_tensors="pt").to(0)
self.assertTrue... |
Test if max_input_length works. It modifies the maximum input length of the model running with the exllama backend.
| test_max_input_length | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
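`max_input_length` caps how long a prompt the exllama backend will accept, so the test feeds an oversized prompt and expects a failure. A toy sketch of that length guard (the function, default, and error text are hypothetical, not the exllama implementation):

```python
def check_input_length(input_ids, max_input_length=4096):
    # Hypothetical guard: exllama-style kernels pre-allocate buffers sized
    # for max_input_length tokens, so longer prompts must be rejected.
    if len(input_ids) > max_input_length:
        raise RuntimeError(
            f"input has {len(input_ids)} tokens, max is {max_input_length}"
        )
    return input_ids

short = list(range(100))
assert check_input_length(short) == short  # within the limit: passes through

try:
    check_input_length(list(range(5000)))
    raised = False
except RuntimeError:
    raised = True
assert raised  # over the limit: rejected
```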
def check_inference_correctness(self, model):
"""
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we... |
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we'll generate a few tokens (5-10) and check their output.
... | check_inference_correctness | python | huggingface/transformers | tests/quantization/gptq/test_gptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/gptq/test_gptq.py | Apache-2.0 |
def test_to_dict(self):
"""
Simple test that checks that converting a config to a dict yields a dict matching the config object
"""
quantization_config = HiggsConfig()
config_to_dict = quantization_config.to_dict()
for key in config_to_dict:
sel... |
Simple test that checks that converting a config to a dict yields a dict matching the config object
| test_to_dict | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
def test_from_dict(self):
"""
Simple test that checks that building a config object from a dict yields a config matching the dict
"""
dict = {"modules_to_not_convert": ["embed_tokens", "lm_head"], "quant_method": "higgs"}
quantization_config = HiggsConfig.fro... |
Simple test that checks that building a config object from a dict yields a config matching the dict
| test_from_dict | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
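The to_dict and from_dict rows above both check a round-trip property. A minimal sketch of that pattern with a hypothetical dataclass config (a stand-in, not the actual HiggsConfig API):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ToyQuantConfig:
    # Hypothetical minimal config mirroring the tested pattern.
    quant_method: str = "higgs"
    modules_to_not_convert: list = field(default_factory=lambda: ["lm_head"])

    def to_dict(self):
        return asdict(self)

    @classmethod
    def from_dict(cls, d):
        return cls(**d)

config = ToyQuantConfig()
d = config.to_dict()
# to_dict: every key in the dict matches the attribute on the config object
for key in d:
    assert d[key] == getattr(config, key)
# from_dict: rebuilding from the dict yields an equal config object
assert ToyQuantConfig.from_dict(d) == config
```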
def test_quantized_model_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly
"""
from transformers.integrations import HiggsLinear, replace_with_higgs_linear
model_id = "facebook/opt-350m"
config = AutoConfig.from_pretrained(... |
Simple test that checks if the quantized model has been converted properly
| test_quantized_model_conversion | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
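The test_quantized_model_conversion rows replace `nn.Linear` layers and then count how many were converted. A framework-free sketch of that counting logic, with toy `Linear`/`QuantLinear`/`Module` classes standing in for torch and the integration helpers (all names here are illustrative):

```python
class Linear:
    """Stand-in for torch.nn.Linear."""

class QuantLinear:
    """Stand-in for a quantized replacement such as HiggsLinear."""

class Module:
    """Tiny container mimicking a nested nn.Module tree."""
    def __init__(self, **children):
        self.children = children

    def named_modules(self):
        for name, child in self.children.items():
            yield name, child
            if isinstance(child, Module):
                for sub_name, sub in child.named_modules():
                    yield f"{name}.{sub_name}", sub

def replace_with_quant_linear(module, modules_to_not_convert=()):
    # Swap every Linear child for QuantLinear unless its name is excluded,
    # recursing into nested modules -- the shape of replace_with_*_linear.
    for name, child in module.children.items():
        if isinstance(child, Linear) and name not in modules_to_not_convert:
            module.children[name] = QuantLinear()
        elif isinstance(child, Module):
            replace_with_quant_linear(child, modules_to_not_convert)

def count(model, cls):
    return sum(isinstance(m, cls) for _, m in model.named_modules())

model = Module(
    embed=Linear(),
    layer0=Module(q_proj=Linear(), k_proj=Linear()),
    lm_head=Linear(),
)
nb_linears = count(model, Linear)  # 4 linear layers before conversion
replace_with_quant_linear(model, modules_to_not_convert=("lm_head",))
assert count(model, QuantLinear) == nb_linears - 1  # all but lm_head converted
assert isinstance(model.children["lm_head"], Linear)
```

Comparing the before/after counts, rather than inspecting layers one by one, is the same bookkeeping the real tests use.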
def test_quantized_model(self):
"""
Simple test that checks if the quantized model is working properly
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens)
... |
Simple test that checks if the quantized model is working properly
| test_quantized_model | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
def test_save_pretrained(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelForCausalLM... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
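The save_pretrained rows exercise a save-then-reload round trip inside a temporary directory. A stdlib-only sketch of the pattern, with a plain dict standing in for model state (`save_pretrained`/`from_pretrained` here are hypothetical stand-ins, not the transformers methods):

```python
import json
import os
import tempfile

def save_pretrained(state, save_dir):
    # Hypothetical stand-in: real models write a config plus weight shards.
    with open(os.path.join(save_dir, "state.json"), "w") as f:
        json.dump(state, f)

def from_pretrained(save_dir):
    with open(os.path.join(save_dir, "state.json")) as f:
        return json.load(f)

state = {"quant_method": "higgs", "weights": [0.1, -0.2, 0.3]}
with tempfile.TemporaryDirectory() as tmpdirname:
    save_pretrained(state, tmpdirname)
    reloaded = from_pretrained(tmpdirname)

# The reloaded state must match the original exactly
assert reloaded == state
```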
def test_quantized_model_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly with multiple GPUs
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
... |
Simple test that checks if the quantized model is working properly with multiple GPUs
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 GPUs
| test_quantized_model_multi_gpu | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
def test_save_pretrained_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelF... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained_multi_gpu | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
def test_dequantize(self):
"""
Test the ability to dequantize a model
"""
self.quantized_model.dequantize()
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_... |
Test the ability to dequantize a model
| test_dequantize | python | huggingface/transformers | tests/quantization/higgs/test_higgs.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/higgs/test_higgs.py | Apache-2.0 |
def test_to_dict(self):
"""
Makes sure the config format is properly set
"""
quantization_config = HqqConfig()
hqq_orig_config = quantization_config.to_dict()
self.assertEqual(quantization_config.quant_config, hqq_orig_config["quant_config"]) |
Makes sure the config format is properly set
| test_to_dict | python | huggingface/transformers | tests/quantization/hqq/test_hqq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/hqq/test_hqq.py | Apache-2.0 |
def test_fp16_quantized_model_multipgpu(self):
"""
Simple LLM model testing fp16 with multi-gpu
"""
quant_config = HqqConfig(nbits=8, group_size=64)
hqq_runner = HQQLLMRunner(
model_id=MODEL_ID, quant_config=quant_config, compute_dtype=torch.float16, device="auto"
... |
Simple LLM model testing fp16 with multi-gpu
| test_fp16_quantized_model_multipgpu | python | huggingface/transformers | tests/quantization/hqq/test_hqq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/hqq/test_hqq.py | Apache-2.0 |
def test_fp16_quantized_model(self):
"""
Simple LLM model testing fp16 with bias
"""
quant_config = HqqConfig(nbits=8, group_size=64)
hqq_runner = HQQLLMRunner(
model_id="facebook/opt-125m", quant_config=quant_config, compute_dtype=torch.float16, device=torch_device
... |
Simple LLM model testing fp16 with bias
| test_fp16_quantized_model | python | huggingface/transformers | tests/quantization/hqq/test_hqq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/hqq/test_hqq.py | Apache-2.0 |
def test_save_and_load_quantized_model(self):
"""
Test saving and loading a quantized model with bias
"""
import tempfile
quant_config = HqqConfig(nbits=8, group_size=64)
hqq_runner = HQQLLMRunner(
model_id="facebook/opt-125m", quant_config=quant_config, com... |
Test saving and loading a quantized model with bias
| test_save_and_load_quantized_model | python | huggingface/transformers | tests/quantization/hqq/test_hqq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/hqq/test_hqq.py | Apache-2.0 |
def test_model_serialization(self):
"""
Simple HQQ LLM save/load test
"""
quant_config = HqqConfig(nbits=4, group_size=64)
hqq_runner = HQQLLMRunner(
model_id=MODEL_ID, quant_config=quant_config, compute_dtype=torch.float16, device=torch_device
)
inp... |
Simple HQQ LLM save/load test
| test_model_serialization | python | huggingface/transformers | tests/quantization/hqq/test_hqq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/hqq/test_hqq.py | Apache-2.0 |
def test_model_serialization_dynamic_quant_with_skip(self):
"""
Simple HQQ LLM save/load test with dynamic quant
"""
q4_config = {"nbits": 4, "group_size": 64}
q3_config = {"nbits": 3, "group_size": 64}
quant_config = HqqConfig(
dynamic_config={
... |
Simple HQQ LLM save/load test with dynamic quant
| test_model_serialization_dynamic_quant_with_skip | python | huggingface/transformers | tests/quantization/hqq/test_hqq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/hqq/test_hqq.py | Apache-2.0 |
def test_weight_only_quantization_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly when using weight only quantization
"""
# Try with weight only quantization
quantization_config = QuantoConfig(weights="int8", activations=None)
... |
Simple test that checks if the quantized model has been converted properly when using weight only quantization
| test_weight_only_quantization_conversion | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def test_weight_and_activation_quantization_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly when using weight + activation quantization
"""
# Try with weight + activation quantization
quantization_config = QuantoConfig(weights="in... |
Simple test that checks if the quantized model has been converted properly when using weight + activation quantization
| test_weight_and_activation_quantization_conversion | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def test_conversion_with_modules_to_not_convert(self):
"""
Simple test that checks if the quantized model has been converted properly when specifying modules_to_not_convert argument
"""
# Try with weight + activation quantization
quantization_config = QuantoConfig(weights="int8... |
Simple test that checks if the quantized model has been converted properly when specifying modules_to_not_convert argument
| test_conversion_with_modules_to_not_convert | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def check_inference_correctness(self, model, device):
r"""
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GP... |
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we'll generate a few tokens (5-10) and check their output.
... | check_inference_correctness | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def test_serialization_bin(self):
"""
Test the serialization, the loading and the inference of the quantized weights
"""
with tempfile.TemporaryDirectory() as tmpdirname:
with self.assertRaises(ValueError) as e:
self.quantized_model.save_pretrained(tmpdirname,... |
Test the serialization, the loading and the inference of the quantized weights
| test_serialization_bin | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def test_serialization_safetensors(self):
"""
Test the serialization, the loading and the inference of the quantized weights
"""
with tempfile.TemporaryDirectory() as tmpdirname:
with self.assertRaises(ValueError) as e:
self.quantized_model.save_pretrained(tmp... |
Test the serialization, the loading and the inference of the quantized weights
| test_serialization_safetensors | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def test_check_offload_quantized(self):
"""
We check that we have unquantized values on the CPU and on disk
"""
from optimum.quanto import QBitsTensor, QTensor
cpu_weights = self.quantized_model.transformer.h[22].self_attention.query_key_value._hf_hook.weights_map[
... |
We check that we have unquantized values on the CPU and on disk
| test_check_offload_quantized | python | huggingface/transformers | tests/quantization/quanto_integration/test_quanto.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quanto_integration/test_quanto.py | Apache-2.0 |
def test_device_and_dtype_assignment(self):
r"""
Test whether trying to cast (or assigning a device to) a model after quantization will throw an error.
Also checks whether other models are cast correctly.
"""
# This should work
if self.device_map is None:
_ = se... |
Test whether trying to cast (or assigning a device to) a model after quantization will throw an error.
Also checks whether other models are cast correctly.
| test_device_and_dtype_assignment | python | huggingface/transformers | tests/quantization/quark_integration/test_quark.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quark_integration/test_quark.py | Apache-2.0 |
def test_original_dtype(self):
r"""
A simple test to check if the model successfully stores the original dtype
"""
self.assertTrue(hasattr(self.quantized_model.config, "_pre_quantization_dtype"))
self.assertFalse(hasattr(self.model_fp16.config, "_pre_quantization_dtype"))
... |
A simple test to check if the model successfully stores the original dtype
| test_original_dtype | python | huggingface/transformers | tests/quantization/quark_integration/test_quark.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quark_integration/test_quark.py | Apache-2.0 |
def check_inference_correctness(self, model):
r"""
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So w... |
Test the generation quality of the quantized model and see that we are matching the expected output.
Given that we are operating on small numbers + the testing model is relatively small, we might not get
the same output across GPUs. So we'll generate a few tokens (5-10) and check their output.
... | check_inference_correctness | python | huggingface/transformers | tests/quantization/quark_integration/test_quark.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quark_integration/test_quark.py | Apache-2.0 |
def test_generate_quality(self):
"""
Simple test to check the quality of the model by comparing the generated tokens with the expected tokens
"""
if self.device_map is None:
self.check_inference_correctness(self.quantized_model.to(0))
else:
self.check_infe... |
Simple test to check the quality of the model by comparing the generated tokens with the expected tokens
| test_generate_quality | python | huggingface/transformers | tests/quantization/quark_integration/test_quark.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/quark_integration/test_quark.py | Apache-2.0 |
def test_to_dict(self):
"""
Simple test that checks that converting a config to a dict produces a dict matching the config object
"""
quantization_config = SpQRConfig()
config_to_dict = quantization_config.to_dict()
for key in config_to_dict:
self... |
Simple test that checks that converting a config to a dict produces a dict matching the config object
| test_to_dict | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_from_dict(self):
"""
Simple test that checks that converting a dict to a config object produces a config matching the dict
"""
dict = {
"beta1": 16,
"beta2": 16,
"bits": 3,
"modules_to_not_convert": ["lm_head.wei... |
Simple test that checks that converting a dict to a config object produces a config matching the dict
| test_from_dict | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_quantized_model_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly
"""
from spqr_quant import QuantizedLinear
from transformers.integrations import replace_with_spqr_linear
model_id = "meta-llama/Llama-2-7b-hf"
... |
Simple test that checks if the quantized model has been converted properly
| test_quantized_model_conversion | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_quantized_model(self):
"""
Simple test that checks if the quantized model is working properly
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens)
... |
Simple test that checks if the quantized model is working properly
| test_quantized_model | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_save_pretrained(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelForCausalLM.... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_quantized_model_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly with multiple GPUs
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
quantized_model = AutoModelForCausalLM.from_pretrained(self.m... |
Simple test that checks if the quantized model is working properly with multiple GPUs
| test_quantized_model_multi_gpu | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_quantized_model_compile(self):
"""
Simple test that checks if the quantized model is working properly
"""
# Sample tokens greedily
def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
logits = model(
cur_token,... |
Simple test that checks if the quantized model is working properly
| test_quantized_model_compile | python | huggingface/transformers | tests/quantization/spqr_integration/test_spqr.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/spqr_integration/test_spqr.py | Apache-2.0 |
def test_to_dict(self):
"""
Makes sure the config format is properly set
"""
quantization_config = TorchAoConfig("int4_weight_only")
torchao_orig_config = quantization_config.to_dict()
for key in torchao_orig_config:
self.assertEqual(getattr(quantization_conf... |
Makes sure the config format is properly set
| test_to_dict | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def test_json_serializable(self):
"""
Check that the config dict can be JSON serialized.
"""
quantization_config = TorchAoConfig("int4_weight_only", group_size=32, layout=TensorCoreTiledLayout())
d = quantization_config.to_dict()
self.assertIsInstance(d["quant_type_kwargs... |
Check that the config dict can be JSON serialized.
| test_json_serializable | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def test_int4wo_quant(self):
"""
Simple LLM model testing int4 weight only quantization
"""
quant_config = TorchAoConfig("int4_weight_only", **self.quant_scheme_kwargs)
# Note: we quantize the bfloat16 model on the fly to int4
quantized_model = AutoModelForCausalLM.from_... |
Simple LLM model testing int4 weight only quantization
| test_int4wo_quant | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def test_int4wo_quant_bfloat16_conversion(self):
"""
Testing the dtype of model will be modified to be bfloat16 for int4 weight only quantization
"""
quant_config = TorchAoConfig("int4_weight_only", **self.quant_scheme_kwargs)
# Note: we quantize the bfloat16 model on the fly to... |
Testing the dtype of model will be modified to be bfloat16 for int4 weight only quantization
| test_int4wo_quant_bfloat16_conversion | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def test_int4wo_offload(self):
"""
Simple test that checks if the quantized model int4 weight only is working properly with cpu/disk offload
"""
device_map_offload = {
"model.embed_tokens": 0,
"model.layers.0": 0,
"model.layers.1": 0,
"mod... |
Simple test that checks if the quantized model int4 weight only is working properly with cpu/disk offload
| test_int4wo_offload | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def test_int4wo_quant_multi_accelerator(self):
"""
Simple test that checks if the quantized model int4 weight only is working properly with multiple accelerators
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 CUDA GPUs
set ZE_AFFINITY_MASK=0,1 if you have more than 2 Intel XPUs
... |
Simple test that checks if the quantized model int4 weight only is working properly with multiple accelerators
set CUDA_VISIBLE_DEVICES=0,1 if you have more than 2 CUDA GPUs
set ZE_AFFINITY_MASK=0,1 if you have more than 2 Intel XPUs
| test_int4wo_quant_multi_accelerator | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def check_serialization_expected_output(self, device, expected_output):
"""
Test if we can serialize and load/infer the model again on the same device
"""
torch_dtype = torch.bfloat16 if self.quant_scheme == "int4_weight_only" else "auto"
with tempfile.TemporaryDirectory() as tmp... |
Test if we can serialize and load/infer the model again on the same device
| check_serialization_expected_output | python | huggingface/transformers | tests/quantization/torchao_integration/test_torchao.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/torchao_integration/test_torchao.py | Apache-2.0 |
def test_to_dict(self):
"""
Makes sure the config format is properly set
"""
quantization_config = VptqConfig()
vptq_orig_config = quantization_config.to_dict()
self.assertEqual(vptq_orig_config["quant_method"], quantization_config.quant_method) |
Makes sure the config format is properly set
| test_to_dict | python | huggingface/transformers | tests/quantization/vptq_integration/test_vptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/vptq_integration/test_vptq.py | Apache-2.0 |
def test_quantized_model(self):
"""
Simple test that checks if the quantized model is working properly
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
output = self.quantized_model.generate(**input_ids, max_new_tokens=self.max_new_tokens, do... |
Simple test that checks if the quantized model is working properly
| test_quantized_model | python | huggingface/transformers | tests/quantization/vptq_integration/test_vptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/vptq_integration/test_vptq.py | Apache-2.0 |
def test_save_pretrained(self):
"""
Simple test that checks if the quantized model is working properly after being saved and loaded
"""
with tempfile.TemporaryDirectory() as tmpdirname:
self.quantized_model.save_pretrained(tmpdirname)
model = AutoModelForCausalLM.... |
Simple test that checks if the quantized model is working properly after being saved and loaded
| test_save_pretrained | python | huggingface/transformers | tests/quantization/vptq_integration/test_vptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/vptq_integration/test_vptq.py | Apache-2.0 |
def test_quantized_model_multi_gpu(self):
"""
Simple test that checks if the quantized model is working properly with multiple GPUs
"""
input_ids = self.tokenizer(self.input_text, return_tensors="pt").to(torch_device)
quantized_model = AutoModelForCausalLM.from_pretrained(self.m... |
Simple test that checks if the quantized model is working properly with multiple GPUs
| test_quantized_model_multi_gpu | python | huggingface/transformers | tests/quantization/vptq_integration/test_vptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/vptq_integration/test_vptq.py | Apache-2.0 |
def test_quantized_model_conversion(self):
"""
Simple test that checks if the quantized model has been converted properly
"""
from vptq import VQuantLinear
from transformers.integrations import replace_with_vptq_linear
model_id = "facebook/opt-350m"
config = Aut... |
Simple test that checks if the quantized model has been converted properly
| test_quantized_model_conversion | python | huggingface/transformers | tests/quantization/vptq_integration/test_vptq.py | https://github.com/huggingface/transformers/blob/master/tests/quantization/vptq_integration/test_vptq.py | Apache-2.0 |
def create_tmp_repo(tmp_dir):
"""
Creates a mock repository in a temporary folder for testing.
"""
tmp_dir = Path(tmp_dir)
if tmp_dir.exists():
shutil.rmtree(tmp_dir)
tmp_dir.mkdir(exist_ok=True)
model_dir = tmp_dir / "src" / "transformers" / "models"
model_dir.mkdir(parents=Tru... |
Creates a mock repository in a temporary folder for testing.
| create_tmp_repo | python | huggingface/transformers | tests/repo_utils/test_check_copies.py | https://github.com/huggingface/transformers/blob/master/tests/repo_utils/test_check_copies.py | Apache-2.0 |
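create_tmp_repo above wipes and rebuilds a mock source tree before each test so every run starts clean. A minimal, self-contained sketch of the same idea (the file contents and the exact set of model folders are illustrative):

```python
import shutil
import tempfile
from pathlib import Path

def create_tmp_repo(tmp_dir, models=("bert",)):
    # Recreate the folder from scratch so each test starts from a clean tree.
    tmp_dir = Path(tmp_dir)
    if tmp_dir.exists():
        shutil.rmtree(tmp_dir)
    tmp_dir.mkdir()
    model_dir = tmp_dir / "src" / "transformers" / "models"
    model_dir.mkdir(parents=True)
    for model in models:
        (model_dir / model).mkdir()
        (model_dir / model / f"modeling_{model}.py").write_text("# mock model\n")
    return tmp_dir

with tempfile.TemporaryDirectory() as tmp:
    repo = create_tmp_repo(tmp, models=("bert", "gpt2"))
    assert (repo / "src" / "transformers" / "models" / "gpt2" / "modeling_gpt2.py").exists()
```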
def patch_transformer_repo_path(new_folder):
"""
Temporarily patches the variables defined in `check_copies` to use a different location for the repo.
"""
old_repo_path = check_copies.REPO_PATH
old_doc_path = check_copies.PATH_TO_DOCS
old_transformer_path = check_copies.TRANSFORMERS_PATH
rep... |
Temporarily patches the variables defined in `check_copies` to use a different location for the repo.
| patch_transformer_repo_path | python | huggingface/transformers | tests/repo_utils/test_check_copies.py | https://github.com/huggingface/transformers/blob/master/tests/repo_utils/test_check_copies.py | Apache-2.0 |
def create_tmp_repo(tmp_dir, models=None):
"""
Creates a repository in a temporary directory mimicking the structure of Transformers. Uses the list of models
provided (which defaults to just `["bert"]`).
"""
tmp_dir = Path(tmp_dir)
if tmp_dir.exists():
shutil.rmtree(tmp_dir)
tmp_dir.... |
Creates a repository in a temporary directory mimicking the structure of Transformers. Uses the list of models
provided (which defaults to just `["bert"]`).
| create_tmp_repo | python | huggingface/transformers | tests/repo_utils/test_tests_fetcher.py | https://github.com/huggingface/transformers/blob/master/tests/repo_utils/test_tests_fetcher.py | Apache-2.0 |
def patch_transformer_repo_path(new_folder):
"""
Temporarily patches the variables defined in `tests_fetcher` to use a different location for the repo.
"""
old_repo_path = tests_fetcher.PATH_TO_REPO
tests_fetcher.PATH_TO_REPO = Path(new_folder).resolve()
tests_fetcher.PATH_TO_EXAMPLES = tests_fe... |
Temporarily patches the variables defined in `tests_fetcher` to use a different location for the repo.
| patch_transformer_repo_path | python | huggingface/transformers | tests/repo_utils/test_tests_fetcher.py | https://github.com/huggingface/transformers/blob/master/tests/repo_utils/test_tests_fetcher.py | Apache-2.0 |
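Both patch_transformer_repo_path helpers above follow the same save, patch, yield, restore shape. A generic sketch of it with `contextlib` (`fake_module` and its attributes are illustrative stand-ins, not the real `check_copies` or `tests_fetcher` modules):

```python
from contextlib import contextmanager

class fake_module:
    # Stand-in for a module with path-like globals (e.g. tests_fetcher).
    PATH_TO_REPO = "/real/repo"
    PATH_TO_TESTS = "/real/repo/tests"

@contextmanager
def patch_repo_path(module, new_folder):
    # Save the old values, point them at new_folder, and restore on exit
    # even if the body raises.
    old_repo, old_tests = module.PATH_TO_REPO, module.PATH_TO_TESTS
    try:
        module.PATH_TO_REPO = new_folder
        module.PATH_TO_TESTS = f"{new_folder}/tests"
        yield
    finally:
        module.PATH_TO_REPO, module.PATH_TO_TESTS = old_repo, old_tests

with patch_repo_path(fake_module, "/tmp/mock"):
    assert fake_module.PATH_TO_REPO == "/tmp/mock"
assert fake_module.PATH_TO_REPO == "/real/repo"
```

Restoring inside `finally` is the important part: a failing test must not leak a patched path into the next one.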
def torchrun(self, script: str, is_torchrun: bool = True):
"""Run the `script` using `torchrun` command for multi-processing in a subprocess. Captures errors as necessary."""
with tempfile.NamedTemporaryFile(mode="w+", suffix=".py") as tmp:
tmp.write(script)
tmp.flush()
... | Run the `script` using `torchrun` command for multi-processing in a subprocess. Captures errors as necessary. | torchrun | python | huggingface/transformers | tests/tensor_parallel/test_tensor_parallel.py | https://github.com/huggingface/transformers/blob/master/tests/tensor_parallel/test_tensor_parallel.py | Apache-2.0 |
def test_probability_sum_error(self):
"""Test that the sum of mask_replace_prob and random_replace_prob exceeding 1 raises an error."""
tokenizer = BertTokenizer(self.vocab_file)
with self.assertRaises(ValueError):
DataCollatorForLanguageModeling(tokenizer=tokenizer, mask_replace_pro... | Test that the sum of mask_replace_prob and random_replace_prob exceeding 1 raises an error. | test_probability_sum_error | python | huggingface/transformers | tests/trainer/test_data_collator.py | https://github.com/huggingface/transformers/blob/master/tests/trainer/test_data_collator.py | Apache-2.0 |
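The collator test above expects a ValueError once the two replacement probabilities sum past 1. A sketch of the invariant it exercises, using a hypothetical minimal collator rather than the real DataCollatorForLanguageModeling:

```python
class ToyCollator:
    # Hypothetical stand-in mirroring the validated invariant: together,
    # mask_replace_prob and random_replace_prob cannot exceed 1.
    def __init__(self, mask_replace_prob=0.8, random_replace_prob=0.1):
        if mask_replace_prob + random_replace_prob > 1:
            raise ValueError(
                "mask_replace_prob + random_replace_prob must not exceed 1"
            )
        self.mask_replace_prob = mask_replace_prob
        self.random_replace_prob = random_replace_prob

ToyCollator(0.8, 0.1)  # valid: sums to 0.9
try:
    ToyCollator(0.9, 0.2)  # invalid: sums to 1.1
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```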
def test_load_backbone_from_config(self):
"""
Test that load_backbone correctly loads a backbone from a backbone config.
"""
config = MaskFormerConfig(backbone_config=ResNetConfig(out_indices=(0, 2)))
backbone = load_backbone(config)
self.assertEqual(backbone.out_features... |
Test that load_backbone correctly loads a backbone from a backbone config.
| test_load_backbone_from_config | python | huggingface/transformers | tests/utils/test_backbone_utils.py | https://github.com/huggingface/transformers/blob/master/tests/utils/test_backbone_utils.py | Apache-2.0 |