Remove model.safetensors - use pytorch_model.bin with INT8 weights (29acac0, verified) - Omdano committed on Oct 5, 2025
Add back TorchAO INT8 quantization_config for proper loading (1209faf, verified) - Omdano committed on Oct 5, 2025
Remove quantization_config to avoid BitsAndBytes imports (c7f451a, verified) - Omdano committed on Oct 5, 2025
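The second commit above adds a `quantization_config` back to config.json so that transformers routes loading through TorchAO rather than importing BitsAndBytes. A minimal sketch of what such a block might look like, assuming the `quant_method`/`quant_type` keys that transformers uses for TorchAO-quantized checkpoints (the exact values in this repo's config.json are not shown in the log):

```python
# Hypothetical sketch of the TorchAO quantization_config block referenced
# by commit 1209faf. Key names follow transformers' TorchAO convention;
# they are assumptions, not copied from this repository's config.json.
quantization_config = {
    "quant_method": "torchao",        # tells transformers to use TorchAO, not bitsandbytes
    "quant_type": "int8_weight_only", # matches the INT8 weights in pytorch_model.bin
}

# With quant_method set to "torchao", transformers should not attempt the
# BitsAndBytes import path that the earlier commit (c7f451a) worked around.
print(quantization_config["quant_method"])
```

Removing the block entirely (as in c7f451a) avoids any quantization imports at load time, but then nothing tells the loader the weights are INT8, which is presumably why 1209faf restored it in TorchAO form.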