ZeroGPU-LLM-Inference / quantize_to_awq_colab.ipynb
Alikestocode — Fix AWQModifier: use quantization_config with num_bits (commit 022b2da)