Quantizations of https://huggingface.co/mlabonne/EvolCodeLlama-7b
From the original readme:
This is a codellama/CodeLlama-7b-hf model fine-tuned using QLoRA (4-bit precision) on the mlabonne/Evol-Instruct-Python-1k dataset.
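The core idea behind the "4-bit precision" mentioned above can be illustrated with a minimal absmax quantization sketch. This is a simplification: QLoRA actually uses the NF4 data type with block-wise quantization, but the scale-and-round principle is the same. All function names here are illustrative, not part of any library.

```python
def quantize_absmax_4bit(weights):
    """Quantize floats to signed 4-bit codes in [-7, 7] via absmax scaling.

    Simplified sketch: real QLoRA uses the NF4 data type and quantizes
    in blocks; this shows only the core scale-and-round idea.
    """
    absmax = max(abs(w) for w in weights) or 1.0
    scale = absmax / 7.0                      # map [-absmax, absmax] -> [-7, 7]
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.7, 0.33, 0.06]
codes, scale = quantize_absmax_4bit(weights)
approx = dequantize(codes, scale)
# every code fits in 4 bits, and each recovered weight is within
# half a quantization step of the original
assert all(-7 <= c <= 7 for c in codes)
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

The storage win is that each weight needs 4 bits plus a shared scale per block, instead of 16 or 32 bits per weight.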
- Downloads last month
- 10
Hardware compatibility
- 1-bit
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
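A rough rule of thumb for sizing these quants against your hardware: a b-bit quantization of a 7B-parameter model takes about 7e9 × b / 8 bytes. This is a lower bound, since it ignores per-block quantization metadata (scales) and any layers kept at higher precision, so real GGUF files run somewhat larger.

```python
def approx_size_gib(n_params, bits):
    """Rough model size in GiB: parameters * bits / 8 bytes.

    Lower-bound estimate: ignores per-block scale overhead and
    mixed-precision layers, so actual GGUF files are a bit larger.
    """
    return n_params * bits / 8 / 2**30

for bits in (1, 2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gib(7e9, bits):.2f} GiB")
# 4-bit works out to ~3.26 GiB before overhead
```

Add headroom for the KV cache and runtime buffers when checking whether a quant fits in RAM or VRAM.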