GGUF
How to use from Docker Model Runner
docker model run hf.co/aless2212/CodeLlama-7b-Python-GGUF:
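The tag after the colon selects which quantized GGUF file Docker Model Runner pulls. A minimal sketch, assuming the repository publishes a 4-bit file under the tag Q4_K_M (the exact tag names are not listed on this page, so check the repository's file list):

docker model run hf.co/aless2212/CodeLlama-7b-Python-GGUF:Q4_K_M "Write a Python function that reverses a string."

On first use the model is pulled and cached locally; running the command without a prompt starts an interactive chat session instead.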
Format: GGUF
Model size: 7B params
Architecture: llama

Quantizations: 4-bit, 5-bit, 6-bit