How to use from Docker Model Runner

```
docker model run hf.co/congqi/test-GGUF:Q6_K
```
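The single `run` command above can also be broken into separate steps with Docker Model Runner's other subcommands. A minimal sketch (the model tag is taken from above; the prompt text is purely illustrative):

```shell
# Pull the GGUF model from Hugging Face without starting it
docker model pull hf.co/congqi/test-GGUF:Q6_K

# Confirm the model is available locally
docker model list

# One-shot prompt (illustrative); omit the prompt to start an interactive chat
docker model run hf.co/congqi/test-GGUF:Q6_K "Say hello in one sentence."
```

These commands require Docker Desktop (or Docker Engine) with the Model Runner feature enabled.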

Downloads last month: 18

Format: GGUF
Model size: 73B params
Architecture: qwen2

Quantization: 6-bit (Q6_K)

Inference Providers: this model isn't deployed by any Inference Provider.