datastreams/ds-4bit (GGUF)
How to use from Docker Model Runner
docker model run hf.co/datastreams/ds-4bit:Q4_0
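A slightly fuller sketch of the same workflow, assuming Docker Model Runner is enabled in Docker Desktop; the pull step and the one-shot prompt form follow the `docker model` CLI, and the prompt text here is purely illustrative:

# Download the 4-bit GGUF weights from Hugging Face (optional; run pulls the model on demand if it is not already local)
docker model pull hf.co/datastreams/ds-4bit:Q4_0

# One-shot generation: pass a prompt as the final argument (prompt is illustrative)
docker model run hf.co/datastreams/ds-4bit:Q4_0 "Summarize what the GGUF format is."

# With no prompt, run opens an interactive chat session; type /bye to leave it
docker model run hf.co/datastreams/ds-4bit:Q4_0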
Downloads last month: 5

Format: GGUF
Model size: 13B params
Architecture: gptneox
Quantization: 4-bit (Q4_0)

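Since GGUF is the file format consumed by llama.cpp, the same quantized weights can also be run directly with llama.cpp's CLI. A minimal sketch, assuming a recent llama.cpp build is on PATH and that the downloaded file is named ds-4bit.Q4_0.gguf (the filename here is a hypothetical placeholder):

# Load the local GGUF file and generate up to 128 tokens from a sample prompt
llama-cli -m ds-4bit.Q4_0.gguf -p "Hello, world" -n 128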