How to use from Docker Model Runner
docker model run hf.co/ngxson/wllama-split-models:
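The part after the colon is a tag selecting which GGUF file to run, and it should match one of the quantization variants listed further down. As a minimal sketch, assuming a hypothetical 4-bit tag named Q4_K_M (check the repository's file list for the actual tag names):

docker model run hf.co/ngxson/wllama-split-models:Q4_K_M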
GGUF
Model size: 3B params
Architecture: gemma2
Available quantizations
2-bit
3-bit
4-bit