Thai Llama quantized to 4-bit (llama.cpp - GGML): thai-q4_0.bin
Thai Llama quantized to 4-bit (GPTQ): llama7b-4bit-128g.pt

Llama-2 with OTG v1.0.0 quantized to 4-bit (llama.cpp - GGML): llama-2-otg-ggml-q4_0.bin
