q4f16_1 converted model (4-bit quantized) from Llama-2-ko-7b-Chat
This repository contains a 4-bit quantized (q4f16_1) model converted with MLC-LLM, using the weights from kfkas/Llama-2-ko-7b-Chat.
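A minimal usage sketch with MLC-LLM's Python `ChatModule` API is shown below. It assumes the `mlc_chat` package is installed, that the converted weights and compiled model library from this repository are placed under a local `dist/` directory, and that the model name `Llama-2-ko-7b-Chat-q4f16_1` matches that directory layout; these paths and names are illustrative and may differ in your setup.

```python
# Illustrative sketch: running the q4f16_1-quantized model with MLC-LLM's ChatModule.
# Assumes weights are in dist/Llama-2-ko-7b-Chat-q4f16_1/params and the compiled
# model library is discoverable by mlc_chat; adjust names/paths to your environment.
from mlc_chat import ChatModule

# Load the quantized model (model name/path is an assumption for this example).
cm = ChatModule(model="Llama-2-ko-7b-Chat-q4f16_1")

# Generate a response to a Korean prompt.
output = cm.generate(prompt="안녕하세요, 자기소개를 해주세요.")
print(output)
```

Depending on your MLC-LLM version, the model can also be served or chatted with from the command line (for example via the `mlc_chat` CLI); consult the MLC-LLM documentation for the exact invocation for your release.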