QuantFactory/Qwen2.5-7B-Instruct-kowiki-qa-GGUF
This is a quantized version of beomi/Qwen2.5-7B-Instruct-kowiki-qa, created using llama.cpp.
Hardware compatibility
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
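
The GGUF files can be run locally with llama.cpp. A minimal sketch, assuming the 4-bit file follows the usual `Q4_K_M` naming convention (check the repository's file list for the exact filename):

```shell
# Download the 4-bit quantized GGUF from the Hub
# (the --include pattern and local filename are assumptions;
# verify against the actual files in the repo)
huggingface-cli download QuantFactory/Qwen2.5-7B-Instruct-kowiki-qa-GGUF \
  --include "*Q4_K_M*" --local-dir ./models

# Run it with llama.cpp's CLI
./llama-cli -m ./models/Qwen2.5-7B-Instruct-kowiki-qa.Q4_K_M.gguf \
  -p "서울에 대해 간단히 설명해 줘." -n 256
```

Lower-bit variants (2-bit, 3-bit) trade quality for memory; the 8-bit file is closest to the original model.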