# IQuest-Coder-V1-7B-Instruct-GGUF

A collection of GGUF quantizations of IQuestLab/IQuest-Coder-V1-7B-Instruct.
## Included quants

- IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf
- IQuest-Coder-V1-7B-Instruct-Q6_K.gguf
- IQuest-Coder-V1-7B-Instruct-Q8_0.gguf
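A minimal local-inference sketch, assuming a stock llama.cpp build and that the Q4_K_M file has been downloaded into the working directory (the prompt and token count are illustrative):

```sh
# Hypothetical invocation; -ngl offloads layers to the GPU,
# drop it for CPU-only inference.
./llama-cli -m IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf \
    -p "Write a Python function that reverses a linked list." \
    -n 512 -ngl 99
```

Q4_K_M is the usual starting point for limited VRAM; Q6_K and Q8_0 trade larger files for lower quantization loss.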
## Checksums (SHA-256)

| File | SHA-256 |
| --- | --- |
| IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf | `db34a5f95f4f6051c3ee6595f764c4e13ba0c1e59ad29879cec57fae5d446c1f` |
| IQuest-Coder-V1-7B-Instruct-Q6_K.gguf | `3335a2de23107c42aee256b38be1e82a9a00027935f7eb972a0ab5f5bf1072d7` |
| IQuest-Coder-V1-7B-Instruct-Q8_0.gguf | `145e39007993fdb8fff9ca6a9c4a6ac53ce208f8549b783ffdf20ade178c8c5b` |
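A small verification sketch, assuming the GGUF files sit in the current directory; it streams each file in chunks so the multi-gigabyte weights never need to fit in memory:

```python
import hashlib
from pathlib import Path

# SHA-256 checksums from the table above.
EXPECTED = {
    "IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf":
        "db34a5f95f4f6051c3ee6595f764c4e13ba0c1e59ad29879cec57fae5d446c1f",
    "IQuest-Coder-V1-7B-Instruct-Q6_K.gguf":
        "3335a2de23107c42aee256b38be1e82a9a00027935f7eb972a0ab5f5bf1072d7",
    "IQuest-Coder-V1-7B-Instruct-Q8_0.gguf":
        "145e39007993fdb8fff9ca6a9c4a6ac53ce208f8549b783ffdf20ade178c8c5b",
}

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file by streaming it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    for name, expected in EXPECTED.items():
        if not Path(name).exists():
            print(f"{name}: not downloaded, skipping")
            continue
        status = "OK" if sha256_of(name) == expected else "MISMATCH"
        print(f"{name}: {status}")
```

On Linux, `sha256sum <file>` gives the same digest without any Python.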
## Provenance
- Source model: https://huggingface.co/IQuestLab/IQuest-Coder-V1-7B-Instruct
- Converted and quantized with llama.cpp (`convert_hf_to_gguf.py` + `llama-quantize`).
- Metadata hygiene scan passed for local/personal identifiers before upload.
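The pipeline above can be sketched as follows, assuming the upstream model has already been downloaded to a local directory (paths and the F16 intermediate name are illustrative):

```sh
# Convert the Hugging Face checkpoint to a full-precision GGUF,
# then quantize it to each target type (Q4_K_M shown here).
python convert_hf_to_gguf.py ./IQuest-Coder-V1-7B-Instruct \
    --outfile IQuest-Coder-V1-7B-Instruct-F16.gguf
./llama-quantize IQuest-Coder-V1-7B-Instruct-F16.gguf \
    IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf Q4_K_M
```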
## License
This repo redistributes quantized weights from IQuestLab/IQuest-Coder-V1-7B-Instruct and includes the upstream LICENSE file verbatim.
Please follow upstream license terms, including the IQuest commercial UI attribution requirement.