IQuest-Coder-V1-7B-Instruct-GGUF

GGUF quant collection for IQuestLab/IQuest-Coder-V1-7B-Instruct.

Included quants

  • IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf
  • IQuest-Coder-V1-7B-Instruct-Q6_K.gguf
  • IQuest-Coder-V1-7B-Instruct-Q8_0.gguf

SHA-256 checksums

  • db34a5f95f4f6051c3ee6595f764c4e13ba0c1e59ad29879cec57fae5d446c1f IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf
  • 3335a2de23107c42aee256b38be1e82a9a00027935f7eb972a0ab5f5bf1072d7 IQuest-Coder-V1-7B-Instruct-Q6_K.gguf
  • 145e39007993fdb8fff9ca6a9c4a6ac53ce208f8549b783ffdf20ade178c8c5b IQuest-Coder-V1-7B-Instruct-Q8_0.gguf
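After downloading, you can check a file against its listed hash before loading it. A minimal sketch (the helper name `sha256_file` and the chunked-read approach are illustrative, not part of this repo):

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MiB chunks,
    so multi-GB GGUF files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage against the list above:
# assert sha256_file("IQuest-Coder-V1-7B-Instruct-Q4_K_M.gguf") == \
#     "db34a5f95f4f6051c3ee6595f764c4e13ba0c1e59ad29879cec57fae5d446c1f"
```

This is equivalent to running `sha256sum <file>` and comparing the output by hand.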

Provenance and license

This repo redistributes quantized weights from IQuestLab/IQuest-Coder-V1-7B-Instruct and includes the upstream LICENSE file verbatim. Please follow upstream license terms, including the IQuest commercial UI attribution requirement.

Model details

  • Model size: 8B params
  • Architecture: llama
