# IQuest-Coder-V1-40B-Instruct-GGUF

## Description
This repository contains GGUF format model files for IQuestLab's IQuest-Coder-V1-40B-Instruct.
IQuest-Coder-V1 is a state-of-the-art coding model built on a "code-flow" training paradigm. It captures the dynamic evolution of software logic, delivering exceptional performance on benchmarks like SWE-Bench Verified (81.4%) and BigCodeBench. This model natively supports a 128K context window.
## Evaluation Results

As reported for the base model, IQuest-Coder-V1 reaches 81.4% on SWE-Bench Verified and performs strongly on BigCodeBench; see the IQuestLab/IQuest-Coder-V1-40B-Instruct model card for the full benchmark tables.
Files & Quantization
To see the available files, please verify the Files and versions tab.
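If you only need a single quant, you can fetch one file at a time; a minimal sketch using the huggingface_hub CLI, assuming the Q4_K_M filename used in the examples below:

```bash
# Download just the Q4_K_M file from this repository into the current directory.
huggingface-cli download AaryanK/IQuest-Coder-V1-40B-Instruct-GGUF \
  IQuest-Coder-V1-40B-Instruct.Q4_K_M.gguf \
  --local-dir .
```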
## How to Run (llama.cpp)
**Recommended Parameters:** For coding tasks, a lower temperature is generally recommended for precision.

- Temperature: `0.2` for code generation, or `0.7` for creative explanations
- Context: set with `-c`; adjust based on your VRAM/RAM (the model supports up to 128K)
### CLI Example

```bash
./llama-cli -m IQuest-Coder-V1-40B-Instruct.Q4_K_M.gguf \
  -c 8192 \
  --temp 0.2 \
  -p "Write a Python function to calculate the Fibonacci sequence using dynamic programming." \
  -cnv
```

With `-cnv`, llama-cli runs in conversation mode and applies the model's built-in chat template, so the prompt does not need hand-written "User:"/"Assistant:" markers.
### Server Example

```bash
./llama-server -m IQuest-Coder-V1-40B-Instruct.Q4_K_M.gguf \
  --port 8080 \
  --host 0.0.0.0 \
  -c 16384 \
  -ngl 99
```
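`-ngl 99` offloads all model layers to the GPU; lower the value if you run short on VRAM. Once running, llama-server exposes an OpenAI-compatible API; a minimal sketch of a chat request against the port configured above:

```bash
# Send a chat completion request to the local llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a Python function to calculate the Fibonacci sequence using dynamic programming."}
    ],
    "temperature": 0.2
  }'
```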