Instructions to use IQuestLab/IQuest-Coder-V1-7B-Base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use IQuestLab/IQuest-Coder-V1-7B-Base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="IQuestLab/IQuest-Coder-V1-7B-Base",
    trust_remote_code=True,
)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "IQuestLab/IQuest-Coder-V1-7B-Base",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use IQuestLab/IQuest-Coder-V1-7B-Base with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "IQuestLab/IQuest-Coder-V1-7B-Base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IQuestLab/IQuest-Coder-V1-7B-Base",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/IQuestLab/IQuest-Coder-V1-7B-Base
```
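The curl call above can also be issued from Python with no extra dependencies. A minimal sketch using only the standard library, assuming the vLLM server from the previous step is listening on its default `localhost:8000`; the request is only actually sent when such a server is running, so that line is left commented out:

```python
# Build the same OpenAI-compatible chat-completions request that the
# curl command above sends, using only the Python standard library.
import json
import urllib.request

payload = {
    "model": "IQuestLab/IQuest-Coder-V1-7B-Base",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending requires the vLLM server to be up:
# response = json.loads(urllib.request.urlopen(req).read())
```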
- SGLang
How to use IQuestLab/IQuest-Coder-V1-7B-Base with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "IQuestLab/IQuest-Coder-V1-7B-Base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IQuestLab/IQuest-Coder-V1-7B-Base",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "IQuestLab/IQuest-Coder-V1-7B-Base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "IQuestLab/IQuest-Coder-V1-7B-Base",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use IQuestLab/IQuest-Coder-V1-7B-Base with Docker Model Runner:
```shell
docker model run hf.co/IQuestLab/IQuest-Coder-V1-7B-Base
```
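The vLLM and SGLang servers above both return the same OpenAI-compatible chat-completions format, so one parser works for either. A sketch of extracting the assistant's reply; the hard-coded sample response shows the shape only, and its content string is illustrative, not real model output:

```python
# Extract the assistant's reply from an OpenAI-compatible
# chat-completions response (shared by vLLM and SGLang).
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 8, "total_tokens": 17},
}

def first_reply(response: dict) -> str:
    # The reply text lives at choices[0].message.content.
    return response["choices"][0]["message"]["content"]

print(first_reply(sample_response))
```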
Update README.md

README.md (changed):

```diff
@@ -61,7 +61,7 @@ For the IQuest-Coder-V1-Thinking: We suggest using Temperature=1.0, TopP=0.95, T
 
 IQuest-Coder-V1 is a new family of code large language models (LLMs) designed to advance autonomous software engineering and code intelligence. Built on the innovative code-flow multi-stage training paradigm, IQuest-Coder-V1 captures the dynamic evolution of software logic, delivering state-of-the-art performance across critical dimensions:
 
-- **
+- **Performance**: Achieves leading results on SWE-Bench Verified (76.2%), BigCodeBench (49.9%), LiveCodeBench v6 (81.1%), and other major coding benchmarks, surpassing competitive models across agentic software engineering, competitive programming, and complex tool use.
 - **Code-Flow Training Paradigm**: Moving beyond static code representations, our models learn from repository evolution patterns, commit transitions, and dynamic code transformations to understand real-world software development processes.
 - **Dual Specialization Paths**: Bifurcated post-training delivers two specialized variants—Thinking models (utilizing reasoning-driven RL for complex problem-solving) and Instruct models (optimized for general coding assistance and instruction-following).
 - **Efficient Architecture**: The IQuest-Coder-V1-Loop variant introduces a recurrent mechanism that optimizes the trade-off between model capacity and deployment footprint. The 7B and 14B models adopt shallow architectures for faster inference speed.
@@ -205,7 +205,7 @@ claude --model IQuestCoder-V1-7B-Instruct
 | **BigCodeBench** | 0.0 | - |
 | **FullStackBench** | 0.0 | - |
 | **CruxEval** | 0.0 | - |
-| **LiveCodeBench** |
+| **LiveCodeBench** | 0.6 | 0.95 |
 | **Aider-Polyglot** | 0.95 | 0.85 |
 | **Mercury** | 0.2 | 0.85 |
 | **Bird** | 0.2 | 0.95 |
```