zwpride-iquestlab committed (verified)
Commit d479f88 · Parent(s): 67ce5a4

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -62,7 +62,7 @@ For the IQuest-Coder-V1-Thinking: We suggest using Temperature=1.0, TopP=0.95, T
 
 IQuest-Coder-V1 is a new family of code large language models (LLMs) designed to advance autonomous software engineering and code intelligence. Built on the innovative code-flow multi-stage training paradigm, IQuest-Coder-V1 captures the dynamic evolution of software logic, delivering state-of-the-art performance across critical dimensions:
 
-- **State-of-the-Art Performance**: Achieves leading results on SWE-Bench Verified, BigCodeBench, LiveCodeBench v6, and other major coding benchmarks, surpassing competitive models across agentic software engineering, competitive programming, and complex tool use.
+- **Performance**: Achieves leading results on SWE-Bench Verified (76.2%), BigCodeBench (49.9%), LiveCodeBench v6 (81.1%), and other major coding benchmarks, surpassing competitive models across agentic software engineering, competitive programming, and complex tool use.
 - **Code-Flow Training Paradigm**: Moving beyond static code representations, our models learn from repository evolution patterns, commit transitions, and dynamic code transformations to understand real-world software development processes.
 - **Dual Specialization Paths**: Bifurcated post-training delivers two specialized variants—Thinking models (utilizing reasoning-driven RL for complex problem-solving) and Instruct models (optimized for general coding assistance and instruction-following).
 - **Efficient Architecture**: The IQuest-Coder-V1-Loop variant introduces a recurrent mechanism that optimizes the trade-off between model capacity and deployment footprint. The 7B and 14B models adopt shallow architectures for faster inference speed.
@@ -206,7 +206,7 @@ claude --model IQuestCoder-V1-7B-Instruct
 | **BigCodeBench** | 0.0 | - |
 | **FullStackBench** | 0.0 | - |
 | **CruxEval** | 0.0 | - |
-| **LiveCodeBench** | 1.0 | 1.0 |
+| **LiveCodeBench** | 0.6 | 0.95 |
 | **Aider-Polyglot** | 0.95 | 0.85 |
 | **Mercury** | 0.2 | 0.85 |
 | **Bird** | 0.2 | 0.95 |
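The first hunk's context line references the decoding settings suggested for IQuest-Coder-V1-Thinking (Temperature=1.0, TopP=0.95). As a rough illustration of what those two knobs do during decoding, here is a minimal temperature plus nucleus (top-p) sampling sketch in plain Python; the function name and the toy logits are made up for the example, and this is not the model's actual decoding code.

```python
import math
import random

def sample_temperature_top_p(logits, temperature=1.0, top_p=0.95, rng=None):
    """Pick one token index from `logits` via temperature + top-p sampling.

    Illustrative sketch only: temperature rescales the logits before
    softmax, then top-p keeps the smallest set of tokens whose cumulative
    probability reaches `top_p` and samples from that set.
    """
    rng = rng or random.Random()
    # Temperature scaling: divide logits before the softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus: take tokens in descending probability until mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise over the kept set and draw one index.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With Temperature=1.0 the distribution is used as-is, and TopP=0.95 trims only the lowest-probability tail, so the settings favour diversity while discarding clearly implausible tokens.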