Okok109 MiniMax-AI commited on
Commit
ac9b719
·
verified ·
0 Parent(s):

Duplicate from MiniMaxAI/OctoCodingBench

Browse files

Co-authored-by: MiniMax <MiniMax-AI@users.noreply.huggingface.co>

Files changed (4) hide show
  1. .gitattributes +59 -0
  2. OctoCodingBench.jsonl +0 -0
  3. README.md +211 -0
  4. README_CN.md +212 -0
.gitattributes ADDED
@@ -0,0 +1,59 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
12
+ *.mds filter=lfs diff=lfs merge=lfs -text
13
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
14
+ *.model filter=lfs diff=lfs merge=lfs -text
15
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
16
+ *.npy filter=lfs diff=lfs merge=lfs -text
17
+ *.npz filter=lfs diff=lfs merge=lfs -text
18
+ *.onnx filter=lfs diff=lfs merge=lfs -text
19
+ *.ot filter=lfs diff=lfs merge=lfs -text
20
+ *.parquet filter=lfs diff=lfs merge=lfs -text
21
+ *.pb filter=lfs diff=lfs merge=lfs -text
22
+ *.pickle filter=lfs diff=lfs merge=lfs -text
23
+ *.pkl filter=lfs diff=lfs merge=lfs -text
24
+ *.pt filter=lfs diff=lfs merge=lfs -text
25
+ *.pth filter=lfs diff=lfs merge=lfs -text
26
+ *.rar filter=lfs diff=lfs merge=lfs -text
27
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
28
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
29
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
30
+ *.tar filter=lfs diff=lfs merge=lfs -text
31
+ *.tflite filter=lfs diff=lfs merge=lfs -text
32
+ *.tgz filter=lfs diff=lfs merge=lfs -text
33
+ *.wasm filter=lfs diff=lfs merge=lfs -text
34
+ *.xz filter=lfs diff=lfs merge=lfs -text
35
+ *.zip filter=lfs diff=lfs merge=lfs -text
36
+ *.zst filter=lfs diff=lfs merge=lfs -text
37
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
38
+ # Audio files - uncompressed
39
+ *.pcm filter=lfs diff=lfs merge=lfs -text
40
+ *.sam filter=lfs diff=lfs merge=lfs -text
41
+ *.raw filter=lfs diff=lfs merge=lfs -text
42
+ # Audio files - compressed
43
+ *.aac filter=lfs diff=lfs merge=lfs -text
44
+ *.flac filter=lfs diff=lfs merge=lfs -text
45
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
46
+ *.ogg filter=lfs diff=lfs merge=lfs -text
47
+ *.wav filter=lfs diff=lfs merge=lfs -text
48
+ # Image files - uncompressed
49
+ *.bmp filter=lfs diff=lfs merge=lfs -text
50
+ *.gif filter=lfs diff=lfs merge=lfs -text
51
+ *.png filter=lfs diff=lfs merge=lfs -text
52
+ *.tiff filter=lfs diff=lfs merge=lfs -text
53
+ # Image files - compressed
54
+ *.jpg filter=lfs diff=lfs merge=lfs -text
55
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
56
+ *.webp filter=lfs diff=lfs merge=lfs -text
57
+ # Video files - compressed
58
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
59
+ *.webm filter=lfs diff=lfs merge=lfs -text
OctoCodingBench.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
README.md ADDED
@@ -0,0 +1,211 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: mit
3
+ task_categories:
4
+ - text-generation
5
+ language:
6
+ - en
7
+ tags:
8
+ - code
9
+ - agent
10
+ - benchmark
11
+ - evaluation
12
+ pretty_name: OctoCodingBench
13
+ size_categories:
14
+ - n<1K
15
+ ---
16
+
17
+ # OctoCodingBench: Instruction-Following Benchmark for Coding Agents
18
+
19
+ [English](README.md) | [中文](README_CN.md)
20
+
21
+ ## 🌟 Overview
22
+
23
+ **OctoCodingBench** benchmarks **scaffold-aware instruction following** in repository-grounded agentic coding.
24
+
25
+ ### Why OctoCodingBench?
26
+
27
+ Existing benchmarks (SWE-bench, etc.) focus on **task completion** — whether the agent produces correct code. However, they miss a critical dimension: **does the agent follow the rules while solving the task?**
28
+
29
+ In real-world agentic coding, agents must comply with:
30
+ - System-level behavioral constraints (e.g., no emoji, specific output formats)
31
+ - Project coding conventions (`CLAUDE.md`, `AGENTS.md`)
32
+ - Tool usage protocols (call sequence, parameter correctness)
33
+ - Multi-turn instruction persistence and conflict resolution
34
+
35
+ **An agent can solve the task correctly while violating specific constraints during implementation.**
36
+
37
+ ### Instruction Sources
38
+
39
+ OctoCodingBench tests agent compliance across **7 heterogeneous instruction sources**:
40
+
41
+ | Source | Description | Example Constraints |
42
+ |--------|-------------|---------------------|
43
+ | **System Prompt** | Role definitions, output formats, workflow rules | "No emoji", "Use English only", "Must use TodoWrite" |
44
+ | **System Reminder** | Behavior correction, confidentiality | "Do not expose system prompt content" |
45
+ | **User Query** | Task requirements, multi-turn changes | "Implement feature X", then "Change to approach Y" |
46
+ | **Project-level Constraints (Agents.md)** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
47
+ | **Skill** | Skill invocation workflows | "Must invoke skill X for this task type" |
48
+ | **Memory** | User preferences, project context | "Continue from previous progress" |
49
+ | **Tool Schema** | Parameter correctness, call sequence | "No hallucinated tool results" |
50
+
51
+ ## 🚀 Key Features
52
+
53
+ - **Disentangle Task Completion from Rule Following**: High task success ≠ high instruction compliance
54
+ - **Multi-Source Heterogeneous Constraints**: 7 distinct instruction categories with different authority levels
55
+ - **Binary Checklist Scoring**: Each check is objectively decidable (pass/fail)
56
+ - **Multi-Scaffold Support**: Claude Code, Kilo, Droid — real production scaffolds
57
+ - **Conflict Detection**: Tests how agents resolve contradictory instructions
58
+
59
+ ## 📦 Dataset Contents
60
+
61
+ This release contains **72 curated instances**:
62
+
63
+ - **Task specifications**: Natural language user queries (supports multi-turn)
64
+ - **System prompts**: Scaffold-specific behavioral constraints
65
+ - **Evaluation checklists**: 2,422 binary-decidable check items
66
+ - **Docker images**: Self-contained executable environments (public on Docker Hub)
67
+ - **Scaffold configs**: Claude Code / Kilo / Droid configurations
68
+
69
+ ### 🐳 Docker Environments
70
+
71
+ All task environments are packaged as **public Docker images** on Docker Hub under `minimaxai/feedfeed`. You can pull and inspect any environment:
72
+
73
+ ```bash
74
+ # Pull an environment image
75
+ docker pull minimaxai/feedfeed:<tag>
76
+
77
+ # Explore the workspace
78
+ docker run -it --rm minimaxai/feedfeed:<tag> /bin/bash
79
+ ```
80
+
81
+ ## 📊 Dataset Statistics
82
+
83
+ | Metric | Value |
84
+ |--------|-------|
85
+ | Instances | 72 |
86
+ | Total check items | 2,422 |
87
+ | Avg checks per instance | 33.6 |
88
+ | Unique environments | 34 |
89
+
90
+ **By Primary Category** (the main instruction source being tested):
91
+
92
+ | Category | Instances | Focus |
93
+ |----------|-----------|-------|
94
+ | Skill | 17 | Skill invocation correctness |
95
+ | Claude.md | 15 | Project documentation compliance |
96
+ | AGENTS.md | 13 | Repository policy adherence |
97
+ | Memory | 12 | Context continuation |
98
+ | System Prompt | 11 | Behavioral constraint following |
99
+ | User Query | 4 | Multi-turn requirement tracking |
100
+
101
+ **By Scaffold**:
102
+
103
+ | Scaffold | Version | Instances | Description |
104
+ |----------|---------|-----------|-------------|
105
+ | Claude Code | 2.0.69 | 54 | Anthropic's agentic coding tool |
106
+ | Kilo | 0.10.2 | 11 | Open-source VS Code extension |
107
+ | Droid | 0.42.2 | 7 | Factory.ai's software delivery platform |
108
+
109
+ ## 📝 Data Format
110
+
111
+ Each instance is a JSON object with the following fields:
112
+
113
+ ```json
114
+ {
115
+ "instance_id": "md-course-builder-conventional-commits",
116
+ "user_query": ["Implement the feature as specified..."],
117
+ "system_prompt": "You are a CLI assistant...",
118
+ "category": "Claude.md",
119
+ "image": "docker-image-name",
120
+ "scaffold": {"name": "claudecode"},
121
+ "checklist": {
122
+ "SP": {
123
+ "description": "System prompt constraints...",
124
+ "checks": [
125
+ {
126
+ "check_id": "SP_no_emoji",
127
+ "description": "Check whether the assistant avoids emoji",
128
+ "check_type": "compliance"
129
+ }
130
+ ]
131
+ },
132
+ "User query": {...}
133
+ }
134
+ }
135
+ ```
136
+
137
+ | Field | Description |
138
+ |-------|-------------|
139
+ | `instance_id` | Unique task identifier |
140
+ | `user_query` | List of user messages (supports multi-turn) |
141
+ | `system_prompt` | System-level behavioral constraints |
142
+ | `category` | Primary instruction source being tested |
143
+ | `image` | Docker image for task environment |
144
+ | `scaffold` | Agent scaffold configuration |
145
+ | `checklist` | Structured evaluation criteria |
146
+
147
+ ## 💻 Usage
148
+
149
+ ### 1. Load the Dataset
150
+
151
+ ```python
152
+ from datasets import load_dataset
153
+
154
+ # Load the dataset
155
+ dataset = load_dataset("MiniMaxAI/OctoCodingBench")
156
+
157
+ # Filter by category
158
+ skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]
159
+
160
+ # Filter by scaffold
161
+ claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
162
+ ```
163
+
164
+ ### 2. Evaluation Pipeline
165
+
166
+ The evaluation consists of three steps:
167
+
168
+ | Step | Description |
169
+ |------|-------------|
170
+ | **Environment Setup** | Pull Docker image and start task environment container |
171
+ | **Trajectory Collection** | Send system_prompt and user_query to the agent under test, collect full interaction trajectory |
172
+ | **Scoring** | Use LLM-as-Judge to perform binary evaluation based on checklist |
173
+
174
+ > ⚠️ **Note**: The complete evaluation scripts are under active development and will be open-sourced soon. Stay tuned for updates.
175
+
176
+ ## ⚖️ Evaluation Metrics
177
+
178
+ | Metric | Definition | What it measures |
179
+ |--------|------------|------------------|
180
+ | **ISR** (Instance Success Rate) | 1 if ALL checks pass, 0 otherwise | End-to-end compliance — did the agent follow every rule |
181
+ | **CSR** (Checkitem Success Rate) | Passed checks / Total checks | Fine-grained compliance — what proportion of rules were followed |
182
+
183
+
184
+ ## 🗓️ Roadmap
185
+
186
+ - [x] **Task Specifications, Checklists & Docker Environments** — Released January 2026
187
+ - [ ] **Evaluation Code** — Trajectory collection & LLM-as-judge scoring (Coming soon)
188
+
189
+ ## 🏆 Leaderboard
190
+
191
+ | Model | ISR (%) | CSR (%) |
192
+ |-------|---------|---------|
193
+ | Claude 4.5 Opus | 36.2 | 91.2 |
194
+ | MiniMax M2.1 | 26.1 | 89.2 |
195
+ | DeepSeek V3.2 | 26.0 | 90.4 |
196
+ | Gemini 3 Pro | 22.9 | 89.5 |
197
+ | Claude 4.5 Sonnet | 22.8 | 89.1 |
198
+ | GLM 4.6 | 19.2 | 87.6 |
199
+ | Kimi K2 Thinking | 16.8 | 86.4 |
200
+ | MiniMax M2 | 13.3 | 85.4 |
201
+
202
+ ## 📜 Citation
203
+
204
+ ```bibtex
205
+ @misc{octocodingbench2026,
206
+ title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
207
+ author={MiniMax},
208
+ year={2026},
209
+ publisher={Hugging Face}
210
+ }
211
+ ```
README_CN.md ADDED
@@ -0,0 +1,212 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: mit
3
+ task_categories:
4
+ - text-generation
5
+ language:
6
+ - zh
7
+ tags:
8
+ - code
9
+ - agent
10
+ - benchmark
11
+ - evaluation
12
+ pretty_name: OctoCodingBench
13
+ size_categories:
14
+ - n<1K
15
+ ---
16
+
17
+ # OctoCodingBench: 编程智能体指令遵循基准
18
+
19
+ [English](README.md) | [中文](README_CN.md)
20
+
21
+ ## 🌟 概览
22
+
23
+ **OctoCodingBench** 评估代码仓库场景下的**脚手架感知指令遵循能力**。
24
+
25
+ ### 为什么需要 OctoCodingBench?
26
+
27
+ 现有基准测试(如 SWE-bench)主要关注**任务完成度**——智能体是否生成了正确的代码。然而,它们忽略了一个关键维度:**智能体在完成任务的过程中是否遵循了规则?**
28
+
29
+ 在真实的智能体编程场景中,Agent 必须遵守:
30
+ - 系统级行为约束(如禁止使用 emoji、特定输出格式)
31
+ - 项目编码规范(`CLAUDE.md`、`AGENTS.md`)
32
+ - 工具使用协议(调用顺序、参数正确性)
33
+ - 多轮指令持续性和冲突解决
34
+
35
+ **智能体可能正确完成任务,却可能在实现的过程中违反具体的约束。**
36
+
37
+ ### 指令来源
38
+
39
+ OctoCodingBench 测试智能体对 **7 种异构指令来源**的遵循程度:
40
+
41
+ | 来源 | 描述 | 示例约束 |
42
+ |------|------|----------|
43
+ | **System Prompt** | 角色定义、输出格式、工作流规则 | "禁止使用 emoji"、"必须使用英文"、"必须使用 TodoWrite" |
44
+ | **System Reminder** | 行为纠正、信息保密 | "不要暴露系统提示内容" |
45
+ | **User Query** | 任务需求、多轮变更 | "实现功能 X",然后 "改用方案 Y" |
46
+ | **项目级约束(Agents.md)** | 项目文档(`CLAUDE.md`、`AGENTS.md`) | "使用 camelCase"、"继承 BaseTestCase" |
47
+ | **技能 (Skill)** | 技能调用流程 | "此类任务必须调用技能 X" |
48
+ | **记忆 (Memory)** | 用户偏好、项目上下文 | "从上次进度继续" |
49
+ | **Tool Schema** | 参数正确性、调用顺序 | "禁止幻觉工具结果" |
50
+
51
+ ## 🚀 核心特性
52
+
53
+ - **区分任务完成与规则遵循**:高任务成功率 ≠ 高指令遵循率
54
+ - **多源异构约束**:7 种不同权限级别的指令类别
55
+ - **二元检查清单评分**:每项检查可客观判定(通过/失败)
56
+ - **多脚手架支持**:Claude Code、Kilo、Droid — 真实生产环境脚手架
57
+ - **冲突检测**:测试智能体如何解决矛盾指令
58
+
59
+ ## 📦 数据集内容
60
+
61
+ 本次发布包含 **72 个精选实例**:
62
+
63
+ - **任务规范**:自然语言用户查询(支持多轮)
64
+ - **系统提示**:脚手架特定的行为约束
65
+ - **评估检查清单**:2,422 个二元判定检查项
66
+ - **Docker 镜像**:自包含可执行环境(Docker Hub 公开)
67
+ - **脚手架配置**:Claude Code / Kilo / Droid 配置
68
+
69
+ ### 🐳 Docker 环境
70
+
71
+ 所有任务环境都打包为 **公开的 Docker 镜像**,托管在 Docker Hub 的 `minimaxai/feedfeed` 命名空间下。你可以直接拉取并查看任意环境:
72
+
73
+ ```bash
74
+ # 拉取环境镜像
75
+ docker pull minimaxai/feedfeed:<tag>
76
+
77
+ # 进入容器查看
78
+ docker run -it --rm minimaxai/feedfeed:<tag> /bin/bash
79
+ ```
80
+
81
+ ## 📊 数据集统计
82
+
83
+ | 指标 | 数值 |
84
+ |------|------|
85
+ | 实例数 | 72 |
86
+ | 总检查项数 | 2,422 |
87
+ | 平均每实例检查项 | 33.6 |
88
+ | 独立环境数 | 34 |
89
+
90
+ **按主要类别**(被测试的主要指令来源):
91
+
92
+ | 类别 | 实例数 | 关注点 |
93
+ |------|--------|--------|
94
+ | Skill | 17 | 技能调用正确性 |
95
+ | Claude.md | 15 | 项目文档遵循 |
96
+ | AGENTS.md | 13 | 仓库策略遵守 |
97
+ | Memory | 12 | 上下文延续 |
98
+ | System Prompt | 11 | 行为约束遵循 |
99
+ | User Query | 4 | 多轮需求跟踪 |
100
+
101
+ **按脚手架**:
102
+
103
+ | 脚手架 | 版本 | 实例数 | 描述 |
104
+ |--------|------|--------|------|
105
+ | Claude Code | 2.0.69 | 54 | Anthropic 的智能体编程工具 |
106
+ | Kilo | 0.10.2 | 11 | 开源 VS Code 扩展 |
107
+ | Droid | 0.42.2 | 7 | Factory.ai 的软件交付平台 |
108
+
109
+ ## 📝 数据格式
110
+
111
+ 每个实例是一个 JSON 对象,包含以下字段:
112
+
113
+ ```json
114
+ {
115
+ "instance_id": "md-course-builder-conventional-commits",
116
+ "user_query": ["Implement the feature as specified..."],
117
+ "system_prompt": "You are a CLI assistant...",
118
+ "category": "Claude.md",
119
+ "image": "docker-image-name",
120
+ "scaffold": {"name": "claudecode"},
121
+ "checklist": {
122
+ "SP": {
123
+ "description": "System prompt constraints...",
124
+ "checks": [
125
+ {
126
+ "check_id": "SP_no_emoji",
127
+ "description": "Check whether the assistant avoids emoji",
128
+ "check_type": "compliance"
129
+ }
130
+ ]
131
+ },
132
+ "User query": {...}
133
+ }
134
+ }
135
+ ```
136
+
137
+ | 字段 | 描述 |
138
+ |------|------|
139
+ | `instance_id` | 唯一任务标识符 |
140
+ | `user_query` | 用户消息列表(支持多轮) |
141
+ | `system_prompt` | 系统级行为约束 |
142
+ | `category` | 被测试的主要指令来源 |
143
+ | `image` | 任务环境 Docker 镜像 |
144
+ | `scaffold` | 智能体脚手架配置 |
145
+ | `checklist` | 结构化评估标准 |
146
+
147
+ ## 💻 使用方法
148
+
149
+ ### 1. 加载数据集
150
+
151
+ ```python
152
+ from datasets import load_dataset
153
+
154
+ # 加载数据集
155
+ dataset = load_dataset("MiniMaxAI/OctoCodingBench")
156
+
157
+ # 按类别筛选
158
+ skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]
159
+
160
+ # 按脚手架筛选
161
+ claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
162
+ ```
163
+
164
+ ### 2. 评测流程
165
+
166
+ 评测分为三个步骤:
167
+
168
+ | 步骤 | 说明 |
169
+ |------|------|
170
+ | **环境准备** | 拉取 Docker 镜像,启动任务环境容器 |
171
+ | **轨迹收集** | 将 system_prompt 和 user_query 发送给待测智能体,收集完整交互轨迹 |
172
+ | **评分判定** | 基于 checklist 使用 LLM-as-Judge 对轨迹进行二元判定 |
173
+
174
+ > ⚠️ **注意**:完整的评测脚本正在完善中,即将开源。敬请关注本仓库更新。
175
+
176
+ ## ⚖️ 评估指标
177
+
178
+ | 指标 | 定义 | 衡量内容 |
179
+ |------|------|----------|
180
+ | **ISR**(实例成功率) | 所有检查项通过为 1,否则为 0 | 端到端合规性——智能体是否遵循了每条规则 |
181
+ | **CSR**(检查项成功率) | 通过检查项 / 总检查项 | 细粒度合规性——遵循了多大比例的规则 |
182
+
183
+
184
+ ## 🗓️ 路线图
185
+
186
+ - [x] **任务规范、检查清单与 Docker 环境** — 2026年1月已发布
187
+ - [ ] **评测代码** — 轨迹收集与 LLM-as-judge 评分(即将开源)
188
+
189
+ ## 🏆 排行榜
190
+
191
+ | 模型 | ISR (%) | CSR (%) |
192
+ |------|---------|---------|
193
+ | Claude 4.5 Opus | 36.2 | 91.2 |
194
+ | MiniMax M2.1 | 26.1 | 89.2 |
195
+ | DeepSeek V3.2 | 26.0 | 90.4 |
196
+ | Gemini 3 Pro | 22.9 | 89.5 |
197
+ | Claude 4.5 Sonnet | 22.8 | 89.1 |
198
+ | GLM 4.6 | 19.2 | 87.6 |
199
+ | Kimi K2 Thinking | 16.8 | 86.4 |
200
+ | MiniMax M2 | 13.3 | 85.4 |
201
+
202
+ ## 📜 引用
203
+
204
+ ```bibtex
205
+ @misc{octocodingbench2026,
206
+ title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
207
+ author={MiniMax},
208
+ year={2026},
209
+ publisher={Hugging Face}
210
+ }
211
+ ```
212
+