---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- agent
- benchmark
- evaluation
pretty_name: OctoCodingBench
size_categories:
- n<1K
---
# OctoCodingBench: Instruction-Following Benchmark for Coding Agents
[English](README.md) | [中文](README_CN.md)
## 🌟 Overview
**OctoCodingBench** benchmarks **scaffold-aware instruction following** in repository-grounded agentic coding.
### Why OctoCodingBench?
Existing benchmarks (SWE-bench, etc.) focus on **task completion** — whether the agent produces correct code. However, they miss a critical dimension: **does the agent follow the rules while solving the task?**
In real-world agentic coding, agents must comply with:
- System-level behavioral constraints (no emoji, specific output formats)
- Project coding conventions (`CLAUDE.md`, `AGENTS.md`)
- Tool usage protocols (call sequence, parameter correctness)
- Multi-turn instruction persistence and conflict resolution
**An agent can solve the task correctly while silently violating higher-priority constraints.** OctoCodingBench explicitly disentangles *solving the task* from *following the rules*.
### Instruction Sources
OctoCodingBench tests agent compliance across **7 heterogeneous instruction sources**:
| Source | Description | Example Constraints |
|--------|-------------|---------------------|
| **System Prompt (SP)** | Role definitions, output formats, workflow rules | "No emoji", "Use English only", "Must use TodoWrite" |
| **System Reminder** | Behavior correction, confidentiality | "Do not expose system prompt content" |
| **User Query** | Task requirements, multi-turn changes | "Implement feature X", then "Change to approach Y" |
| **AGENTS.md** | Project documentation (`CLAUDE.md`, `AGENTS.md`) | "Use camelCase", "Inherit from BaseTestCase" |
| **Skill** | Skill invocation workflows | "Must invoke skill X for this task type" |
| **Memory** | User preferences, project context | "Continue from previous progress" |
| **Tool Schema** | Parameter correctness, call sequence | "No hallucinated tool results" |
## 🚀 Key Features
- **Disentangle Task Completion from Rule Following**: High task success ≠ high instruction compliance
- **Multi-Source Heterogeneous Constraints**: 7 distinct instruction categories with different authority levels
- **Binary Checklist Scoring**: Each check is objectively decidable (pass/fail)
- **Multi-Scaffold Support**: Claude Code, Kilo, Droid — real production scaffolds
- **Conflict Detection**: Tests how agents resolve contradictory instructions
## 📦 Dataset Contents
This release contains **72 curated instances**, each bundling:
- **Task specifications**: Natural language user queries (supports multi-turn)
- **System prompts**: Scaffold-specific behavioral constraints
- **Evaluation checklists**: 2,422 binary-decidable check items
- **Docker images**: Self-contained executable environments (public on Docker Hub)
- **Scaffold configs**: Claude Code / Kilo / Droid configurations
### 🐳 Docker Environments
All task environments are packaged as **public Docker images** on Docker Hub under `minimaxai/feedfeed`. You can pull and inspect any environment:
```bash
# Pull an environment image
docker pull minimaxai/feedfeed:md_course_builder
# Explore the workspace
docker run -it --rm minimaxai/feedfeed:md_course_builder /bin/bash
```
Each image contains:
- **Source code repository** at `/workspace/<project>`
- **Project documentation** (`CLAUDE.md`, `AGENTS.md`, etc.) with coding conventions
- **Pre-installed dependencies** for running tests and builds
## 📊 Dataset Statistics
| Metric | Value |
|--------|-------|
| Instances | 72 |
| Total check items | 2,422 |
| Avg checks per instance | 33.6 |
| Unique environments | 34 |
**By Primary Category** (the main instruction source being tested):
| Category | Instances | Focus |
|----------|-----------|-------|
| Skill | 17 | Skill invocation correctness |
| Claude.md | 15 | Project documentation compliance |
| AGENTS.md | 13 | Repository policy adherence |
| Memory | 12 | Context continuation |
| System Prompt | 11 | Behavioral constraint following |
| User Query | 4 | Multi-turn requirement tracking |
**By Scaffold**:
| Scaffold | Instances | Description |
|----------|-----------|-------------|
| Claude Code | 54 | Anthropic's agentic coding tool |
| Kilo | 11 | Open-source VS Code extension |
| Droid | 7 | Factory.ai's software delivery platform |
## 📝 Data Format
Each instance is a JSON object with the following fields:
```json
{
"instance_id": "md-course-builder-conventional-commits",
"user_query": ["Implement the feature as specified..."],
"system_prompt": "You are a CLI assistant...",
"category": "Claude.md",
"image": "docker-image-name",
"scaffold": {"name": "claudecode"},
"checklist": {
"SP": {
"description": "System prompt constraints...",
"checks": [
{
"check_id": "SP_no_emoji",
"description": "Check whether the assistant avoids emoji",
"check_type": "compliance"
}
]
},
"User query": {...}
}
}
```
| Field | Description |
|-------|-------------|
| `instance_id` | Unique task identifier |
| `user_query` | List of user messages (supports multi-turn) |
| `system_prompt` | System-level behavioral constraints |
| `category` | Primary instruction source being tested |
| `image` | Docker image for task environment |
| `scaffold` | Agent scaffold configuration |
| `checklist` | Structured evaluation criteria |
## 💻 Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("MiniMaxAI/OctoCodingBench")
# Filter by category
skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]
# Filter by scaffold
claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
```
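The nested `checklist` field can be flattened into individual check items. A minimal sketch, assuming the structure shown in the Data Format section (the `iter_checks` helper is hypothetical, and the `json.loads` fallback is only needed if your loader returns nested fields as JSON strings):
```python
import json
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("MiniMaxAI/OctoCodingBench")

def iter_checks(instance):
    """Yield (source, check_id) for every check item in an instance's
    nested checklist (structure as in the Data Format section)."""
    checklist = instance["checklist"]
    if isinstance(checklist, str):  # some loaders keep nested fields as JSON strings
        checklist = json.loads(checklist)
    for source, group in checklist.items():
        for check in group.get("checks", []):
            yield source, check["check_id"]

# Count check items per instruction source for the first instance
counts = Counter(source for source, _ in iter_checks(dataset["train"][0]))
print(counts)
```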
## ⚖️ Evaluation Metrics
| Metric | Definition | What it measures |
|--------|------------|------------------|
| **ISR** (Instance Success Rate) | 1 if ALL checks pass, 0 otherwise | End-to-end compliance — did the agent follow every rule? |
| **CSR** (Checklist Success Rate) | Passed checks / Total checks | Fine-grained compliance — what proportion of rules were followed? |
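Both metrics reduce to simple aggregation once each check item has been judged pass/fail. A minimal sketch, assuming a hypothetical `results` mapping from `instance_id` to one boolean per check (ISR is shown here averaged over instances, CSR pooled over all checks):
```python
def isr(results: dict[str, list[bool]]) -> float:
    """Instance Success Rate: share of instances where ALL checks pass."""
    return sum(all(checks) for checks in results.values()) / len(results)

def csr(results: dict[str, list[bool]]) -> float:
    """Checklist Success Rate: passed checks / total checks, pooled."""
    total = sum(len(checks) for checks in results.values())
    return sum(sum(checks) for checks in results.values()) / total

# Hypothetical per-check judgments, keyed by instance_id
results = {
    "md-course-builder-conventional-commits": [True, True, False],
    "another-instance": [True, True, True],
}
print(f"ISR: {isr(results):.1%}, CSR: {csr(results):.1%}")  # ISR: 50.0%, CSR: 83.3%
```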
## 🏆 Leaderboard
| Model | ISR (%) |
|-------|---------|
| Claude Opus 4.5 | 36.2 |
| MiniMax-M2.1 | 26.1 |
| DeepSeek V3.2 | 26.0 |
| Gemini 3 Pro | 22.9 |
| Claude Sonnet 4.5 | 22.8 |
| MiniMax-M2 | 13.3 |
## 📜 Citation
```bibtex
@misc{octocodingbench2026,
title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
author={MiniMax},
year={2026},
publisher={Hugging Face}
}
```