---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- agent
- benchmark
- evaluation
pretty_name: OctoCodingBench
size_categories:
- n<1K
---
# OctoCodingBench: Instruction-Following Benchmark for Coding Agents
[English](README.md) | [中文](README_CN.md)
## 🌟 Overview
**OctoCodingBench** is a comprehensive benchmark for evaluating how well AI coding agents follow instructions from multiple sources. Unlike existing benchmarks that focus solely on task completion, OctoCodingBench systematically tests whether agents respect constraints from:
- **System Prompts (SP)** — Role definitions, output formats, workflow rules
- **System Reminders** — Behavior correction, tool usage reminders, information confidentiality
- **User Queries** — Task requirements, multi-turn instruction changes
- **Project Documentation (Agents.md)** — Coding conventions from `CLAUDE.md`, `AGENTS.md`
- **Skills** — Skill invocation workflows and protocols
- **Memory** — User preferences and project context continuation
- **Tool Schema** — Parameter correctness, call sequence, no hallucinated results
## 🚀 Key Features
- **Multi-Source Instruction Evaluation**: Tests agent compliance across 7 distinct instruction categories
- **Checklist-Based Scoring**: Each instance includes a structured checklist with binary-decidable checks
- **Real-World Scenarios**: Tasks derived from actual development workflows
- **Multi-Scaffold Support**: Evaluated across Claude Code, Kilo, and Droid environments
## 📦 Dataset Contents
This release contains **72 curated instances** with:
- Natural language task specifications
- System prompts with behavioral constraints
- Structured evaluation checklists (2,422 total check items)
- Category and scaffold metadata
## 📊 Dataset Statistics
**Instances by category**

| Category | Instances |
|----------|-----------|
| Skill | 17 |
| Claude.md | 15 |
| AGENTS.md | 13 |
| Memory | 12 |
| System Prompt | 11 |
| User Query | 4 |
| **Total** | **72** |

**Instances by scaffold**

| Scaffold | Instances |
|----------|-----------|
| Claude Code | 54 |
| Kilo | 11 |
| Droid | 7 |

**Checklist statistics**

| Metric | Value |
|--------|-------|
| Total check items | 2,422 |
| Avg. checks per instance | 33.6 |
## 📝 Data Format
Each instance is a JSON object with the following fields:
```json
{
  "instance_id": "md-course-builder-conventional-commits",
  "user_query": ["Implement the feature as specified..."],
  "system_prompt": "You are a CLI assistant...",
  "category": "Claude.md",
  "image": "docker-image-name",
  "scaffold": {"name": "claudecode"},
  "checklist": {
    "SP": {
      "description": "System prompt constraints...",
      "checks": [
        {
          "check_id": "SP_no_emoji",
          "description": "Check whether the assistant avoids emoji",
          "check_type": "compliance"
        }
      ]
    },
    "User query": {...}
  }
}
```
| Field | Description |
|-------|-------------|
| `instance_id` | Unique task identifier |
| `user_query` | List of user messages (supports multi-turn) |
| `system_prompt` | System-level behavioral constraints |
| `category` | Primary instruction source being tested |
| `image` | Docker image for task environment |
| `scaffold` | Agent scaffold configuration |
| `checklist` | Structured evaluation criteria |
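Because `checklist` nests check items under one section per instruction source, a common first step is flattening it to iterate individual checks. A minimal sketch, assuming the schema shown above (the `iter_checks` helper is illustrative, not part of the dataset):

```python
def iter_checks(checklist):
    """Yield every check item across all instruction-source sections."""
    for section in checklist.values():
        for check in section.get("checks", []):
            yield check

# Example checklist shaped like the data-format sample above
checklist = {
    "SP": {
        "description": "System prompt constraints...",
        "checks": [
            {
                "check_id": "SP_no_emoji",
                "description": "Check whether the assistant avoids emoji",
                "check_type": "compliance",
            }
        ],
    },
}

print(sum(1 for _ in iter_checks(checklist)))  # 1
```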
## 💻 Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("MiniMaxAI/OctoCodingBench")

# Filter by category
skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]

# Filter by scaffold
claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
```
## ⚖️ Evaluation Metrics
- **ISR (Instance Success Rate)**: an instance scores 1 only if every check in its checklist passes, 0 otherwise
- **CSR (Checklist Success Rate)**: the proportion of individual checks that pass
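Given one instance's check outcomes as a list of booleans, the two metrics reduce to a short computation. A sketch for clarity (the `isr`/`csr` helpers are illustrative, not an official scorer):

```python
def isr(results):
    """Instance-level score: 1 only if every check passed, else 0."""
    return 1 if all(results) else 0

def csr(results):
    """Fraction of checks that passed."""
    return sum(results) / len(results) if results else 0.0

# Example: an instance with 4 checks, 3 of which passed
results = [True, True, False, True]
print(isr(results))  # 0 (one failed check fails the whole instance)
print(csr(results))  # 0.75
```

ISR is the stricter metric: a single failed check zeroes the instance, while CSR still credits partial compliance.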
## 📜 Citation
```bibtex
@misc{octocodingbench2026,
  title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
  author={MiniMax},
  year={2026},
  publisher={Hugging Face}
}
```