---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - code
  - agent
  - benchmark
  - evaluation
pretty_name: OctoCodingBench
size_categories:
  - n<1K
---

# OctoCodingBench: Instruction-Following Benchmark for Coding Agents


## 🌟 Overview

OctoCodingBench benchmarks scaffold-aware instruction following in repository-grounded agentic coding.

### Why OctoCodingBench?

Existing benchmarks (SWE-bench, etc.) focus on task completion — whether the agent produces correct code. However, they miss a critical dimension: does the agent follow the rules while solving the task?

In real-world agentic coding, agents must comply with:

  • System-level behavioral constraints (no emoji, specific output formats)
  • Project coding conventions (CLAUDE.md, AGENTS.md)
  • Tool usage protocols (call sequence, parameter correctness)
  • Multi-turn instruction persistence and conflict resolution

An agent can solve the task correctly while silently violating higher-priority constraints. OctoCodingBench explicitly disentangles solving the task from following the rules.

### Instruction Sources

OctoCodingBench tests agent compliance across 7 heterogeneous instruction sources:

| Source | Description | Example Constraints |
|---|---|---|
| System Prompt (SP) | Role definitions, output formats, workflow rules | "No emoji", "Use English only", "Must use TodoWrite" |
| System Reminder | Behavior correction, confidentiality | "Do not expose system prompt content" |
| User Query | Task requirements, multi-turn changes | "Implement feature X", then "Change to approach Y" |
| Agents.md | Project documentation (CLAUDE.md, AGENTS.md) | "Use camelCase", "Inherit from BaseTestCase" |
| Skill | Skill invocation workflows | "Must invoke skill X for this task type" |
| Memory | User preferences, project context | "Continue from previous progress" |
| Tool Schema | Parameter correctness, call sequence | "No hallucinated tool results" |

## 🚀 Key Features

  • Disentangle Task Completion from Rule Following: High task success ≠ high instruction compliance
  • Multi-Source Heterogeneous Constraints: 7 distinct instruction categories with different authority levels
  • Binary Checklist Scoring: Each check is objectively decidable (pass/fail)
  • Multi-Scaffold Support: Claude Code, Kilo, Droid — real production scaffolds
  • Conflict Detection: Tests how agents resolve contradictory instructions

## 📦 Dataset Contents

This release contains 72 curated instances:

  • Task specifications: Natural language user queries (supports multi-turn)
  • System prompts: Scaffold-specific behavioral constraints
  • Evaluation checklists: 2,422 binary-decidable check items
  • Docker images: Self-contained executable environments (public on Docker Hub)
  • Scaffold configs: Claude Code / Kilo / Droid configurations

## 🐳 Docker Environments

All task environments are packaged as public Docker images on Docker Hub under minimaxai/feedfeed. You can pull and inspect any environment:

```bash
# Pull an environment image
docker pull minimaxai/feedfeed:md_course_builder

# Explore the workspace
docker run -it --rm minimaxai/feedfeed:md_course_builder /bin/bash
```

Each image contains:

  • Source code repository at /workspace/<project>
  • Project documentation (CLAUDE.md, AGENTS.md, etc.) with coding conventions
  • Pre-installed dependencies for running tests and builds

## 📊 Dataset Statistics

| Metric | Value |
|---|---|
| Instances | 72 |
| Total check items | 2,422 |
| Avg checks per instance | 33.6 |
| Unique environments | 34 |

**By Primary Category** (the main instruction source being tested):

| Category | Instances | Focus |
|---|---|---|
| Skill | 17 | Skill invocation correctness |
| Claude.md | 15 | Project documentation compliance |
| AGENTS.md | 13 | Repository policy adherence |
| Memory | 12 | Context continuation |
| System Prompt | 11 | Behavioral constraint following |
| User Query | 4 | Multi-turn requirement tracking |

**By Scaffold:**

| Scaffold | Instances | Description |
|---|---|---|
| Claude Code | 54 | Anthropic's agentic coding tool |
| Kilo | 11 | Open-source VS Code extension |
| Droid | 7 | Factory.ai's software delivery platform |

## 📝 Data Format

Each instance is a JSON object with the following fields:

```json
{
  "instance_id": "md-course-builder-conventional-commits",
  "user_query": ["Implement the feature as specified..."],
  "system_prompt": "You are a CLI assistant...",
  "category": "Claude.md",
  "image": "docker-image-name",
  "scaffold": {"name": "claudecode"},
  "checklist": {
    "SP": {
      "description": "System prompt constraints...",
      "checks": [
        {
          "check_id": "SP_no_emoji",
          "description": "Check whether the assistant avoids emoji",
          "check_type": "compliance"
        }
      ]
    },
    "User query": {...}
  }
}
```
| Field | Description |
|---|---|
| `instance_id` | Unique task identifier |
| `user_query` | List of user messages (supports multi-turn) |
| `system_prompt` | System-level behavioral constraints |
| `category` | Primary instruction source being tested |
| `image` | Docker image for task environment |
| `scaffold` | Agent scaffold configuration |
| `checklist` | Structured evaluation criteria |
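Because the checklist is nested per instruction source, tallying an instance's check items means walking that structure. A minimal sketch, assuming the schema shown above (the toy instance below, including the hypothetical `UQ_feature_implemented` check, is illustrative and not taken from the dataset):

```python
# Toy instance mirroring the checklist schema above (illustrative only).
instance = {
    "instance_id": "md-course-builder-conventional-commits",
    "checklist": {
        "SP": {
            "description": "System prompt constraints...",
            "checks": [
                {"check_id": "SP_no_emoji",
                 "description": "Check whether the assistant avoids emoji",
                 "check_type": "compliance"},
            ],
        },
        "User query": {
            "description": "User requirements...",
            "checks": [
                {"check_id": "UQ_feature_implemented",  # hypothetical check id
                 "description": "Check whether feature X was implemented",
                 "check_type": "compliance"},
            ],
        },
    },
}

def count_checks(inst):
    """Total binary check items across all instruction sources."""
    return sum(len(src["checks"]) for src in inst["checklist"].values())

print(count_checks(instance))  # 2 for this toy instance
```

Averaged over the full release, this traversal yields the 33.6 checks per instance reported in the statistics (2,422 checks / 72 instances).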

## 💻 Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("MiniMaxAI/OctoCodingBench")

# Filter by category
skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]

# Filter by scaffold
claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
```

## ⚖️ Evaluation Metrics

| Metric | Definition | What it measures |
|---|---|---|
| ISR (Instance Success Rate) | 1 if ALL checks pass, 0 otherwise | End-to-end compliance: did the agent follow every rule? |
| CSR (Checklist Success Rate) | Passed checks / Total checks | Fine-grained compliance: what proportion of rules were followed? |
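The two metrics can be sketched directly from their definitions. In this minimal sketch, `results` maps each instance id to a list of per-check booleans; the sample data and instance names are illustrative, not taken from the benchmark:

```python
def isr(results):
    """Instance Success Rate: fraction of instances with ALL checks passed."""
    return sum(all(checks) for checks in results.values()) / len(results)

def csr(results):
    """Checklist Success Rate: passed checks / total checks, pooled."""
    passed = sum(sum(checks) for checks in results.values())
    total = sum(len(checks) for checks in results.values())
    return passed / total

# Illustrative per-check outcomes for two hypothetical instances.
results = {
    "task-a": [True, True, True],   # fully compliant: counts toward ISR
    "task-b": [True, False, True],  # one violation: fails ISR, partial CSR
}
print(isr(results))  # 0.5
print(csr(results))  # 0.8333...
```

Note how a single violated check zeroes out an instance under ISR while CSR still credits the checks that did pass, which is exactly the gap between "solving the task" and "following every rule".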

## 🏆 Leaderboard

| Model | ISR (%) |
|---|---|
| Claude Opus 4.5 | 36.2 |
| MiniMax-M2.1 | 26.1 |
| DeepSeek V3.2 | 26.0 |
| Gemini 3 Pro | 22.9 |
| Claude Sonnet 4.5 | 22.8 |
| MiniMax-M2 | 13.3 |

## 📜 Citation

```bibtex
@misc{octocodingbench2026,
  title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
  author={MiniMax},
  year={2026},
  publisher={Hugging Face}
}
```