---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - code
  - agent
  - benchmark
  - evaluation
pretty_name: OctoCodingBench
size_categories:
  - n<1K
---

# OctoCodingBench: Instruction-Following Benchmark for Coding Agents

English | 中文

## 🌟 Overview

OctoCodingBench is a comprehensive benchmark for evaluating how well AI coding agents follow instructions from multiple sources. Unlike existing benchmarks that focus solely on task completion, OctoCodingBench systematically tests whether agents respect constraints from:

- **System Prompts (SP)**: Role definitions, output formats, workflow rules
- **System Reminders**: Behavior correction, tool-usage reminders, information confidentiality
- **User Queries**: Task requirements, multi-turn instruction changes
- **Project Documentation (Agents.md)**: Coding conventions from CLAUDE.md and AGENTS.md
- **Skills**: Skill invocation workflows and protocols
- **Memory**: User preferences and project context continuation
- **Tool Schema**: Parameter correctness, call sequence, no hallucinated results

## 🚀 Key Features

- **Multi-Source Instruction Evaluation**: Tests agent compliance across 7 distinct instruction categories
- **Checklist-Based Scoring**: Each instance includes a structured checklist with binary-decidable checks
- **Real-World Scenarios**: Tasks derived from actual development workflows
- **Multi-Scaffold Support**: Evaluated across the Claude Code, Kilo, and Droid environments

## 📦 Dataset Contents

This release contains 72 curated instances with:

- Natural-language task specifications
- System prompts with behavioral constraints
- Structured evaluation checklists (2,422 check items in total)
- Category and scaffold metadata

## 📊 Dataset Statistics

**Instances by category**

| Category | Instances |
|----------|-----------|
| Skill | 17 |
| Claude.md | 15 |
| AGENTS.md | 13 |
| Memory | 12 |
| System Prompt | 11 |
| User Query | 4 |
| **Total** | **72** |

**Instances by scaffold**

| Scaffold | Instances |
|----------|-----------|
| Claude Code | 54 |
| Kilo | 11 |
| Droid | 7 |

**Checklist size**

| Metric | Value |
|--------|-------|
| Total check items | 2,422 |
| Avg. checks per instance | 33.6 |

## 📝 Data Format

Each instance is a JSON object with the following fields:

```json
{
  "instance_id": "md-course-builder-conventional-commits",
  "user_query": ["Implement the feature as specified..."],
  "system_prompt": "You are a CLI assistant...",
  "category": "Claude.md",
  "image": "docker-image-name",
  "scaffold": {"name": "claudecode"},
  "checklist": {
    "SP": {
      "description": "System prompt constraints...",
      "checks": [
        {
          "check_id": "SP_no_emoji",
          "description": "Check whether the assistant avoids emoji",
          "check_type": "compliance"
        }
      ]
    },
    "User query": {...}
  }
}
```
| Field | Description |
|-------|-------------|
| `instance_id` | Unique task identifier |
| `user_query` | List of user messages (supports multi-turn) |
| `system_prompt` | System-level behavioral constraints |
| `category` | Primary instruction source being tested |
| `image` | Docker image for the task environment |
| `scaffold` | Agent scaffold configuration |
| `checklist` | Structured evaluation criteria |
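
For programmatic access, the nested `checklist` can be flattened into individual check items. The sketch below assumes only the instance shape shown above; the `flatten_checks` helper is illustrative, not part of the dataset tooling.

```python
# Illustrative helper (not shipped with the dataset): flatten an instance's
# nested checklist into (source, check_id, check_type) tuples.
def flatten_checks(instance: dict) -> list[tuple[str, str, str]]:
    flat = []
    for source, section in instance.get("checklist", {}).items():
        for check in section.get("checks", []):
            flat.append((source, check["check_id"], check["check_type"]))
    return flat

# e.g. flatten_checks(instance) ->
#   [("SP", "SP_no_emoji", "compliance"), ...]
```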

## 💻 Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("MiniMaxAI/OctoCodingBench")

# Filter by category
skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]

# Filter by scaffold
claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
```
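
As a cross-check, the per-category counts from the statistics table above can be reproduced directly from the split. This is a sketch assuming the `train` split used in the example above.

```python
from collections import Counter

# Tally instances per category; the result should match the
# "Instances by category" table above.
category_counts = Counter(d["category"] for d in dataset["train"])
print(category_counts)
```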

## ⚖️ Evaluation Metrics

- **ISR (Instance Success Rate)**: 1 if all checks in an instance pass, 0 otherwise (see the sketch below)
- **CSR (Checklist Success Rate)**: proportion of individual checks that pass
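
A minimal sketch of how ISR and CSR aggregate, assuming per-check pass/fail booleans keyed by instance ID; this input shape is illustrative, and the benchmark's own harness may differ.

```python
# Assumed input (not prescribed by the dataset): `results` maps each
# instance_id to a list of booleans, one per checklist item.
def instance_success_rate(results: dict[str, list[bool]]) -> float:
    # ISR: fraction of instances where every check passed.
    return sum(all(checks) for checks in results.values()) / len(results)

def checklist_success_rate(results: dict[str, list[bool]]) -> float:
    # CSR: fraction of individual checks that passed, pooled over instances.
    total = sum(len(checks) for checks in results.values())
    passed = sum(sum(checks) for checks in results.values())
    return passed / total

# Example:
# results = {"task-a": [True, True], "task-b": [True, False, True]}
# instance_success_rate(results)   -> 0.5
# checklist_success_rate(results)  -> 0.8
```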

## 📜 Citation

```bibtex
@misc{octocodingbench2026,
  title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
  author={MiniMax},
  year={2026},
  publisher={Hugging Face}
}
```