---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- agent
- benchmark
- evaluation
pretty_name: OctoCodingBench
size_categories:
- n<1K
---

# OctoCodingBench: Instruction-Following Benchmark for Coding Agents

[English](README.md) | [中文](README_CN.md)

## 🌟 Overview

**OctoCodingBench** is a comprehensive benchmark for evaluating how well AI coding agents follow instructions from multiple sources. Unlike existing benchmarks that focus solely on task completion, OctoCodingBench systematically tests whether agents respect constraints from:

- **System Prompts (SP)** — Role definitions, output formats, workflow rules
- **System Reminders** — Behavior correction, tool usage reminders, information confidentiality
- **User Queries** — Task requirements, multi-turn instruction changes
- **Project Documentation (Agents.md)** — Coding conventions from `CLAUDE.md`, `AGENTS.md`
- **Skills** — Skill invocation workflows and protocols
- **Memory** — User preferences and project context continuation
- **Tool Schema** — Parameter correctness, call sequence, no hallucinated results

## 🚀 Key Features

- **Multi-Source Instruction Evaluation**: Tests agent compliance across 7 distinct instruction categories
- **Checklist-Based Scoring**: Each instance includes a structured checklist with binary-decidable checks
- **Real-World Scenarios**: Tasks derived from actual development workflows
- **Multi-Scaffold Support**: Evaluated across Claude Code, Kilo, and Droid environments

## 📦 Dataset Contents

This release contains **72 curated instances** with:

- Natural language task specifications
- System prompts with behavioral constraints
- Structured evaluation checklists (2,422 total check items)
- Category and scaffold metadata

## 📊 Dataset Statistics

| Category | Instances |
|----------|-----------|
| Skill | 17 |
| Claude.md | 15 |
| AGENTS.md | 13 |
| Memory | 12 |
| System Prompt | 11 |
| User Query | 4 |
| **Total** | **72** |

| Scaffold | Instances |
|----------|-----------|
| Claude Code | 54 |
| Kilo | 11 |
| Droid | 7 |

| Metric | Value |
|--------|-------|
| Total check items | 2,422 |
| Avg checks per instance | 33.6 |

## 📝 Data Format

Each instance is a JSON object with the following fields:

```json
{
  "instance_id": "md-course-builder-conventional-commits",
  "user_query": ["Implement the feature as specified..."],
  "system_prompt": "You are a CLI assistant...",
  "category": "Claude.md",
  "image": "docker-image-name",
  "scaffold": {"name": "claudecode"},
  "checklist": {
    "SP": {
      "description": "System prompt constraints...",
      "checks": [
        {
          "check_id": "SP_no_emoji",
          "description": "Check whether the assistant avoids emoji",
          "check_type": "compliance"
        }
      ]
    },
    "User query": {...}
  }
}
```

| Field | Description |
|-------|-------------|
| `instance_id` | Unique task identifier |
| `user_query` | List of user messages (supports multi-turn) |
| `system_prompt` | System-level behavioral constraints |
| `category` | Primary instruction source being tested |
| `image` | Docker image for the task environment |
| `scaffold` | Agent scaffold configuration |
| `checklist` | Structured evaluation criteria |

## 💻 Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("MiniMaxAI/OctoCodingBench")

# Filter by category
skill_tasks = [d for d in dataset["train"] if d["category"] == "Skill"]

# Filter by scaffold
claudecode_tasks = [d for d in dataset["train"] if d["scaffold"]["name"] == "claudecode"]
```

## ⚖️ Evaluation Metrics

- **ISR (Instance Success Rate)**: 1 if all checks pass, 0 otherwise
- **CSR (Checklist Success Rate)**: Proportion of passed checks
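The release ships checklists but no judge, so per-check verdicts must come from your own evaluation harness. Below is a minimal sketch of how the two metrics aggregate, assuming the checklist is loaded as plain Python dicts matching the format above and that a hypothetical `verdicts` dict maps each `check_id` to a pass/fail boolean produced by your judge:

```python
def iter_checks(checklist):
    """Flatten the nested checklist (source -> checks) into one stream of check items."""
    for source in checklist.values():  # e.g. "SP", "User query", ...
        for check in source["checks"]:
            yield check

def score_instance(instance, verdicts):
    """Compute ISR and CSR for a single instance.

    `verdicts` is a hypothetical mapping from check_id to bool,
    produced by whatever judge you run over the agent trajectory.
    """
    results = [verdicts[c["check_id"]] for c in iter_checks(instance["checklist"])]
    csr = sum(results) / len(results)  # CSR: proportion of passed checks
    isr = 1 if all(results) else 0     # ISR: all-or-nothing instance success
    return isr, csr

# Example with dummy verdicts that pass every check:
# verdicts = {c["check_id"]: True for c in iter_checks(instance["checklist"])}
# isr, csr = score_instance(instance, verdicts)
```

Benchmark-level numbers would then be the mean of each metric across all 72 instances.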
## 📜 Citation

```bibtex
@misc{octocodingbench2026,
  title={OctoCodingBench: Instruction-Following Benchmark for Coding Agents},
  author={MiniMax},
  year={2026},
  publisher={Hugging Face}
}
```