---
dataset_info:
  features:
    - name: instance_id
      dtype: string
    - name: repo
      dtype: string
    - name: base_commit
      dtype: string
    - name: problem_statement
      dtype: string
    - name: test_patch
      dtype: string
    - name: human_patch
      dtype: string
    - name: pr_number
      dtype: int64
    - name: pr_url
      dtype: string
    - name: pr_merged_at
      dtype: string
    - name: issue_number
      dtype: int64
    - name: issue_url
      dtype: string
    - name: human_changed_lines
      dtype: int64
    - name: FAIL_TO_PASS
      dtype: string
    - name: PASS_TO_PASS
      dtype: string
    - name: version
      dtype: string
  splits:
    - name: test
      num_examples: 119
license: mit
task_categories:
  - other
language:
  - en
tags:
  - code-generation
  - software-engineering
  - complexity
  - swe-bench
  - contamination-free
  - post-training-cutoff
pretty_name: SWE-bench Complex
size_categories:
  - n<1K
---

# SWE-bench Complex

A contamination-free, complexity-focused evaluation set for AI coding agents.

SWE-bench Complex is a curated dataset of 119 real-world GitHub issues from major Python open-source projects, designed specifically for studying code complexity in AI-generated patches. All tasks were merged between January–March 2026, guaranteeing they postdate the training cutoff of current frontier models.

## Why SWE-bench Complex?

Existing benchmarks like SWE-bench Verified suffer from two problems for complexity research:

### 1. Data Contamination

Over 94% of SWE-bench issues predate current LLM training cutoffs. Aleithan et al. found that 32.67% of successful patches involve "cheating" through solution leakage, and resolution rates dropped from 12.47% to 3.97% when leaked instances were filtered out (SWE-bench+, 2024).

All SWE-bench Complex instances postdate the training cutoffs of:

| Model | Provider | Training Cutoff | Gap |
|-------|----------|-----------------|-----|
| Claude Opus 4.6 | Anthropic | Oct 2025 | 3+ months |
| GPT-5.3-Codex | OpenAI | Sep 2025 | 4+ months |
| GPT-5.4 | OpenAI | Nov 2025 | 2+ months |
| Gemini 3.1 Pro | Google | Oct 2025 | 3+ months |

### 2. Trivial Patches

SWE-bench Verified has a median patch size of just 7 changed lines — 44.6% of tasks require only 1–5 lines. These trivial patches yield near-zero complexity deltas, reducing statistical power for quality studies.

SWE-bench Complex targets substantive patches with a median of 48 changed lines — 6.9× larger than SWE-bench Verified.

## Dataset Comparison

| Characteristic | SWE-bench Verified | SWE-bench Complex |
|----------------|--------------------|-------------------|
| Tasks | 500 | 119 |
| Repositories | 12 | 8 |
| Median changed lines | 7 | 48 |
| Mean changed lines | 14.3 | 74.9 |
| Mean Python files changed | 1.2 | 3.9 |
| Human ΔCC (mean) | +1.14 | +4.06 |
| Human ΔLLOC (mean) | +2.77 | +19.08 |
| Human ΔMI (mean) | −0.230 | −0.417 |
| Human ΔCogC (mean) | N/A | +3.63 |
| Post-training-cutoff | <6% | 100% |

Complexity metrics measured using Wily v2:

- **ΔCC**: Cyclomatic Complexity change (McCabe, 1976)
- **ΔLLOC**: Logical Lines of Code change
- **ΔMI**: Maintainability Index change (Oman & Hagemeister, 1992)
- **ΔCogC**: Cognitive Complexity change (Campbell, 2018)
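As a rough illustration of what ΔCC captures, the sketch below counts McCabe-style branch points with the standard library's `ast` module and compares a toy before/after pair. This is a simplified approximation for illustration only, not Wily's implementation.

```python
import ast

# Branch-introducing node types counted in this simplified McCabe-style metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

before = """
def f(x):
    return x + 1
"""

after = """
def f(x):
    if x > 0:
        return x + 1
    return x - 1
"""

# The patch adds one 'if', so the delta is +1.
delta_cc = cyclomatic_complexity(after) - cyclomatic_complexity(before)
print(delta_cc)  # 1
```

The dataset's reported deltas are computed per instance in the same before/after spirit: metrics on the base commit versus metrics after applying the patch.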

## Repository Distribution

| Repository | Instances |
|------------|-----------|
| django/django | 38 |
| astropy/astropy | 22 |
| pydata/xarray | 17 |
| scikit-learn/scikit-learn | 14 |
| pylint-dev/pylint | 10 |
| matplotlib/matplotlib | 9 |
| sympy/sympy | 8 |
| pallets/flask | 1 |

## Selection Criteria

Instances were collected from merged pull requests in the SWE-bench ecosystem repositories with the following filters:

1. **Date range**: Merged January 1 – March 10, 2026 (post-training-cutoff)
2. **Issue linkage**: PR explicitly references a GitHub issue via "fixes #N" or equivalent
3. **Test coverage**: PR includes both implementation and test changes to Python files
4. **Minimum complexity**: Implementation patch modifies ≥4 changed lines
5. **Python files**: Only .py file changes retained
6. **Manual review**: Each candidate reviewed for solvability — documentation-only changes, large-scale refactors (>300 lines or >10 files), and tasks requiring external domain knowledge were excluded

From 1,043 scraped PRs → 712 with issue references → 224 after automated filters → 119 after manual review.
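The automated portion of this funnel (steps 1–5) can be sketched as a predicate over candidate PR records. The record fields below are hypothetical illustrations, not the actual scraper's schema:

```python
from datetime import date

def automated_filter(pr: dict) -> bool:
    """Illustrative version of selection criteria 1-5 (field names hypothetical)."""
    # Criterion 5: only .py file changes are retained.
    py_files = [f for f in pr["changed_files"] if f.endswith(".py")]
    return (
        date(2026, 1, 1) <= pr["merged_date"] <= date(2026, 3, 10)  # 1: date range
        and pr["linked_issue"] is not None                          # 2: issue linkage
        and pr["test_py_lines"] > 0 and pr["impl_py_lines"] > 0     # 3: test coverage
        and pr["impl_py_lines"] >= 4                                # 4: minimum size
        and len(py_files) > 0                                       # 5: Python changes
    )

candidate = {
    "merged_date": date(2026, 2, 14),
    "linked_issue": 1234,
    "test_py_lines": 12,
    "impl_py_lines": 48,
    "changed_files": ["django/db/models/query.py", "tests/queries/tests.py"],
}
print(automated_filter(candidate))  # True
```

Step 6 (manual review for solvability) is, by design, not expressible as a filter function.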

## Schema

Each instance contains:

| Field | Type | Description |
|-------|------|-------------|
| `instance_id` | string | Unique identifier (`{owner}__{repo}-{pr_number}`) |
| `repo` | string | GitHub repository (`owner/repo`) |
| `base_commit` | string | Parent commit SHA |
| `problem_statement` | string | GitHub issue text (title + body) |
| `test_patch` | string | Unified diff of test-file changes |
| `human_patch` | string | Unified diff of implementation-file changes |
| `pr_number` | int | Pull request number |
| `pr_url` | string | Pull request URL |
| `pr_merged_at` | string | Merge timestamp (ISO 8601) |
| `issue_number` | int | Referenced issue number |
| `issue_url` | string | Issue URL |
| `human_changed_lines` | int | Total changed lines in the human patch |
| `FAIL_TO_PASS` | string | JSON array of test IDs that must go FAIL→PASS |
| `PASS_TO_PASS` | string | JSON array of test IDs that must remain PASS |
| `version` | string | Repository version identifier used by the SWE-bench harness |
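Note that `FAIL_TO_PASS` and `PASS_TO_PASS` are JSON-encoded arrays stored as strings, so they must be decoded before use. A minimal sketch, using a hypothetical instance:

```python
import json

# Hypothetical instance record; real values come from the dataset itself.
instance = {
    "instance_id": "django__django-99999",
    "FAIL_TO_PASS": '["tests.queries.test_q.QTests.test_combine"]',
    "PASS_TO_PASS": '["tests.queries.test_q.QTests.test_repr"]',
}

# Decode the JSON-encoded test ID lists.
fail_to_pass = json.loads(instance["FAIL_TO_PASS"])
pass_to_pass = json.loads(instance["PASS_TO_PASS"])
print(fail_to_pass[0])  # tests.queries.test_q.QTests.test_combine
```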

## SWE-bench Compatibility

SWE-bench Complex uses the same schema as SWE-bench Verified and can be evaluated using the standard SWE-bench harness:

```bash
python -m swebench.harness.run_evaluation \
    -d anthonypjshaw/SWE-bench_Complex \
    -s test \
    -p predictions.jsonl \
    -id my_run \
    --max_workers 4
```
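The harness reads model outputs from the predictions JSONL file. A minimal sketch of writing one, assuming the standard SWE-bench prediction keys (`instance_id`, `model_name_or_path`, `model_patch`) and a hypothetical instance ID and patch:

```python
import json

# One prediction per line; the instance ID and patch below are stubs.
predictions = [
    {
        "instance_id": "django__django-99999",  # hypothetical ID
        "model_name_or_path": "my-model",
        "model_patch": "diff --git a/f.py b/f.py\n...",
    }
]

with open("predictions.jsonl", "w") as fh:
    for pred in predictions:
        fh.write(json.dumps(pred) + "\n")
```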

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("anthonypjshaw/SWE-bench_Complex", split="test")
print(f"Tasks: {len(dataset)}")
print(f"Repos: {len(set(dataset['repo']))}")
```
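Summary statistics such as the median patch size can then be computed from the `human_changed_lines` column. The snippet below uses a hypothetical stand-in list so it runs without downloading the dataset:

```python
import statistics

# Stand-in for dataset["human_changed_lines"]; values are hypothetical.
changed_lines = [12, 48, 95, 7, 130, 48, 22]

print(statistics.median(changed_lines))  # 48
```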

## Citation

If you use SWE-bench Complex in your research, please cite:

```bibtex
@inproceedings{Shaw2026SWEbenchComplex,
  author    = {Shaw, Anthony},
  title     = {Beyond the Benchmark: A Contamination-Free Study of {AI} Code Complexity Across Four Frontier Models},
  booktitle = {Proceedings of the IEEE International Conference on Software Engineering (SSE)},
  year      = {2026},
}
```

## License

MIT License. The dataset contains references to publicly available open-source code under their respective licenses.