---
license: apache-2.0
task_categories:
  - question-answering
  - text-generation
language:
  - en
  - code
tags:
  - code
  - call-graph
  - reasoning
  - benchmark
  - software-engineering
  - agentic
  - python
  - typescript
size_categories:
  - n<1K
---

# GraphCode-Bench-500-v0

GraphCode-Bench is a benchmark for evaluating LLMs on call-graph reasoning — given a function in a real-world repository, can a model identify which functions call it (upstream) or which functions it calls (downstream), across 1 and 2 hops?

Models are evaluated agentically: they receive read-only filesystem tools (`list_directory`, `read_file`, `search_in_file`) and up to 10 turns to explore the codebase before producing an answer.
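The tool surface is easy to emulate for local experimentation. Below is a minimal read-only harness; the card names the three tools but not their exact schemas, so the signatures and return shapes here are illustrative assumptions:

```python
from pathlib import Path

# Minimal read-only harness mirroring the three tools named above.
# Signatures and return shapes are assumptions, not the benchmark's
# actual tool schema.

def list_directory(root: str, rel: str = ".") -> list[str]:
    """Return entries under root/rel, with directories suffixed by '/'."""
    base = Path(root) / rel
    return sorted(
        p.name + "/" if p.is_dir() else p.name for p in base.iterdir()
    )

def read_file(root: str, rel: str) -> str:
    """Return the full text of a file inside the repository."""
    return (Path(root) / rel).read_text(encoding="utf-8")

def search_in_file(root: str, rel: str, needle: str) -> list[tuple[int, str]]:
    """Return (1-based line number, line) pairs whose line contains needle."""
    lines = read_file(root, rel).splitlines()
    return [(i, ln) for i, ln in enumerate(lines, start=1) if needle in ln]
```

An agent loop would expose these as tool calls and cap the interaction at 10 turns, as described above.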

## Dataset summary

| Split | Records | Repos | Languages |
|---|---|---|---|
| train (bench500) | 483 | 22 | Python, TypeScript |

Stratification: 5 repos × 2 question types (upstream/downstream) × 2 hop depths (1-hop/2-hop).

## Task definition

Each record contains:

- `anchor`: a named function in a real open-source repository
- `question_type`: `upstream` (who calls this?) or `downstream` (what does this call?)
- `hop_depth`: `1` (direct callers/callees) or `2` (one level further)
- `gold`: the ground-truth set of function names at each hop level (extracted via LSP)

Models must enumerate the correct function names. Scoring uses set F1 against the gold answer.
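Set F1 follows directly from the two name sets. A minimal sketch (the card does not specify whether names are normalized before matching, so none is applied here):

```python
def set_f1(predicted: set[str], gold: set[str]) -> float:
    """Harmonic mean of precision and recall over name sets."""
    if not predicted and not gold:
        return 1.0  # both empty: vacuously perfect
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)  # true positives: names in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Against the schema example's hop-1 gold set:
score = set_f1({"mount", "request", "close"}, {"mount", "request"})  # → 0.8
```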

## Record schema

```json
{
  "sample_id": "psf__requests__send__upstream__1hop_abc123",
  "repo": "psf/requests",
  "question_type": "upstream",
  "hop_depth": 1,
  "gold": {
    "hop_1": ["mount", "request"],
    "hop_1_files": ["requests/sessions.py"]
  },
  "metadata": {
    "anchor": "send",
    "anchor_file": "requests/adapters.py",
    "anchor_source": "def send(self, request, ...):",
    "result_size": 4,
    "created_at": "2026-03-20T16:58:18.104721+00:00",
    "file_content": "..."
  }
}
```
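The `sample_id` appears to encode repo, anchor, question type, and hop depth separated by double underscores. A parser sketch based solely on the example above (the naming convention is inferred, not documented):

```python
def parse_sample_id(sample_id: str) -> dict:
    """Split an id like 'psf__requests__send__upstream__1hop_abc123'.

    The five '__'-separated fields are assumed to be: org, repo name,
    anchor function, question type, and '<depth>hop_<uid>'. This layout
    is inferred from the single schema example, not a documented spec.
    """
    org, repo, anchor, qtype, tail = sample_id.split("__")
    hop, _, uid = tail.partition("_")
    return {
        "repo": f"{org}/{repo}",
        "anchor": anchor,
        "question_type": qtype,
        "hop_depth": int(hop.removesuffix("hop")),
        "uid": uid,
    }
```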

## Repositories included

**Python (250 samples):** psf/requests, pallets/flask, pallets/click, scrapy/scrapy, celery/celery, encode/httpx, pytest-dev/pytest, psf/black, PyCQA/flake8, rq/rq, paramiko/paramiko

**TypeScript (233 samples):** sindresorhus/got, colinhacks/zod, trpc/trpc, immerjs/immer, node-fetch/node-fetch

## Pipeline

Ground truth is extracted by:

1. Running `basedpyright` / `typescript-language-server` over each repo via LSP
2. Walking call edges from the anchor to the requested depth
3. Applying 15 quality filters (no builtins, no generics, minimum result size, etc.)
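Step 2 reduces to a bounded breadth-first walk over call edges. A sketch over a plain adjacency map (the LSP querying of step 1 and the filters of step 3 are elided):

```python
def walk_calls(
    edges: dict[str, set[str]], anchor: str, depth: int
) -> dict[int, set[str]]:
    """Collect function names first reached at each hop level from anchor.

    `edges` maps a function to its direct callees (downstream) or direct
    callers (upstream); both question types reduce to the same walk.
    """
    seen = {anchor}
    frontier = {anchor}
    hops: dict[int, set[str]] = {}
    for level in range(1, depth + 1):
        reached = set()
        for fn in frontier:
            # only names not seen at a shallower level count for this hop
            reached |= edges.get(fn, set()) - seen
        hops[level] = reached
        seen |= reached
        frontier = reached
    return hops
```

With `edges = {"send": {"mount", "request"}, "mount": {"init"}}`, a 2-hop downstream walk from `send` yields `{"mount", "request"}` at hop 1 and `{"init"}` at hop 2, matching the shape of the `gold` field.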

See the companion paper for full pipeline details.

## Evaluation results (v0)

| Model | F1 | EM | Pass@0.5 | Avg Turns |
|---|---|---|---|---|
| GPT-5.4-nano (API)† | 0.364 | 0.170 | 0.400 | 6.19 |
| Qwen3-Coder-30B-A3B | 0.351 | 0.126 | 0.369 | 7.29 |
| GPT-OSS-20B | 0.313 | 0.116 | 0.362 | 7.72 |
| Mistral-Small-24B | 0.199 | 0.066 | 0.211 | 5.05 |

† Closed model, shown for reference. Open-weight models were evaluated via vLLM on an HPC cluster.

**Key finding:** 2-hop questions are 3–4× harder than 1-hop (Qwen3: F1 = 0.546 at 1-hop vs 0.151 at 2-hop).

## Citation

```bibtex
@misc{graphcodebench2026,
  title   = {GraphCode-Bench: Evaluating LLMs on Agentic Call-Graph Reasoning},
  author  = {Rossi, Vittorio},
  year    = {2026},
  url     = {https://huggingface.co/datasets/VittorioRossi/GraphCode-Bench-500-v0}
}
```

## License

Apache 2.0. The source code snippets included in the `anchor_source` and `file_content` fields are derived from their respective open-source repositories under their original licenses.