VittorioRossi committed on

Commit 709271e · verified · 1 Parent(s): 432b0de

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +111 -0
  2. bench100.jsonl +0 -0
  3. bench500_balanced.jsonl +0 -0

README.md ADDED

---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
- code
tags:
- code
- call-graph
- reasoning
- benchmark
- software-engineering
- agentic
- python
- typescript
size_categories:
- n<1K
---

# GraphCode-Bench-500-v0

**GraphCode-Bench** is a benchmark for evaluating LLMs on *call-graph reasoning*: given a function in a real-world repository, can a model identify which functions call it (upstream) or which functions it calls (downstream), at 1 and 2 hops?

Models are evaluated **agentically**: they receive read-only filesystem tools (`list_directory`, `read_file`, `search_in_file`) and up to 10 turns to explore the codebase before producing an answer.
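
The three read-only tools can be expressed as standard JSON function schemas, roughly as sketched below. Only the tool names come from this card; the parameter names and descriptions are assumptions, and the benchmark harness defines the exact signatures.

```python
# Hypothetical tool schemas for the agentic evaluation loop.
# Only the three tool names are taken from the dataset card;
# parameters and descriptions are illustrative assumptions.
TOOLS = [
    {
        "name": "list_directory",
        "description": "List files and subdirectories at a path inside the repo.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "read_file",
        "description": "Return the contents of a file in the repo.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "search_in_file",
        "description": "Search a file for a pattern and return matching lines.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "pattern": {"type": "string"},
            },
            "required": ["path", "pattern"],
        },
    },
]
```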

## Dataset summary

| Split | Records | Repos | Languages |
|-------|---------|-------|-----------|
| `train` (bench500) | 483 | 22 | Python, TypeScript |

**Stratification**: 5 repos × 2 question types (upstream/downstream) × 2 hop depths (1-hop/2-hop).

## Task definition

Each record contains:

- **anchor**: a named function in a real open-source repository
- **question_type**: `upstream` (who calls this function?) or `downstream` (what does this function call?)
- **hop_depth**: `1` (direct callers/callees) or `2` (one level further)
- **gold**: the ground-truth set of function names at each hop level (extracted via LSP)

Models must enumerate the correct function names. Scoring uses **set F1** against the gold answer.
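
Set F1 over predicted and gold function names can be sketched as below. This is a minimal reimplementation for illustration, not the benchmark's official scorer; the convention that two empty sets score 1.0 is an assumption.

```python
def set_f1(predicted, gold):
    """Compute set F1 between predicted and gold function names."""
    pred, gold = set(predicted), set(gold)
    if not pred and not gold:
        return 1.0  # assumption: both empty counts as perfect agreement
    tp = len(pred & gold)  # true positives: names in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# One spurious prediction alongside the two gold callers:
print(set_f1(["mount", "request", "resolve_redirects"],
             ["mount", "request"]))  # → 0.8
```

Exact match (EM) is then simply whether the two sets are identical.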

## Record schema

```json
{
  "sample_id": "psf__requests__send__upstream__1hop_abc123",
  "repo": "psf/requests",
  "question_type": "upstream",
  "hop_depth": 1,
  "gold": {
    "hop_1": ["mount", "request"],
    "hop_1_files": ["requests/sessions.py"]
  },
  "metadata": {
    "anchor": "send",
    "anchor_file": "requests/adapters.py",
    "anchor_source": "def send(self, request, ...):",
    "result_size": 4,
    "created_at": "2026-03-20T16:58:18.104721+00:00",
    "file_content": "..."
  }
}
```
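
Records can be read straight from the JSONL files with the standard library. The helper names below are illustrative (not part of any released tooling), and the field names are taken from the schema above.

```python
import json

def load_records(path):
    """Yield one benchmark record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def filter_records(records, repo=None, question_type=None, hop_depth=None):
    """Yield records matching the given (optional) stratification keys."""
    for rec in records:
        if repo is not None and rec["repo"] != repo:
            continue
        if question_type is not None and rec["question_type"] != question_type:
            continue
        if hop_depth is not None and rec["hop_depth"] != hop_depth:
            continue
        yield rec

# e.g. all 2-hop upstream questions:
# twohop_up = list(filter_records(load_records("bench500_balanced.jsonl"),
#                                 question_type="upstream", hop_depth=2))
```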

## Repositories included

**Python** (250 samples): `psf/requests`, `pallets/flask`, `pallets/click`, `scrapy/scrapy`, `celery/celery`, `encode/httpx`, `pytest-dev/pytest`, `psf/black`, `PyCQA/flake8`, `rq/rq`, `paramiko/paramiko`

**TypeScript** (233 samples): `sindresorhus/got`, `colinhacks/zod`, `trpc/trpc`, `immerjs/immer`, `node-fetch/node-fetch`

## Pipeline

Ground truth is extracted by:

1. Running [basedpyright](https://github.com/DetachHead/basedpyright) / typescript-language-server over each repo via LSP
2. Walking call edges from the anchor to the requested depth
3. Applying 15 quality filters (no builtins, no generics, minimum result size, etc.)
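
Step 2 amounts to a breadth-first walk over the call graph. A minimal sketch, assuming the LSP results have already been collapsed into an adjacency map (the `call_graph` structure here is illustrative, not the pipeline's actual representation):

```python
def walk_hops(call_graph, anchor, depth):
    """Return {hop_level: set of function names} reachable from `anchor`.

    `call_graph` maps a function name to its direct neighbours
    (callers for upstream questions, callees for downstream ones).
    """
    hops = {}
    frontier, seen = {anchor}, {anchor}
    for level in range(1, depth + 1):
        nxt = set()
        for fn in frontier:
            # Exclude already-seen names so each function lands
            # only at its shortest hop distance from the anchor.
            nxt |= set(call_graph.get(fn, ())) - seen
        seen |= nxt
        hops[level] = nxt
        frontier = nxt
    return hops
```

Keeping each function at its shortest hop distance matches the per-hop gold sets in the record schema (`hop_1`, and `hop_2` for 2-hop questions).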

See the companion paper for full pipeline details.

## Evaluation results (v0)

| Model | F1 | EM | Pass@0.5 | Avg Turns |
|-------|----|----|----------|-----------|
| GPT-5.4-nano (API)† | 0.364 | 0.170 | 0.400 | 6.19 |
| Qwen3-Coder-30B-A3B | 0.351 | 0.126 | 0.369 | 7.29 |
| GPT-OSS-20B | 0.313 | 0.116 | 0.362 | 7.72 |
| Mistral-Small-24B | 0.199 | 0.066 | 0.211 | 5.05 |

† Closed model, shown for reference. Open-weight models were evaluated via vLLM on an HPC cluster.

**Key finding**: 2-hop questions are 3–4× harder than 1-hop (Qwen3: F1 = 0.546 at 1-hop vs 0.151 at 2-hop).

## Citation

```bibtex
@misc{graphcodebench2026,
  title  = {GraphCode-Bench: Evaluating LLMs on Agentic Call-Graph Reasoning},
  author = {Rossi, Vittorio},
  year   = {2026},
  url    = {https://huggingface.co/datasets/VittorioRossi/GraphCode-Bench-500-v0}
}
```

## License

Apache 2.0. The source-code snippets included in the `anchor_source` and `file_content` fields are derived from their respective open-source repositories under their original licenses.

bench100.jsonl ADDED
bench500_balanced.jsonl ADDED