---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- LLM4code
- code_reasoning
- neurips25
size_categories:
- 10K<n<100K
---
# CoRe: Benchmarking LLMs’ Code Reasoning Capabilities through Static Analysis Tasks


This repository hosts the **CoRe** benchmark, designed to evaluate the reasoning capabilities of large language models on **program analysis tasks** including data dependency, control dependency, and information flow. Each task instance is represented as a structured JSON object with detailed metadata for evaluation and reproduction.

The benchmark currently contains 25K task instances (last updated: Sep. 24, 2025).

Each example is a JSON object with the following fields:

```json
{
  "label_file": "codenet_p00496_s700056700_main_12_40.yaml",
  "code_file": "codenet_p00496_s700056700_main_12_40.c",
  "pid": "p00496",
  "sid": "s700056700",
  "funname": "main",
  "start": 12,
  "end": 40,
  "dataset": "codenet",
  "language": "C",
  "src": 30,
  "dst": 33,
  "groundtruth": true,
  "task_id": "control_codenet_p00496_s700056700_main_12_40_k_33_1",
  "prompt": "..."
  "category": trace/all_source
}
```
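
To load the dataset programmatically, here is a minimal sketch using the 🤗 `datasets` library (the repository ID and split name are assumptions; substitute the values shown on this dataset page):

```python
from datasets import load_dataset

# Hypothetical repository ID and split name -- substitute the actual
# values from this dataset page.
ds = load_dataset("CoReBench/CoRe", split="train")

example = ds[0]
print(example["task_id"], example["language"], example["groundtruth"])
```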

### 🏷 Category Field

The `category` field specifies the type of prompt associated with each task instance:

* **trace**: The prompt asks the model to produce a dependency trace when the answer is `yes` (i.e., the queried control or data dependency exists).
* **all_source**: The prompt asks the model to enumerate all source elements involved in the dependency.
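
For example, to keep only the enumeration-style instances (a sketch; `ds` is the dataset loaded in the snippet above):

```python
# Keep only instances whose prompt asks for all dependency sources.
all_source_tasks = ds.filter(lambda ex: ex["category"] == "all_source")
print(len(all_source_tasks))
```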


## 🧩 Field Descriptions

| Field             | Description |
|------------------|-------------|
| `label_file`      | Path to the YAML file containing ground truth annotations for the current task instance. |
| `code_file`       | Path to the corresponding C/Java/Python source code file. |
| `pid`             | Problem ID from the original source dataset (e.g., CodeNet or GCJ). |
| `sid`             | Solution ID identifying the specific program implementation. |
| `funname`         | Name of the target function in which the analysis is conducted. |
| `start`, `end`    | Line numbers defining the start and end of the target function. |
| `dataset`         | Original dataset source (`codenet` or `gcj`). |
| `language`        | Programming language of the source file (`C`, `Java`, `Python`). |
| `src`, `dst`      | Defines the two program elements queried in this task. In control dependency, these are line numbers. In data dependency and information flow, they are structured as `["varname", line_no]`, representing variable instances. |
| `groundtruth`     | Boolean indicating whether the specified dependency relationship holds (i.e., true if `src` has the given dependency on `dst`). |
| `task_id`         | A unique ID for the task instance. The prefix (`control_`, `data_`, `infoflow_`) identifies the task type. |
| `prompt`          | The prompt string used in the experiment for this task instance. It includes the instruction, examples, query, and code context provided to the LLM. Content-specific fields (e.g., source/target names, line numbers) are filled into a standardized prompt template. |
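
Because `src`/`dst` are bare line numbers for control-dependency tasks but `["varname", line_no]` pairs for data-dependency and information-flow tasks, a small helper can normalize the two encodings (a sketch against the schema above):

```python
def parse_endpoint(task_id: str, endpoint):
    """Normalize a src/dst value to a (varname, line_no) pair.

    Control tasks store a bare line number (no variable); data and
    information-flow tasks store a ["varname", line_no] pair.
    """
    if task_id.startswith("control_"):
        return None, int(endpoint)
    varname, line_no = endpoint
    return varname, int(line_no)

# Using the control-dependency record shown above:
print(parse_endpoint("control_codenet_p00496_s700056700_main_12_40_k_33_1", 30))
# -> (None, 30)
```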

## 📚 Task Types

The benchmark contains three types of program reasoning tasks:

- `control`: Control dependency between lines.
- `data`: Data dependency between variables.
- `infoflow`: Information flow (explicit or implicit) between variables.

Each instance is designed to assess whether an LLM can understand and reason over static semantics in real-world source code.
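
The task type can also be recovered directly from the `task_id` prefix, e.g. to tally instances per task (a sketch; `ds` as loaded above):

```python
from collections import Counter

# The prefix before the first underscore names the task type:
# control / data / infoflow.
counts = Counter(ex["task_id"].split("_", 1)[0] for ex in ds)
print(counts)
```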

## 🛠 Scripts and Usage

For scripts, evaluation tools, and detailed instructions on running inference over CoRe, please check out our companion GitHub repository:

🔗 Website: [https://corebench.github.io/](https://corebench.github.io/)

🔗 Source code: [https://github.com/CoReBench/CoRe](https://github.com/CoReBench/CoRe)

🔗 Paper: [https://arxiv.org/abs/2507.05269](https://arxiv.org/abs/2507.05269)

The GitHub repository includes:

- Raw annotation data that can be used to generate various static analysis tasks
- Predefined prompts for each task and language
- Scripts for invoking models and parsing responses
- Evaluation scripts for dependency classification, trace generation, and dependency source enumeration


## 📄 License

Apache License 2.0