# CoRe: Benchmarking LLMs’ Code Reasoning Capabilities through Static Analysis Tasks
This repository hosts the **CoRe** benchmark, designed to evaluate the reasoning capabilities of large language models on **program analysis tasks**, including data dependency, control dependency, and information flow. Each task instance is represented as a structured JSON object with detailed metadata for evaluation and reproduction.
Each example is a JSON object with the following fields:
```json
{
  "label_file": "codenet_p00496_s700056700_main_12_40.yaml",
  "code_file": "codenet_p00496_s700056700_main_12_40.c",
  "pid": "p00496",
  "sid": "s700056700",
  "funname": "main",
  "start": 12,
  "end": 40,
  "dataset": "codenet",
  "language": "C",
  "src": 30,
  "dst": 33,
  "groundtruth": true,
  "groundtruth_trace": [[30, 31, 33]],
  "task_id": "control_codenet_p00496_s700056700_main_12_40_k_33_1",
  "prompt": "...",
  "category": "answer"
}
```
### 🏷 Category Field
The `category` field specifies the type of prompt associated with each task instance:
* **answer**: The prompt asks the model to provide a binary response (`yes`/`no`) indicating whether the specified dependency exists.
* **trace**: The prompt asks the model to produce a dependency trace when the answer is `yes` (i.e., the control or data dependency exists).
* **all_source**: The prompt asks the model to enumerate all source elements involved in the dependency.
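To illustrate how these categories shape evaluation, here is a minimal, hypothetical sketch of category-specific response parsing. The helper name and the assumed response formats are illustrative only; the official parsing scripts live in the companion GitHub repository.

```python
import re

def parse_response(category: str, response: str):
    """Parse a raw LLM response according to the task category.

    Hypothetical helper for illustration only; the response formats
    (yes/no prefix, "30 -> 31 -> 33" traces, comma-separated sources)
    are assumptions, not the benchmark's official conventions.
    """
    if category == "answer":
        # Binary classification: does the dependency exist?
        return bool(re.match(r"\s*yes\b", response, re.IGNORECASE))
    if category == "trace":
        # A dependency trace such as "30 -> 31 -> 33": extract line numbers.
        return [int(n) for n in re.findall(r"\d+", response)]
    if category == "all_source":
        # All source elements, assumed comma- or newline-separated.
        return [tok.strip() for tok in re.split(r"[,\n]", response) if tok.strip()]
    raise ValueError(f"unknown category: {category}")

print(parse_response("answer", "Yes, line 33 depends on line 30."))  # True
print(parse_response("trace", "The trace is 30 -> 31 -> 33."))       # [30, 31, 33]
```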
## 🧩 Field Descriptions
| Field | Description |
|------------------|-------------|
| `label_file` | Path to the YAML file containing ground-truth annotations for the task instance. |
| `code_file` | Path to the corresponding C/Java/Python source code file. |
| `pid` | Problem ID from the original source dataset (e.g., CodeNet or GCJ). |
| `sid` | Solution ID identifying the specific program implementation. |
| `funname` | Name of the target function in which the analysis is conducted. |
| `start`, `end` | Line numbers defining the start and end of the target function. |
| `dataset` | Original dataset source (`codenet` or `gcj`). |
| `language` | Programming language of the source file (`C`, `Java`, `Python`). |
| `src`, `dst` | The two program elements queried in this task. For control dependency, these are line numbers. For data dependency and information flow, they are structured as `["varname", line_no]`, representing variable instances. |
| `groundtruth` | Boolean indicating whether the specified dependency relationship holds (i.e., `true` if `src` has the given dependency on `dst`). |
| `groundtruth_trace` | The ground-truth trace (a list of line numbers) representing a valid transitive chain of direct dependencies. Used only for **control dependency** tasks. For **data dependency** and **information flow**, this is `null`, since there can theoretically be an infinite number of valid traces; evaluation instead validates the predicted trace against the code’s static dependency structure. |
| `task_id` | A unique ID for the task instance. The prefix (`control_`, `data_`, `infoflow_`) identifies the task type. |
| `prompt` | The prompt string used in the experiment for this task instance. It includes the instruction, examples, query, and code context provided to the LLM. Instance-specific values (e.g., source/target names, line numbers) are filled into a standardized prompt template. |
| `category` | Type of prompt for this instance: `answer`, `trace`, or `all_source` (see the Category Field section above). |
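The structural constraints on `groundtruth_trace` described above can be checked mechanically. The sketch below is a lightweight sanity check under the stated field semantics, not the official validator; full validation against the program's static dependency structure is done by the companion repository's evaluation scripts.

```python
import json

# Inline copy of the relevant fields from the example instance above.
record = json.loads(
    '{"src": 30, "dst": 33, "start": 12, "end": 40, '
    '"groundtruth": true, "groundtruth_trace": [[30, 31, 33]]}'
)

def sanity_check_trace(rec: dict) -> bool:
    """Check that each ground-truth trace starts at `src`, ends at `dst`,
    and stays inside the target function's line range [`start`, `end`].
    Data-dependency and information-flow tasks carry a null trace."""
    if rec["groundtruth_trace"] is None:
        return True
    return all(
        trace[0] == rec["src"]
        and trace[-1] == rec["dst"]
        and all(rec["start"] <= line <= rec["end"] for line in trace)
        for trace in rec["groundtruth_trace"]
    )

print(sanity_check_trace(record))  # True for the example instance
```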
## 📚 Task Types
The benchmark contains three types of program reasoning tasks:
- `control`: Control dependency between lines.
- `data`: Data dependency between variables.
- `infoflow`: Information flow (explicit or implicit) between variables.
Each instance is designed to assess whether an LLM can understand and reason over static semantics in real-world source code.
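Since the `task_id` prefix encodes the task type, instances can be grouped by type directly. A minimal sketch (the helper name is illustrative):

```python
def task_type(task_id: str) -> str:
    """Derive the task type from the `task_id` prefix
    (`control_`, `data_`, or `infoflow_`)."""
    for prefix in ("control", "data", "infoflow"):
        if task_id.startswith(prefix + "_"):
            return prefix
    raise ValueError(f"unrecognized task_id: {task_id}")

print(task_type("control_codenet_p00496_s700056700_main_12_40_k_33_1"))  # control
```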
## 🛠 Scripts and Usage
For scripts, evaluation tools, and detailed instructions on running inference over CoRe, please check out our companion GitHub repository:
🔗 [https://github.com/researchartifact1234/CoRe](https://github.com/researchartifact1234/CoRe)
This includes:
- Raw annotation data that can be used to generate various static analysis tasks
- Predefined prompts for each task and language
- Scripts for invoking models and parsing responses
- Evaluation scripts for dependency classification, trace generation, and dependency source enumeration
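For the dependency-classification setting, the natural metric is accuracy of predicted yes/no answers against the `groundtruth` field. The sketch below is a hypothetical illustration under that assumption, not the companion repository's actual evaluation code:

```python
def answer_accuracy(instances, predictions):
    """Fraction of `answer`-category instances whose predicted yes/no
    matches the `groundtruth` field.

    instances:   list of benchmark records (dicts as described above)
    predictions: mapping from task_id to a boolean prediction
    """
    correct = sum(
        1 for inst in instances
        if predictions.get(inst["task_id"]) == inst["groundtruth"]
    )
    return correct / len(instances)

insts = [
    {"task_id": "t1", "groundtruth": True},
    {"task_id": "t2", "groundtruth": False},
]
print(answer_accuracy(insts, {"t1": True, "t2": True}))  # 0.5
```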
### 📄 License
Apache License 2.0