danningx committed · Commit 76e1f1e · verified · 1 Parent(s): 69b8c03

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -20,7 +20,6 @@ Each example is a JSON object with the following fields:
   "src": 30,
   "dst": 33,
   "groundtruth": true,
-  "groundtruth_trace": [[30, 31, 33]],
   "task_id": "control_codenet_p00496_s700056700_main_12_40_k_33_1",
   "prompt": "..."
   "category": trace/all_source
@@ -49,7 +48,6 @@ The `category` field specifies the type of prompt associated with each task instance
 | `language` | Programming language of the source file (`C`, `Java`, `Python`). |
 | `src`, `dst` | Defines the two program elements queried in this task. In control dependency, these are line numbers. In data dependency and information flow, they are structured as `["varname", line_no]`, representing variable instances. |
 | `groundtruth` | Boolean indicating whether the specified dependency relationship holds (i.e., true if `src` has the given dependency on `dst`). |
-| `groundtruth_trace` | The ground truth trace (as a list of line numbers) that represents a valid transitive chain of direct dependencies. This is only used for **control dependency** tasks. For **data dependency** and **information flow**, this is set to `null` since there can be theoretically infinite number of valid traces. Evaluation is instead conducted by validating the predicted trace against the code’s static dependency structure.|
 | `task_id` | A unique ID for the task instance. The prefix (`control_`, `data_`, `infoflow_`) identifies the task type. |
 | `prompt` | The prompt string used in the experiment for this task instance. It includes the instruction, examples, query, and code context provided to the LLM. Content-specific fields (e.g., source/target names, line numbers) are filled into a standardized prompt template. |
 
@@ -73,7 +71,7 @@ For scripts, evaluation tools, and detailed instructions on running inference ov
 
 🔗 Paper: [https://arxiv.org/abs/2507.05269](https://arxiv.org/abs/2507.05269)
 
-This includes:
+The github repo includes:
 
 - Raw annotation data that could be used to generate various static analysis tasks
 - Predefined prompts for each task and language
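For reference, a task record with the documented fields can be loaded and inspected as in this minimal sketch. The inline JSON string is constructed here purely for illustration (real records come from the dataset files, whose exact on-disk layout is not specified by this diff):

```python
import json

# A sample record mirroring the fields documented in the README table.
# (Illustrative only; values copied from the README's own example.)
record = json.loads("""
{
  "src": 30,
  "dst": 33,
  "groundtruth": true,
  "task_id": "control_codenet_p00496_s700056700_main_12_40_k_33_1",
  "prompt": "..."
}
""")

# Per the README, the task type is encoded as the task_id prefix:
# one of "control", "data", or "infoflow".
task_type = record["task_id"].split("_", 1)[0]
print(task_type)               # control
print(record["groundtruth"])   # True
```

Note that after this commit, `groundtruth_trace` is no longer part of the record, so consumers should key off `groundtruth` and `task_id` alone.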