Dataset for the paper **"Beyond Accuracy: Characterizing Code Comprehension Capabilities in (Large) Language Models"** by Machtle, Serr, Loose & Eisenbarth (University of Luebeck).

[[Paper]](https://arxiv.org/abs/2601.12951) | [[Code]](https://github.com/UzL-ITS/code-comprehension-capabilities-llms/)

## Task
| | |
|---|---|
| **Samples** | 12,584 |
| **Columns** | 249 |
| **Source** | Python subset of [Project CodeNet](https://github.com/IBM/Project_CodeNet) |
| **I/O generation** | Type-aware fuzzing with hill-climbing type inference |
Per-model results from the binary I/O consistency evaluation. Each model has 3 columns:

| Column | Type | Description |
|---|---|---|
| `llm_{model}_num_correct` | int | Number of test cases answered correctly (out of `num_total`) |
| `llm_{model}_num_total` | int | Total test cases for this sample (typically 2: one correct, one incorrect) |

**Models:**

| Column prefix | Model | Samples evaluated |
|---|---|---|
| `llm_gpt_oss_120b_` | GPT-OSS 120B | 12,509 |
| `llm_llama_3_3_70b_` | Llama 3.3 70B Instruct | 12,517 |
| `llm_mistral_small_24b_` | Mistral Small 24B Instruct | 12,301 |
| `llm_phi4_` | Phi-4 | 12,596 |
| `llm_codellama_13b_` | CodeLlama 13B Instruct | 3,470 |
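To illustrate how the per-model columns relate, here is a small pure-Python sketch on hypothetical rows. The rows and the reading that `success` means every test case was answered correctly are our assumptions for illustration, not guarantees from the card; real values come from the dataset itself.

```python
# Hypothetical rows using Phi-4's per-model columns (illustration only).
rows = [
    {"llm_phi4_success": True,  "llm_phi4_num_correct": 2, "llm_phi4_num_total": 2},
    {"llm_phi4_success": False, "llm_phi4_num_correct": 1, "llm_phi4_num_total": 2},
]

# Per-sample success rate vs. per-test-case accuracy: assuming a sample
# counts as a success only when every test case is correct, the
# per-case accuracy can exceed the per-sample success rate.
success_rate = sum(r["llm_phi4_success"] for r in rows) / len(rows)
case_accuracy = (
    sum(r["llm_phi4_num_correct"] for r in rows)
    / sum(r["llm_phi4_num_total"] for r in rows)
)
print(success_rate, case_accuracy)  # → 0.5 0.75
```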
### Code Metric Columns (224)

All prefixed with `metric_`. Values are floats (or null if unavailable).
**Opcode Statistics (39 columns)** — Python bytecode features: `num_opcodes`, `sum_opcodes`, `avg_opcode_count`, `min_opcode_count`, `max_opcode_count`, individual opcode counts (`opcode_1`, `opcode_83`, ...), `opcodes_used0`–`opcodes_used3`, and `top_0_opcode_name` through `top_19_opcode_name`.
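The dataset's exact extraction code lives in the linked repository; as a rough sketch of where such features come from, opcode counts can be derived from compiled bytecode with Python's standard `dis` module. The helper below and its summary values are our illustration, not the dataset's pipeline.

```python
import dis
from collections import Counter

def opcode_counts(source: str) -> Counter:
    """Count opcode occurrences in a snippet's bytecode, including
    nested code objects (function bodies, comprehensions, lambdas)."""
    counts = Counter()
    stack = [compile(source, "<sample>", "exec")]
    while stack:
        code = stack.pop()
        for instr in dis.get_instructions(code):
            counts[instr.opname] += 1
        # Nested code objects live in co_consts
        stack.extend(c for c in code.co_consts if hasattr(c, "co_code"))
    return counts

counts = opcode_counts("def f(x):\n    return x + 1\n")
# Roughly: num_opcodes ~ distinct opcodes, sum_opcodes ~ total instructions
print(len(counts), sum(counts.values()), counts.most_common(1))
```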

## Key Results from the Paper

### LLM Performance on the I/O Consistency Task

| Model | Accuracy | F1 |
|---|---|---|
| GPT-OSS 120B | 0.960 | 0.959 |
| Mistral Small 24B Instruct | 0.744 | 0.685 |
| Llama 3.3 70B Instruct | 0.738 | 0.662 |
| Phi-4 | 0.733 | 0.674 |
| CodeLlama 13B Instruct | 0.506 | 0.062 |

### Do Human Metrics Predict LLM Success?

A classifier trained on the 224 code metrics achieves a mean **AUROC of 0.63** across models, a weak correlation. A *shadow model* (fine-tuned UniXcoder on raw code + I/O) reaches **AUROC 0.86**, confirming LLMs use patterns not captured by traditional software engineering metrics.
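AUROC here can be read as the probability that the classifier ranks a randomly chosen positive sample above a randomly chosen negative one, so 0.5 is chance and 1.0 is a perfect ranking. A tiny rank-based sketch with made-up scores:

```python
def auroc(pos_scores, neg_scores):
    """Rank-based AUROC: probability that a random positive sample
    outscores a random negative one (ties count half)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up classifier scores, purely for illustration
print(auroc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]))
```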

## Example Usage

### Filter samples where GPT-OSS succeeded but Phi-4 failed

```python
ds_filtered = ds.filter(
    lambda x: x["llm_gpt_oss_120b_success"] == True
    and x["llm_phi4_success"] == False
)
print(f"{len(ds_filtered)} samples where GPT-OSS succeeded but Phi-4 failed")
```

### Analyze metrics vs. model success

```python
import pandas as pd

df = ds.to_pandas()
metric_cols = [c for c in df.columns if c.startswith("metric_") and df[c].dtype == "float64"]

# Compare mean complexity for GPT-OSS successes vs failures
success = df[df["llm_gpt_oss_120b_success"] == True][metric_cols].mean()
failure = df[df["llm_gpt_oss_120b_success"] == False][metric_cols].mean()

diff = (failure - success).sort_values(ascending=False)
print("Metrics most elevated in GPT-OSS failures:")
print(diff.head(10))
```

## Data Generation Pipeline

```
Python files (CodeNet)
    ...
This dataset
```

See the [GitHub repository](https://github.com/UzL-ITS/code-comprehension-capabilities-llms/) for the full pipeline code.

## Citation