---
license: cc-by-4.0
task_categories:
- tabular-classification
- graph-ml
tags:
- intrusion-detection
- CAN-bus
- graph-neural-networks
- knowledge-distillation
pretty_name: GraphIDS Paper Data
---
# GraphIDS — Paper Data
Evaluation artifacts for "Adaptive Fusion of Graph-Based Ensembles for Automotive Intrusion Detection".
Consumed by [kd-gat-paper](https://github.com/frenken-lab/kd-gat-paper) via `data/pull_data.py`.
## Schema Contract
**If you change column names or file structure, `pull_data.py` will fail.**
The input schema is enforced in `pull_data.py:INPUT_SCHEMA`.
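A minimal sketch of the kind of column check `INPUT_SCHEMA` enforces (the column set below is abridged from the tables in this document; the authoritative schema lives in `pull_data.py`):

```python
import pandas as pd

# Abridged column set for metrics.parquet; see pull_data.py:INPUT_SCHEMA
# for the authoritative contract.
REQUIRED = {"run_id", "model", "f1", "dataset"}

def check_columns(df: pd.DataFrame, required: set) -> None:
    """Raise if any contracted column is missing from the frame."""
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"schema mismatch, missing columns: {sorted(missing)}")

# Synthetic row standing in for the real file:
df = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"],
    "model": ["gat"],
    "f1": [0.98],
    "dataset": ["hcrl_sa"],
})
check_columns(df, REQUIRED)  # passes; renaming a column would raise
```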
### metrics.parquet
Per-model evaluation metrics across all runs.
| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier, e.g. `hcrl_sa/eval_large_evaluation` |
| `model` | str | Model name: `gat`, `vgae`, `fusion` |
| `accuracy` | float | Classification accuracy |
| `precision` | float | Precision |
| `recall` | float | Recall (sensitivity) |
| `f1` | float | F1 score |
| `specificity` | float | Specificity (TNR) |
| `balanced_accuracy` | float | Balanced accuracy |
| `mcc` | float | Matthews correlation coefficient |
| `fpr` | float | False positive rate |
| `fnr` | float | False negative rate |
| `auc` | float | Area under ROC curve |
| `n_samples` | float | Number of evaluation samples |
| `dataset` | str | Dataset name: `hcrl_sa`, `hcrl_ch`, `set_01`–`set_04` |
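A sketch of how this table is typically consumed, comparing models within one run. With the real file you would start from `pd.read_parquet("metrics.parquet")`; the rows below are synthetic stand-ins:

```python
import pandas as pd

# Synthetic stand-in for metrics.parquet (values are illustrative):
metrics = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 3,
    "model": ["gat", "vgae", "fusion"],
    "f1": [0.97, 0.91, 0.98],
    "mcc": [0.95, 0.88, 0.96],
    "dataset": ["hcrl_sa"] * 3,
})

# Side-by-side comparison of the three models for this run:
by_model = metrics.set_index("model")[["f1", "mcc"]]
best = by_model["f1"].idxmax()
```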
### embeddings.parquet
2D UMAP projections of graph embeddings per model.
| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `model` | str | Model that produced the embedding: `gat`, `vgae` |
| `x` | float | UMAP dimension 1 |
| `y` | float | UMAP dimension 2 |
| `label` | int | Ground truth: 0 = normal, 1 = attack |
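A sketch of splitting the projection by model and class for plotting (synthetic rows stand in for the real file; the scatter call is only indicated in a comment):

```python
import pandas as pd

# Synthetic stand-in for embeddings.parquet:
emb = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "model": ["gat", "gat", "vgae", "vgae"],
    "x": [0.1, 2.3, -1.0, 0.4],
    "y": [1.5, -0.2, 0.8, 2.1],
    "label": [0, 1, 0, 1],
})

# One panel per model, one color per class:
for model, group in emb.groupby("model"):
    normal = group[group["label"] == 0]
    attack = group[group["label"] == 1]
    # e.g. plt.scatter(normal["x"], normal["y"]) and likewise for attack

counts = emb.groupby(["model", "label"]).size()
```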
### cka_similarity.parquet
CKA similarity between teacher and student layers (KD runs only).
| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier (only `*_kd` runs) |
| `dataset` | str | Dataset name |
| `teacher_layer` | str | Teacher layer name, e.g. `Teacher L1` |
| `student_layer` | str | Student layer name, e.g. `Student L1` |
| `similarity` | float | CKA similarity score (0–1) |
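A sketch of reshaping the long-form rows into the teacher × student matrix a CKA heatmap expects (similarity values below are illustrative):

```python
import pandas as pd

# Synthetic stand-in for cka_similarity.parquet:
cka = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_small_evaluation_kd"] * 4,
    "dataset": ["hcrl_sa"] * 4,
    "teacher_layer": ["Teacher L1", "Teacher L1", "Teacher L2", "Teacher L2"],
    "student_layer": ["Student L1", "Student L2", "Student L1", "Student L2"],
    "similarity": [0.92, 0.41, 0.37, 0.88],
})

# Long-form rows -> teacher x student similarity matrix:
matrix = cka.pivot(index="teacher_layer", columns="student_layer",
                   values="similarity")
```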
### dqn_policy.parquet
DQN fusion weight (alpha) per evaluated graph.
| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `dataset` | str | Dataset name |
| `scale` | str | Model scale: `large`, `small` |
| `has_kd` | int | Whether KD was used: 0 or 1 |
| `action_idx` | int | Graph index within the evaluation set |
| `alpha` | float | Fusion weight (0 = full VGAE, 1 = full GAT) |
**Note:** Lacks per-graph `label` and `attack_type`. The paper figure needs these fields joined from evaluation results. This is a known gap — see `pull_data.py` skip message.
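A hypothetical sketch of the join the paper figure needs once labels are exported. The `labels` frame below is invented for illustration; no such file exists in this repo yet:

```python
import pandas as pd

# Synthetic stand-in for dqn_policy.parquet (subset of columns):
policy = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "action_idx": [0, 1, 2, 3],
    "alpha": [0.1, 0.9, 0.8, 0.2],
})

# HYPOTHETICAL per-graph label export, invented for this sketch:
labels = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "action_idx": [0, 1, 2, 3],
    "label": [0, 1, 1, 0],
})

joined = policy.merge(labels, on=["run_id", "action_idx"])
mean_alpha = joined.groupby("label")["alpha"].mean()
# In this toy data, attack graphs lean toward the GAT (alpha near 1)
# and normal graphs toward the VGAE (alpha near 0).
```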
### recon_errors.parquet
VGAE reconstruction error per evaluated graph.
| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `error` | float | Scalar reconstruction error |
| `label` | int | Ground truth: 0 = normal, 1 = attack |
**Note:** Single scalar error — no per-component decomposition (Node Recon, CAN ID, Neighbor, KL). The paper figure needs the component breakdown. This is a known gap.
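Even without the component breakdown, the scalar error can be summarized per class. A sketch with synthetic rows standing in for the real file:

```python
import pandas as pd

# Synthetic stand-in for recon_errors.parquet:
errors = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "error": [0.02, 0.03, 0.30, 0.25],
    "label": [0, 0, 1, 1],
})

# Per-class summary of the scalar reconstruction error:
summary = errors.groupby("label")["error"].agg(["mean", "max"])
```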
### attention_weights.parquet
Mean GAT attention weights aggregated per head.
| Column | Type | Description |
|---|---|---|
| `run_id` | str | Run identifier |
| `sample_idx` | int | Graph sample index |
| `label` | int | Ground truth: 0 = normal, 1 = attack |
| `layer` | int | GAT layer index |
| `head` | int | Attention head index |
| `mean_alpha` | float | Mean attention weight for this head |
**Note:** Aggregated per-head, not per-edge. The paper figure needs per-edge attention weights with node positions. This is a known gap.
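A sketch of collapsing heads into a per-layer, per-class mean attention, which is about all the aggregated export supports (synthetic values):

```python
import pandas as pd

# Synthetic stand-in for attention_weights.parquet:
att = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "sample_idx": [0, 0, 0, 0],
    "label": [0, 0, 0, 0],
    "layer": [0, 0, 1, 1],
    "head": [0, 1, 0, 1],
    "mean_alpha": [0.2, 0.4, 0.5, 0.7],
})

# Average over heads within each (label, layer) group:
per_layer = att.groupby(["label", "layer"])["mean_alpha"].mean()
```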
### graph_samples.json
Raw CAN bus graph instances with node/edge features.
Top-level keys: `schema_version`, `exported_at`, `data`, `feature_names`.
Each item in `data`:
- `dataset`: str — dataset name
- `label`: int — 0/1
- `attack_type`: int — attack type code
- `attack_type_name`: str — human-readable name
- `nodes`: list of `{id, features, node_y, node_attack_type, node_attack_type_name}`
- `links`: list of `{source, target, edge_features}`
- `num_nodes`, `num_edges`: int
- `id_entropy`, `stats`: additional metadata
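A sketch of rebuilding one graph's adjacency from this node-link structure. The literal below mirrors the documented keys; all values (including `attack_type_name` and `schema_version`) are illustrative placeholders, and with the real file you would start from `json.load(open("graph_samples.json"))`:

```python
# Illustrative payload mirroring the documented keys (values are made up):
sample = {
    "schema_version": 1,          # placeholder
    "data": [{
        "dataset": "hcrl_sa",
        "label": 1,
        "attack_type": 2,
        "attack_type_name": "example_attack",  # placeholder name
        "nodes": [
            {"id": 0, "features": [0.1], "node_y": 0,
             "node_attack_type": 0, "node_attack_type_name": "normal"},
            {"id": 1, "features": [0.7], "node_y": 1,
             "node_attack_type": 2, "node_attack_type_name": "example_attack"},
        ],
        "links": [{"source": 0, "target": 1, "edge_features": [3.0]}],
        "num_nodes": 2,
        "num_edges": 1,
    }],
}

# Node-link lists -> adjacency dict keyed by node id:
graph = sample["data"][0]
adjacency = {n["id"]: [] for n in graph["nodes"]}
for link in graph["links"]:
    adjacency[link["source"]].append(link["target"])
```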
### metrics/*.json
Per-configuration evaluation results: 18 files covering 6 datasets × 3 configs (`large`, `small`, `small_kd`).
Each file: `{schema_version, exported_at, data: [{model, scenario, metric_name, value}]}`
These files currently contain only the `val` scenario; cross-dataset test scenarios are not yet exported.
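A sketch of flattening one such record list into a frame (the payload below mirrors the documented shape; `exported_at` and the metric values are placeholders):

```python
import pandas as pd

# Illustrative payload mirroring the documented file shape:
payload = {
    "schema_version": 1,                    # placeholder
    "exported_at": "1970-01-01T00:00:00Z",  # placeholder
    "data": [
        {"model": "gat", "scenario": "val", "metric_name": "f1", "value": 0.97},
        {"model": "vgae", "scenario": "val", "metric_name": "f1", "value": 0.91},
    ],
}

# The record list is already flat, so it loads directly:
rows = pd.DataFrame(payload["data"])
# With the real files, repeat per metrics/*.json and pd.concat the frames.
```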
### Other files
| File | Description |
|---|---|
| `leaderboard.json` | Cross-dataset model comparison (all metrics, all runs) |
| `model_sizes.json` | Parameter counts per model type and scale |
| `training_curves.parquet` | Loss/accuracy curves over training epochs |
| `graph_statistics.parquet` | Per-graph structural statistics |
| `datasets.json` | Dataset metadata |
| `runs.json` | Run metadata |
| `kd_transfer.json` | Knowledge distillation transfer metrics |
## Run ID Convention
Format: `{dataset}/{eval_config}`
- Datasets: `hcrl_sa`, `hcrl_ch`, `set_01` through `set_04`
- Configs: `eval_large_evaluation`, `eval_small_evaluation`, `eval_small_evaluation_kd`
The paper defaults to `hcrl_sa/eval_large_evaluation` for main results and `hcrl_sa/eval_small_evaluation_kd` for CKA.
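Splitting a run ID back into its parts is a single split on the first `/` (a small helper sketch; the function name is ours, not part of `pull_data.py`):

```python
def parse_run_id(run_id: str) -> tuple:
    """Split '{dataset}/{eval_config}' into its two parts."""
    dataset, config = run_id.split("/", 1)
    return dataset, config

parse_run_id("hcrl_sa/eval_small_evaluation_kd")
# -> ("hcrl_sa", "eval_small_evaluation_kd")
```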
## Known Gaps
These files need richer exports from the KD-GAT pipeline:
1. **dqn_policy.parquet** — needs per-graph `label` + `attack_type` columns
2. **recon_errors.parquet** — needs per-component error decomposition
3. **attention_weights.parquet** — needs per-edge weights + node positions
4. **metrics/*.json** — needs cross-dataset test scenario results
Until these are addressed, `pull_data.py` preserves existing committed files for the affected figures.