---
license: cc-by-4.0
task_categories:
  - tabular-classification
  - graph-ml
tags:
  - intrusion-detection
  - CAN-bus
  - graph-neural-networks
  - knowledge-distillation
pretty_name: GraphIDS Paper Data
---

# GraphIDS — Paper Data

Evaluation artifacts for "Adaptive Fusion of Graph-Based Ensembles for Automotive Intrusion Detection".

Consumed by `kd-gat-paper` via `data/pull_data.py`.

## Schema Contract

If you change column names or file structure, `pull_data.py` will fail. The input schema is enforced in `pull_data.py:INPUT_SCHEMA`.
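The exact schema lives in `pull_data.py`; a minimal sketch of the kind of check it implies, with an illustrative (not authoritative) column set:

```python
# Sketch of a column-set contract in the spirit of pull_data.py:INPUT_SCHEMA.
# The real schema lives in that file; the columns below are illustrative only.
INPUT_SCHEMA = {
    "metrics.parquet": {"run_id", "model", "accuracy", "f1", "dataset"},
}

def validate_columns(filename, columns):
    """Raise if a file is missing columns the paper pipeline expects."""
    missing = INPUT_SCHEMA[filename] - set(columns)
    if missing:
        raise ValueError(f"{filename} is missing columns: {sorted(missing)}")

# Extra columns are fine; missing ones are not.
validate_columns("metrics.parquet",
                 ["run_id", "model", "accuracy", "f1", "dataset", "mcc"])
```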

### `metrics.parquet`

Per-model evaluation metrics across all runs.

| Column | Type | Description |
| --- | --- | --- |
| `run_id` | str | Run identifier, e.g. `hcrl_sa/eval_large_evaluation` |
| `model` | str | Model name: `gat`, `vgae`, `fusion` |
| `accuracy` | float | Classification accuracy |
| `precision` | float | Precision |
| `recall` | float | Recall (sensitivity) |
| `f1` | float | F1 score |
| `specificity` | float | Specificity (TNR) |
| `balanced_accuracy` | float | Balanced accuracy |
| `mcc` | float | Matthews correlation coefficient |
| `fpr` | float | False positive rate |
| `fnr` | float | False negative rate |
| `auc` | float | Area under the ROC curve |
| `n_samples` | float | Number of evaluation samples |
| `dataset` | str | Dataset name: `hcrl_sa`, `hcrl_ch`, `set_01` through `set_04` |
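A sketch of how rows from this file might be tabulated with pandas; the values below are invented, only the column shapes follow the schema above:

```python
import pandas as pd

# A few rows shaped like metrics.parquet (the numbers here are invented).
df = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 3,
    "model": ["gat", "vgae", "fusion"],
    "f1": [0.98, 0.91, 0.99],
    "mcc": [0.97, 0.89, 0.98],
    "dataset": ["hcrl_sa"] * 3,
})

# One row per model with metrics as columns -- the shape of a results table.
table = df.set_index("model")[["f1", "mcc"]]
best_model = table["f1"].idxmax()
```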

### `embeddings.parquet`

2D UMAP projections of graph embeddings per model.

| Column | Type | Description |
| --- | --- | --- |
| `run_id` | str | Run identifier |
| `model` | str | Model that produced the embedding: `gat`, `vgae` |
| `x` | float | UMAP dimension 1 |
| `y` | float | UMAP dimension 2 |
| `label` | int | Ground truth: 0 = normal, 1 = attack |
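One quick use of this layout is checking that the two classes separate in UMAP space, e.g. via per-class centroids. The rows below are invented:

```python
import pandas as pd

# Invented embedding rows matching the schema above.
emb = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "model": ["gat"] * 4,
    "x": [0.1, 0.2, 5.0, 5.2],
    "y": [0.0, 0.2, 3.0, 3.2],
    "label": [0, 0, 1, 1],
})

# Per-class centroids: a coarse check that normal and attack graphs separate.
centroids = emb.groupby("label")[["x", "y"]].mean()
```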

### `cka_similarity.parquet`

CKA similarity between teacher and student layers (KD runs only).

| Column | Type | Description |
| --- | --- | --- |
| `run_id` | str | Run identifier (only `*_kd` runs) |
| `dataset` | str | Dataset name |
| `teacher_layer` | str | Teacher layer name, e.g. `Teacher L1` |
| `student_layer` | str | Student layer name, e.g. `Student L1` |
| `similarity` | float | CKA similarity score (0–1) |
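The long format pivots directly into the teacher-by-student matrix a CKA heatmap needs. The similarity values below are invented:

```python
import pandas as pd

# Invented CKA rows for a 2x2 teacher/student layer grid.
cka = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_small_evaluation_kd"] * 4,
    "teacher_layer": ["Teacher L1", "Teacher L1", "Teacher L2", "Teacher L2"],
    "student_layer": ["Student L1", "Student L2", "Student L1", "Student L2"],
    "similarity": [0.95, 0.40, 0.35, 0.88],
})

# Pivot into a square matrix (teachers as rows, students as columns),
# ready for a heatmap.
matrix = cka.pivot(index="teacher_layer", columns="student_layer",
                   values="similarity")
```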

### `dqn_policy.parquet`

DQN fusion weight (`alpha`) per evaluated graph.

| Column | Type | Description |
| --- | --- | --- |
| `run_id` | str | Run identifier |
| `dataset` | str | Dataset name |
| `scale` | str | Model scale: `large`, `small` |
| `has_kd` | int | Whether KD was used: 0 or 1 |
| `action_idx` | int | Graph index within the evaluation set |
| `alpha` | float | Fusion weight (0 = full VGAE, 1 = full GAT) |

**Note:** Lacks per-graph `label` and `attack_type` columns. The paper figure needs these fields joined from evaluation results. This is a known gap — see the `pull_data.py` skip message.
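Even without labels, the file supports aggregate questions such as how often the policy leans on the GAT. A sketch with invented rows:

```python
import pandas as pd

# Invented policy rows matching the schema above.
policy = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 5,
    "dataset": ["hcrl_sa"] * 5,
    "scale": ["large"] * 5,
    "has_kd": [0] * 5,
    "action_idx": [0, 1, 2, 3, 4],
    "alpha": [0.9, 0.8, 0.2, 0.95, 0.7],
})

# Fraction of graphs where the DQN leaned on the GAT (alpha > 0.5).
gat_share = (policy["alpha"] > 0.5).mean()
```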

### `recon_errors.parquet`

VGAE reconstruction error per evaluated graph.

| Column | Type | Description |
| --- | --- | --- |
| `run_id` | str | Run identifier |
| `error` | float | Scalar reconstruction error |
| `label` | int | Ground truth: 0 = normal, 1 = attack |

**Note:** Single scalar error — no per-component decomposition (Node Recon, CAN ID, Neighbor, KL). The paper figure needs the component breakdown. This is a known gap.
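The scalar error still supports simple threshold-based detection: flag a graph as an attack when its error exceeds a cutoff fit on normal traffic. A sketch with invented values:

```python
import pandas as pd

# Invented errors: attacks reconstruct worse than normal traffic here.
errs = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "error": [0.01, 0.02, 0.50, 0.60],
    "label": [0, 0, 1, 1],
})

# Fit a naive threshold on normal graphs, then flag anything above it.
tau = errs.loc[errs["label"] == 0, "error"].max()
pred = (errs["error"] > tau).astype(int)
accuracy = (pred == errs["label"]).mean()
```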

### `attention_weights.parquet`

Mean GAT attention weights aggregated per head.

| Column | Type | Description |
| --- | --- | --- |
| `run_id` | str | Run identifier |
| `sample_idx` | int | Graph sample index |
| `label` | int | Ground truth: 0 = normal, 1 = attack |
| `layer` | int | GAT layer index |
| `head` | int | Attention head index |
| `mean_alpha` | float | Mean attention weight for this head |

**Note:** Aggregated per head, not per edge. The paper figure needs per-edge attention weights with node positions. This is a known gap.
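The per-head aggregate still allows a coarse comparison of attention mass between classes. A sketch with invented rows for one layer:

```python
import pandas as pd

# Invented per-head rows: one layer, one normal and one attack sample.
attn = pd.DataFrame({
    "run_id": ["hcrl_sa/eval_large_evaluation"] * 4,
    "sample_idx": [0, 0, 1, 1],
    "label": [0, 0, 1, 1],
    "layer": [0, 0, 0, 0],
    "head": [0, 1, 0, 1],
    "mean_alpha": [0.25, 0.30, 0.45, 0.50],
})

# Mean attention per head, split by class: rows are labels, columns are heads.
by_class = attn.groupby(["label", "head"])["mean_alpha"].mean().unstack("head")
```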

### `graph_samples.json`

Raw CAN bus graph instances with node/edge features.

Top-level keys: `schema_version`, `exported_at`, `data`, `feature_names`.

Each item in `data`:

- `dataset`: str — dataset name
- `label`: int — 0/1
- `attack_type`: int — attack type code
- `attack_type_name`: str — human-readable name
- `nodes`: list of `{id, features, node_y, node_attack_type, node_attack_type_name}`
- `links`: list of `{source, target, edge_features}`
- `num_nodes`, `num_edges`: int
- `id_entropy`, `stats`: additional metadata
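The node-link layout converts straightforwardly to an adjacency list after `json.load`. A minimal record with invented values, shaped like one item of `data`:

```python
# A minimal record shaped like one item of `data` (all values invented).
sample = {
    "dataset": "hcrl_sa",
    "label": 1,
    "attack_type": 2,
    "attack_type_name": "spoofing",
    "nodes": [{"id": 0, "features": [0.1], "node_y": 0},
              {"id": 1, "features": [0.2], "node_y": 1}],
    "links": [{"source": 0, "target": 1, "edge_features": [1.0]}],
    "num_nodes": 2,
    "num_edges": 1,
}

# Rebuild a directed adjacency list from the node-link layout.
adj = {node["id"]: [] for node in sample["nodes"]}
for edge in sample["links"]:
    adj[edge["source"]].append(edge["target"])
```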

### `metrics/*.json`

Per-configuration evaluation results: 18 files covering 6 datasets × 3 configs (`large`, `small`, `small_kd`).

Each file: `{schema_version, exported_at, data: [{model, scenario, metric_name, value}]}`

Currently contains only the `val` scenario — cross-dataset test scenarios are not yet exported.
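The record list indexes naturally as a `(model, scenario, metric_name) -> value` lookup. A sketch with an invented file body:

```python
import json

# One metrics file shaped as described above (all values invented).
doc = json.loads("""
{"schema_version": 1,
 "exported_at": "1970-01-01T00:00:00Z",
 "data": [
   {"model": "gat", "scenario": "val", "metric_name": "f1", "value": 0.98},
   {"model": "gat", "scenario": "val", "metric_name": "mcc", "value": 0.97}
 ]}
""")

# Index records as (model, scenario, metric_name) -> value for easy lookup.
metrics = {(r["model"], r["scenario"], r["metric_name"]): r["value"]
           for r in doc["data"]}
```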

## Other files

| File | Description |
| --- | --- |
| `leaderboard.json` | Cross-dataset model comparison (all metrics, all runs) |
| `model_sizes.json` | Parameter counts per model type and scale |
| `training_curves.parquet` | Loss/accuracy curves over training epochs |
| `graph_statistics.parquet` | Per-graph structural statistics |
| `datasets.json` | Dataset metadata |
| `runs.json` | Run metadata |
| `kd_transfer.json` | Knowledge distillation transfer metrics |

## Run ID Convention

Format: `{dataset}/{eval_config}`

- Datasets: `hcrl_sa`, `hcrl_ch`, `set_01` through `set_04`
- Configs: `eval_large_evaluation`, `eval_small_evaluation`, `eval_small_evaluation_kd`

The paper defaults to `hcrl_sa/eval_large_evaluation` for main results and `hcrl_sa/eval_small_evaluation_kd` for CKA.
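Splitting a run ID back into its parts is a one-liner; a small helper for clarity:

```python
def parse_run_id(run_id: str) -> tuple[str, str]:
    """Split '{dataset}/{eval_config}' into its two components."""
    dataset, config = run_id.split("/", 1)
    return dataset, config

parse_run_id("hcrl_sa/eval_small_evaluation_kd")
# -> ("hcrl_sa", "eval_small_evaluation_kd")
```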

## Known Gaps

These files need richer exports from the KD-GAT pipeline:

1. `dqn_policy.parquet` — needs per-graph `label` + `attack_type` columns
2. `recon_errors.parquet` — needs per-component error decomposition
3. `attention_weights.parquet` — needs per-edge weights + node positions
4. `metrics/*.json` — needs cross-dataset test scenario results

Until these are addressed, `pull_data.py` preserves existing committed files for the affected figures.