---
license: cc-by-4.0
task_categories:
  - tabular-classification
tags:
  - security
  - cybersecurity
  - malware
  - malware-detection
  - windows
  - windows-pe
  - tabular
  - classification
  - binary-classification
  - evaluation
pretty_name: Traceix Mini Eval (Windows PE)
size_categories:
  - n<1K
---

# Traceix Mini Evaluation Dataset (Windows PE)

Traceix is a malware analysis platform that uses a neural network named AURA to classify files as safe or malicious. You can use Traceix at https://traceix.com.

This repository contains a mini evaluation dataset so that anyone can peer-review AURA's file-level classifications and recompute the basic metrics (accuracy, precision, recall, FPR, FNR) reported on the Traceix model-quality page.

Each row includes the following fields:

- `sha256` – SHA-256 digest of the file
- `true_label` – ground-truth class (`safe` or `malicious`)
- `predicted_label` – AURA's predicted class (`safe` or `malicious`)
- `is_correct` – whether `predicted_label` matches `true_label`
- `model_version` – version of the AURA model that produced the prediction
- `split` – dataset split
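
For illustration, a single row has this shape. All values below are hypothetical, including the digest and the version string:

```python
# Hypothetical example row; the hash, labels, and version are illustrative only.
example_row = {
    "sha256": "0" * 64,              # placeholder for a 64-hex-char SHA-256 digest
    "true_label": "malicious",       # ground-truth class: "safe" or "malicious"
    "predicted_label": "malicious",  # AURA's predicted class
    "is_correct": True,              # predicted_label == true_label
    "model_version": "aura-v0.1",    # hypothetical version string
    "split": "train",
}

print(example_row["predicted_label"] == example_row["true_label"])
```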

You can recompute the metrics yourself:

```python
from datasets import load_dataset
from sklearn.metrics import confusion_matrix

ds = load_dataset("perkinsfund/aura-windows-pe-eval-v01", split="train")

# Map the string labels to integers ("malicious" is the positive class).
label_to_int = {"safe": 0, "malicious": 1}
y_true = [label_to_int[x] for x in ds["true_label"]]
y_pred = [label_to_int[x] for x in ds["predicted_label"]]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
total = tn + fp + fn + tp

# Guard each ratio against a zero denominator.
accuracy = (tn + tp) / total
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
fpr = fp / (fp + tn) if (fp + tn) else 0.0
fnr = fn / (fn + tp) if (fn + tp) else 0.0

print("TN, FP, FN, TP:", tn, fp, fn, tp)
print("Accuracy:        {:.4f}".format(accuracy))
print("Precision (mal): {:.4f}".format(precision))
print("Recall (mal):    {:.4f}".format(recall))
print("FPR:             {:.4f}".format(fpr))
print("FNR:             {:.4f}".format(fnr))
```
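
As a sanity check, accuracy can also be recomputed from the `is_correct` flag alone and compared against the confusion-matrix value. A minimal sketch on hypothetical rows (the hashes and outcomes below are made up, not taken from the dataset):

```python
# Hypothetical rows mimicking the dataset schema (hashes and labels are fake).
rows = [
    {"sha256": "a" * 64, "true_label": "malicious", "predicted_label": "malicious", "is_correct": True},
    {"sha256": "b" * 64, "true_label": "safe",      "predicted_label": "safe",      "is_correct": True},
    {"sha256": "c" * 64, "true_label": "safe",      "predicted_label": "malicious", "is_correct": False},
]

# Consistency check: is_correct should equal label agreement on every row.
for r in rows:
    assert r["is_correct"] == (r["true_label"] == r["predicted_label"])

# Accuracy from is_correct alone; this should match (tn + tp) / total above.
accuracy = sum(r["is_correct"] for r in rows) / len(rows)
print("Accuracy from is_correct: {:.4f}".format(accuracy))
```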