---
license: mit
language:
  - en
pretty_name: VITS-AD Evaluation Suite
size_categories:
  - 10K<n<100K
task_categories:
  - time-series-forecasting
tags:
  - anomaly-detection
  - time-series
  - evaluation
  - benchmark
  - frozen-vision
  - regime-analysis
  - negative-results
  - neurips-2026
---

# VITS-AD Evaluation Suite

Companion artifact for the NeurIPS 2026 Evaluations & Datasets (E&D) Track submission *VITS-AD: A Regime-Aware Evaluation Suite for Frozen-Vision Time-Series Anomaly Detection*.

This dataset is not a new corpus. It bundles the evaluation outputs produced by the VITS-AD pipeline and the raw-space Mahalanobis baseline so that future work can:

  1. Reproduce paper tables and statistical tests without re-running the full vision pipeline.
  2. Run paired comparisons (Wilcoxon, paired-99 UCR) directly on the per-window scores.
  3. Audit the regime classification (amplitude vs. structural) against the underlying evidence artifacts.

The submission is double-blind; this dataset card is anonymous. Source code is at https://github.com/evaldataset/VITS-AD (reviewer-routed via anonymous.4open.science).

## Contents

| Folder | Purpose | Size |
|---|---|---|
| `regime_labels/` | Per-dataset regime annotation + classifier features | <1 KB |
| `ledgers/` | JSON ledgers underlying main-paper claims | 64 KB |
| `ucr_canonical/` | UCR 109/99 aggregated and per-series metrics | 168 KB |
| `multiseed_scores/` | Per-window scores + labels for 5 seeds × {LP, RP} × {PSM, MSL, SMAP}; no model weights | 9.8 MB |
| `sample_renderings/` | Pipeline diagram and regime-gain figures | 1 MB |

Total: ~11 MB.

## What this is for (E&D Track scope)

The submission's contribution is benchmark analysis and evaluation methodology, not a new dataset. We therefore distribute:

- The regime axis (amplitude vs. structural) along which vision rendering does and does not pay off.
- The paired-99 UCR comparison that demonstrates Wilcoxon $p<10^{-7}$ in favour of the vision pipeline on the structural univariate regime.
- The per-seed scores that allow re-running paired Wilcoxon tests and bootstrap confidence intervals on the multiseed PSM/MSL/SMAP runs.
- The calibration and FPS ledgers that back the compute-disclosure and CalibGuard tables in the paper supplement.

We do not redistribute the raw benchmark datasets (SMD, PSM, MSL, SMAP, UCR Anomaly Archive). License and download paths for those upstream benchmarks are listed in the paper's Asset Credits table.

## Files

### `regime_labels/regime_labels.json`

Per-dataset regime label, channel count, raw vs. VITS-AD AUC-ROC, and the five classifier features. The accompanying notes record the in-sample classifier accuracy ($90.1\%$), the majority-class baseline ($88.1\%$), and the leave-one-dataset-out CV collapse to chance — i.e., the regime axis is descriptive, not a deployable predictor.
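To make the baseline comparison concrete, the majority-class accuracy is simply the frequency of the most common regime label. A minimal sketch, using illustrative label counts (not the actual annotation file) chosen only to echo the reported $88.1\%$ figure:

```python
from collections import Counter

# Illustrative regime labels only -- the real annotations live in
# regime_labels/regime_labels.json.
labels = ["amplitude"] * 52 + ["structural"] * 7

# Majority-class baseline: always predict the most frequent label.
counts = Counter(labels)
majority_acc = max(counts.values()) / len(labels)
print(f"majority-class baseline: {majority_acc:.3f}")
```

Any classifier must clear this bar out-of-sample to count as a deployable predictor, which is exactly what the leave-one-dataset-out CV fails to show.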

### `ledgers/`

| File | Backed claim |
|---|---|
| `improved_ensemble_results.json` | SMD 28-entity macro AUC-ROC for VITS-AD vs. raw Mahalanobis |
| `multiseed_results.json` | $n=5$ seed mean ± std for PSM/MSL/SMAP × {LP, RP} |
| `multiseed_ensemble_summary.json` | Rank-mean ensemble across renderers per dataset |
| `optimized_ensemble.json` | Oracle renderer-adaptive scoring |
| `calibguard_multidataset.json` | Realized FAR vs. target FAR (empirical diagnostic) |
| `fps_benchmark.json` | FPS, parameter count, and compute disclosure |
| `clip_backbone_comparison.json` | DINOv2 vs. CLIP backbone ablation |
| `ucr_results.json` | Legacy UCR aggregate (paired-99 in `ucr_canonical/`) |
| `view_disagree_sweep.json` | Cross-view disagreement scoring sweep |
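As a rough illustration of the realized-vs-target FAR diagnostic behind `calibguard_multidataset.json`: pick a threshold at the $(1 - \text{FAR})$ quantile of calibration scores, then measure the false-alarm rate actually realized on held-out nominal scores. A self-contained sketch with synthetic scores (not the actual CalibGuard implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic anomaly scores for nominal (non-anomalous) windows:
# a calibration split and a held-out split from the same distribution.
calib_scores = rng.normal(0.0, 1.0, 10_000)
heldout_scores = rng.normal(0.0, 1.0, 5_000)

target_far = 0.01
# Threshold at the (1 - target_far) quantile of the calibration scores.
thresh = np.quantile(calib_scores, 1.0 - target_far)
# Realized FAR: fraction of held-out nominal windows flagged anomalous.
realized_far = float(np.mean(heldout_scores > thresh))
print(f"target FAR {target_far:.3f} vs realized FAR {realized_far:.3f}")
```

Under distribution shift the realized FAR can drift far from the target, which is what the multi-dataset ledger is meant to expose.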

### `ucr_canonical/`

Authoritative UCR ledgers used by every UCR claim in the paper:

| File | Description |
|---|---|
| `summary.json` | 109-series VITS-AD aggregate |
| `paired_99.json` | 99-series paired comparison vs. raw Mahalanobis |
| `combined_109.json` | Per-series VITS-AD scores |
| `per_series.json` | Per-series metric breakdown |
| `eligible_list.json` | List of the 109 eligible UCR series |
| `ucr_canonical.json` | Combined manifest |

### `multiseed_scores/`

Layout: `multiseed_scores/{psm,msl,smap}/{line_plot,recurrence_plot}/seed_{42,123,456,789,2024}/`

Each leaf directory contains:

- `scores.npy` — per-window anomaly score (`float64`)
- `labels.npy` — per-window ground-truth label (`int64`, $\{0,1\}$)
- `metrics.json` — AUC-ROC, AUC-PR, best-F1, F1-PA for that seed

Model checkpoints (`best_model.pt`) are intentionally not redistributed, to keep the bundle compact; they can be regenerated from the source repository.
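The per-window arrays are enough to recompute every AUC-ROC in `metrics.json`. A sketch using the Mann-Whitney rank statistic (equivalent to `sklearn.metrics.roc_auc_score` for binary labels), shown here with synthetic stand-ins so it runs without the dataset:

```python
import numpy as np
from scipy.stats import rankdata

# In practice, load one leaf directory, e.g.:
#   scores = np.load("multiseed_scores/psm/line_plot/seed_42/scores.npy")
#   labels = np.load("multiseed_scores/psm/line_plot/seed_42/labels.npy")
# Synthetic stand-ins (anomalies score higher on average):
rng = np.random.default_rng(42)
labels = (rng.random(2_000) < 0.1).astype(np.int64)
scores = rng.normal(0.0, 1.0, 2_000) + 1.5 * labels

# AUC-ROC = P(score_anomaly > score_nominal), via ranks.
ranks = rankdata(scores)
n_pos = int(labels.sum())
n_neg = len(labels) - n_pos
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"AUC-ROC: {auc:.4f}")
```

Swapping the synthetic arrays for a real `seed_*` directory should reproduce that seed's `auc_roc` entry.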

### `sample_renderings/`

PDFs of the pipeline diagram (Figure 1), per-dataset regime gain (Figure 3a), and the regime-map scatter (Figure 4 in the supplement).

## Reproducing the paper's statistical tests

```python
import json
import numpy as np
from scipy.stats import wilcoxon

# Per-seed AUC-ROC for one dataset/renderer pair
base = "multiseed_scores/psm/line_plot"
seeds = [42, 123, 456, 789, 2024]
aucs = []
for s in seeds:
    with open(f"{base}/seed_{s}/metrics.json") as f:
        aucs.append(json.load(f)["auc_roc"])
print(f"PSM-LP mean ± std: {np.mean(aucs):.4f} ± {np.std(aucs, ddof=1):.4f}")

# Paired-99 UCR Wilcoxon (vision vs. raw Mahalanobis)
with open("ucr_canonical/paired_99.json") as f:
    paired = json.load(f)
stat, p = wilcoxon(paired["vits_ad_auc"], paired["raw_maha_auc"])
print(f"Paired-99 UCR Wilcoxon p={p:.2e}")
```
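The bootstrap confidence intervals on the multiseed runs can be sketched the same way. A percentile bootstrap over the five seed AUCs (values below are illustrative placeholders, not the ledger's numbers; with $n=5$ such intervals are crude and should be read accordingly):

```python
import numpy as np

def bootstrap_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a small sample."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(values, dtype=float)
    # Resample with replacement and take the mean of each resample.
    means = rng.choice(vals, size=(n_boot, len(vals)), replace=True).mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Illustrative AUCs only -- substitute the five values read from metrics.json.
aucs = [0.861, 0.874, 0.858, 0.869, 0.866]
lo, hi = bootstrap_ci(aucs)
print(f"mean {np.mean(aucs):.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```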

## License

MIT. All redistributed JSON ledgers, regime annotations, and rendered example PDFs are original work of the (anonymous) authors and are released under MIT alongside the source repository.

## Citation

```bibtex
@inproceedings{vitsad2026,
  title     = {{VITS-AD}: A Regime-Aware Evaluation Suite for Frozen-Vision Time-Series Anomaly Detection},
  author    = {Anonymous},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
  year      = {2026}
}
```