# VITS-AD Evaluation Suite
Companion artifact for the NeurIPS 2026 Evaluations & Datasets (E&D) Track submission VITS-AD: A Regime-Aware Evaluation Suite for Frozen-Vision Time-Series Anomaly Detection.
This dataset is not a new corpus. It bundles the evaluation outputs produced by the VITS-AD pipeline and the raw-space Mahalanobis baseline so that future work can:
- Reproduce paper tables and statistical tests without re-running the full vision pipeline.
- Run paired comparisons (Wilcoxon, paired-99 UCR) directly on the per-window scores.
- Audit the regime classification (amplitude vs. structural) against the underlying evidence artifacts.
The submission is double-blind; this dataset card is anonymous. Source
code is at https://github.com/evaldataset/VITS-AD (reviewer-routed via
anonymous.4open.science).
## Contents

| Folder | Purpose | Size |
|---|---|---|
| `regime_labels/` | Per-dataset regime annotation + classifier features | <1 KB |
| `ledgers/` | JSON ledgers underlying main-paper claims | 64 KB |
| `ucr_canonical/` | UCR 109/99 aggregated and per-series metrics | 168 KB |
| `multiseed_scores/` | Per-window scores + labels for 5 seeds × {LP, RP} × {PSM, MSL, SMAP}, no model weights | 9.8 MB |
| `sample_renderings/` | Pipeline diagram and regime-gain figures | 1 MB |
Total: ~11 MB.
## What this is for (E&D Track scope)
The submission's contribution is benchmark analysis and evaluation methodology, not a new dataset. We therefore distribute:
- The regime axis (amplitude vs. structural) along which vision rendering does and does not pay off.
- The paired-99 UCR comparison that demonstrates Wilcoxon $p<10^{-7}$ in favour of the vision pipeline on the structural univariate regime.
- The per-seed scores that allow re-running paired Wilcoxon and bootstrap confidence intervals on the multiseed PSM/MSL/SMAP runs.
- The calibration and FPS ledgers that back the compute-disclosure and CalibGuard tables in the paper supplement.
We do not redistribute the raw benchmark datasets (SMD, PSM, MSL, SMAP, UCR Anomaly Archive). License and download paths for those upstream benchmarks are listed in the paper's Asset Credits table.
## Files

### regime_labels/regime_labels.json

Per-dataset regime label, channel count, raw vs. VITS-AD AUC-ROC, and the five classifier features. The accompanying notes record the in-sample classifier accuracy ($90.1\%$), the majority-class baseline ($88.1\%$), and the leave-one-dataset-out CV collapse to chance: the regime axis is descriptive, not a deployable predictor.
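A quick way to audit the regime axis is to tabulate the raw-vs-VITS-AD AUC-ROC gain per regime from this file. A minimal sketch — note the field names (`regime`, `raw_auc_roc`, `vits_ad_auc_roc`) are hypothetical placeholders; check the actual keys in `regime_labels.json` before use:

```python
import json
from collections import defaultdict


def gain_by_regime(entries):
    """Mean (VITS-AD - raw) AUC-ROC gain per regime label.

    Field names below are placeholders; adjust to the actual JSON keys.
    """
    gains = defaultdict(list)
    for e in entries:
        gains[e["regime"]].append(e["vits_ad_auc_roc"] - e["raw_auc_roc"])
    return {regime: sum(v) / len(v) for regime, v in gains.items()}


# Usage (path from this card):
# with open("regime_labels/regime_labels.json") as f:
#     print(gain_by_regime(json.load(f)))
```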
### ledgers/

| File | Backed claim |
|---|---|
| `improved_ensemble_results.json` | SMD 28-entity macro AUC-ROC for VITS-AD vs. raw Mahalanobis |
| `multiseed_results.json` | $n=5$ seed mean ± std for PSM/MSL/SMAP × {LP, RP} |
| `multiseed_ensemble_summary.json` | Rank-mean ensemble across renderers per dataset |
| `optimized_ensemble.json` | Oracle renderer-adaptive scoring |
| `calibguard_multidataset.json` | Realized FAR vs. target FAR (empirical diagnostic) |
| `fps_benchmark.json` | FPS, parameter count, and compute disclosure |
| `clip_backbone_comparison.json` | DINOv2 vs. CLIP backbone ablation |
| `ucr_results.json` | Legacy UCR aggregate (paired-99 in `ucr_canonical/`) |
| `view_disagree_sweep.json` | Cross-view disagreement scoring sweep |
### ucr_canonical/

Authoritative UCR ledgers used by every UCR claim in the paper:

| File | Description |
|---|---|
| `summary.json` | 109-series VITS-AD aggregate |
| `paired_99.json` | 99-series paired comparison vs. raw Mahalanobis |
| `combined_109.json` | Per-series VITS-AD scores |
| `per_series.json` | Per-series metric breakdown |
| `eligible_list.json` | List of the 109 eligible UCR series |
| `ucr_canonical.json` | Combined manifest |
### multiseed_scores/

Layout: `multiseed_scores/{psm,msl,smap}/{line_plot,recurrence_plot}/seed_{42,123,456,789,2024}/`

Each leaf directory contains:

- `scores.npy`: per-window anomaly score (float64)
- `labels.npy`: per-window ground-truth label (int64, $\{0,1\}$)
- `metrics.json`: AUC-ROC, AUC-PR, best-F1, F1-PA for that seed
Model checkpoints (`best_model.pt`) are intentionally not redistributed to keep the bundle compact; they can be regenerated from the source repo.
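The per-seed metrics can be cross-checked directly from the distributed arrays. A minimal sketch, assuming scikit-learn is installed (the directory layout is taken from this card):

```python
# Recompute a seed's AUC metrics from scores.npy / labels.npy and
# compare against the bundled metrics.json.
import json
from pathlib import Path

import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score


def recompute_metrics(leaf_dir):
    """AUC-ROC and AUC-PR recomputed from a leaf directory's arrays."""
    leaf = Path(leaf_dir)
    scores = np.load(leaf / "scores.npy")  # float64 per-window scores
    labels = np.load(leaf / "labels.npy")  # int64 {0, 1} per-window labels
    return {
        "auc_roc": roc_auc_score(labels, scores),
        "auc_pr": average_precision_score(labels, scores),
    }


if __name__ == "__main__":
    leaf = "multiseed_scores/psm/line_plot/seed_42"
    recomputed = recompute_metrics(leaf)
    with open(Path(leaf) / "metrics.json") as f:
        reported = json.load(f)
    for k, v in recomputed.items():
        print(f"{k}: recomputed {v:.4f}, reported {reported[k]:.4f}")
```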
### sample_renderings/
PDFs of the pipeline diagram (Figure 1), per-dataset regime gain (Figure 3a), and the regime-map scatter (Figure 4 in the supplement).
## Reproducing the paper's statistical tests

```python
import json

import numpy as np
from scipy.stats import wilcoxon

# Multiseed mean ± std for the PSM line-plot runs
base = "multiseed_scores/psm/line_plot"
seeds = [42, 123, 456, 789, 2024]
aucs = []
for s in seeds:
    with open(f"{base}/seed_{s}/metrics.json") as f:
        aucs.append(json.load(f)["auc_roc"])
print(f"PSM-LP mean ± std: {np.mean(aucs):.4f} ± {np.std(aucs, ddof=1):.4f}")

# Paired-99 UCR Wilcoxon (vision vs. raw Mahalanobis)
with open("ucr_canonical/paired_99.json") as f:
    paired = json.load(f)
stat, p = wilcoxon(paired["vits_ad_auc"], paired["raw_maha_auc"])
print(f"Paired-99 UCR Wilcoxon p={p:.2e}")
```
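The bootstrap confidence intervals over the multiseed runs can be approximated from the per-window scores. A minimal sketch, assuming scikit-learn; the resampling scheme (i.i.d. over windows) is our illustrative choice and may differ from the paper's exact procedure:

```python
# Percentile bootstrap CI for AUC-ROC, resampling windows with replacement.
import numpy as np
from sklearn.metrics import roc_auc_score


def bootstrap_auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Return a (lo, hi) percentile bootstrap CI for AUC-ROC."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(scores), len(scores))
        if labels[idx].min() == labels[idx].max():
            continue  # skip degenerate resamples containing a single class
        stats.append(roc_auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi


# Usage (paths from this card):
# scores = np.load("multiseed_scores/psm/line_plot/seed_42/scores.npy")
# labels = np.load("multiseed_scores/psm/line_plot/seed_42/labels.npy")
# print(bootstrap_auc_ci(scores, labels))
```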
## License
MIT. All redistributed JSON ledgers, regime annotations, and rendered example PDFs are original work of the (anonymous) authors and are released under MIT alongside the source repository.
## Citation

```bibtex
@inproceedings{vitsad2026,
  title     = {{VITS-AD}: A Regime-Aware Evaluation Suite for Frozen-Vision Time-Series Anomaly Detection},
  author    = {Anonymous},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track},
  year      = {2026}
}
```