---
language:
- en
license: mit
size_categories:
- 1K<n<10K
---

> Companion data for the NeurIPS 2026 Evaluations & Datasets Track submission
> *"TSFMI: A Baseline-Controlled Evaluation Protocol for Time-Series Foundation
> Model Representations."* The code (anonymous) lives at
> https://anonymous.4open.science/r/TSFMI.

## Why this dataset exists

Probing time-series foundation models (TSFMs) is hard because high probe accuracy may reflect probe capacity rather than encoded knowledge. TSFMI addresses this with **explicit non-model controls** (hand-crafted features, raw signal, random projection, ROCKET) evaluated under the same canonical 60/20/20 split, the same sklearn estimator, the same 5 seeds, and the same bootstrap CI as the model probe.

The synthetic datasets in this repository cover six standard temporal properties **plus five hard variants**, under which TSFM representation claims can be evaluated against the controls.

## Configurations (11 tasks)

| Config | Type | Classes / Range | Description |
|---|---|---|---|
| `trend` | classification | 3 (up / down / flat) | linear slope ± noise |
| `seasonality` | regression | period ∈ {8, 16, 32, 64} | sinusoid + noise; label = period |
| `frequency` | classification | 8 frequency bins | discrete sinusoidal frequency bands |
| `stationarity` | classification | 2 (stationary / non-stationary) | Gaussian noise vs. random walk |
| `anomaly` | classification | 2 (normal / anomaly) | single ±5σ point spike — **kurtosis is a sufficient statistic** |
| `change_point` | classification | 2 (none / has-CP) | mean+variance shift at midpoint |
| `*_hard` (5) | classification | varies | structured background + subtle anomalies / weak slopes / overlapping harmonics / variance drift / mild CP |

Each config has 1000 sequences of length 512, partitioned 60/20/20 (train/val/test = 600/200/200). All data is generated deterministically with `seed=42` (data) and `split_seed=0` (partition), matching the canonical artefacts in the paper.
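The actual generators live in the GitHub repository; the sketch below only illustrates the *kind* of process behind two of the configurations (`trend` and `stationarity`). Function names, slope magnitudes, and noise scales here are illustrative assumptions, not the repo's API.

```python
import numpy as np

def make_trend_example(rng: np.random.Generator, cls: int, length: int = 512):
    """Illustrative stand-in for the `trend` generator: a linear slope
    (up / down / flat for cls 0 / 1 / 2) plus Gaussian noise."""
    slope = {0: 0.01, 1: -0.01, 2: 0.0}[cls]  # illustrative magnitudes
    t = np.arange(length)
    return slope * t + rng.standard_normal(length), cls

def make_stationarity_example(rng: np.random.Generator, stationary: bool,
                              length: int = 512):
    """Illustrative stand-in for the `stationarity` generator:
    Gaussian noise (stationary) vs. its cumulative sum (random walk)."""
    noise = rng.standard_normal(length)
    x = noise if stationary else np.cumsum(noise)
    return x, int(stationary)

rng = np.random.default_rng(42)  # the published data is generated with seed=42
x, y = make_trend_example(rng, cls=0)  # an upward-trend window
```

These toy generators will not byte-for-byte reproduce the published sequences; for the canonical artefacts, load the configs from the Hub as shown in the quick start.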
## Quick start

```python
from datasets import load_dataset

ds = load_dataset("EvalData/TSFMI", "anomaly")
print(ds)
# DatasetDict({
#     train: 600, validation: 200, test: 200
# })
print(ds["train"][0]["sequence"][:8], ds["train"][0]["label"])
```

Each row contains:

- `sequence`: list[float] of length 512 (the time-series window)
- `label`: int (classification) or float (seasonality regression)
- `seed`: int (data-generation seed; always 42 here)
- `split_seed`: int (split seed used to partition this row; 0 for the canonical artefacts)

## Reproduction recipe

The same 60/20/20 + 5-seed + bootstrap protocol used in the paper is implemented in [the GitHub repository](https://anonymous.4open.science/r/TSFMI):

```bash
curl -L -o TSFMI.zip "https://anonymous.4open.science/api/repo/TSFMI/zip"
unzip TSFMI.zip && cd TSFMI
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt && pip install -e ".[dev]"

# CPU smoke test (<10 min) — reproduces the headline HC anomaly = 0.858 cell
make smoke

# Full canonical pipeline (~48 A100-hours)
make extract-representations
make reproduce-all
```

## Headline result reproducible from this dataset

Under the canonical 60/20/20 protocol with sklearn `LogisticRegression` and a bootstrap 95% CI over 5 seeds:

- An 8-D **hand-crafted feature vector** reaches **0.858** test accuracy on the canonical `anomaly` configuration.
- A **single kurtosis feature** alone reaches **0.859**.
- A **single max-absolute-magnitude feature** alone reaches **0.907**.
- The best of seven pre-trained TSFMs reaches **0.753** (TimesFM); all other TSFMs (MOMENT, Chronos, PatchTST, GPT4TS, Timer, Moirai) score 0.50–0.73.

This **inversion** is what motivates the baseline-controlled discipline of TSFMI and is documented in §3.3 and Appendix A.8 of the paper.

## License

MIT. The synthetic data is fully procedurally generated; there is no human-derived or scraped content.
Real-world datasets used elsewhere by the TSFMI evaluation pipeline (ETTh1, Weather, Electricity, Traffic, Exchange Rate, UCR) are **not redistributed here** and remain under their respective original licenses; see the GitHub `LICENSE` file for details.

## Citation

```bibtex
@misc{tsfmi2026,
  title        = {{TSFMI}: A Baseline-Controlled Evaluation Protocol for Time-Series Foundation Model Representations},
  author       = {Anonymous Authors},
  howpublished = {Anonymous submission to the NeurIPS 2026 Evaluations \& Datasets Track},
  year         = {2026},
  url          = {https://anonymous.4open.science/r/TSFMI}
}
```

## Responsible AI notes

- **Data collection**: synthetic procedural generation, no human subjects, no scraping.
- **Limitations**: each generator instantiates one statistically simple test bed; the canonical anomaly task is kurtosis-trivial by design (the `realistic_anomaly` variant in the GitHub repo bounds this).
- **Biases**: none — purely synthetic with no demographic content.
- **Personal/sensitive info**: none.
- **Use cases**: probing TSFM internal representations, calibrating new probing protocols against simple non-model baselines.
- **Misuse cases**: not intended as a production anomaly detector; do not interpret the canonical anomaly task as evidence that any model "detects anomalies" in any operational sense.
- **Synthetic data indicator**: 100% synthetic.

The full Croissant 1.0 + RAI metadata is provided as `CROISSANT.json` in this repository.
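As a closing illustration of the headline kurtosis-only baseline: the sketch below regenerates a toy version of the anomaly task locally (so it runs offline) and fits a one-feature logistic regression. The generator, split sizes, and resulting accuracy here are illustrative assumptions; the paper's 0.859 figure comes from the published splits and the full 5-seed bootstrap protocol in the GitHub repo.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_example(anomalous: bool, length: int = 512) -> np.ndarray:
    # toy stand-in for the canonical generator: Gaussian noise,
    # with a single +/-5-sigma point spike for anomalous windows
    x = rng.standard_normal(length)
    if anomalous:
        x[rng.integers(length)] = rng.choice([-5.0, 5.0])
    return x

def excess_kurtosis(x: np.ndarray) -> float:
    # Fisher excess kurtosis, computed with plain numpy
    z = x - x.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

sequences = [make_example(anomalous=(i % 2 == 1)) for i in range(400)]
y = np.array([i % 2 for i in range(400)])

# the single hand-crafted feature: excess kurtosis of each 512-point window
X = np.array([[excess_kurtosis(x)] for x in sequences])

# same estimator family as the canonical protocol; 60% train as in 60/20/20
n_train = 240
clf = LogisticRegression().fit(X[:n_train], y[:n_train])
acc = clf.score(X[n_train:], y[n_train:])
print(f"kurtosis-only accuracy on this toy re-generation: {acc:.3f}")
```

Swapping `excess_kurtosis` for `lambda x: abs(x).max()` gives the max-absolute-magnitude control from the headline table; any representation claim on this config should be read against such one-line baselines.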