**Dataset viewer preview** (`selector_evaluation.json`, truncated by the viewer): task `shift_aware_adaptation_selector` with `divergence_threshold` 50, `high_regret_threshold` 0.5, `bootstrap_n` 10,000; 309 run records loaded into 12 cells spanning 4 domains (`ett_m1`, `finance`, `physionet`, `smd`) × 3 models (`chronos`, `moirai`, `moment`); per-domain shift profiles over 5 dimensions (`amplitude`, `spectral`, `acf`, `nonstationarity`, `irregularity`); 4 architecture types (`enc_dec`, `enc_only`, `any_variate`, `dec_only`); and per-strategy results, e.g. `global_best` over 4 folds and 12 cells: mean top-1 accuracy 0.417 (std 0.144, CI [0.167, 0.667]).
# TSFM-PEFT-Bench
A cross-architecture benchmark for evaluating Parameter-Efficient Fine-Tuning (PEFT) recommendation reliability in Time Series Foundation Models (TSFMs). Companion code and artifacts for the paper "TSFM-PEFT-Bench: A Cross-Architecture Benchmark for PEFT Selection in Time Series Foundation Models" (under double-blind review at NeurIPS 2026 Datasets and Benchmarks Track).
Quick metadata:
- License: Apache-2.0 (`LICENSE`)
- Croissant manifest: `tsfm_peft_bench.croissant.json` (MLCommons Croissant 1.0 with mandatory RAI fields)
- Headline scale: 882 paper-included primary-model runs (3 architectures × 4 domains × 6 main methods + rank/locus sweeps)
- Reproduction in one command: `python scripts/reproduce_paper_tables.py`
- Code repository (anonymous review): https://anonymous.4open.science/r/tsfm-peft-bench
- Dataset (Hugging Face): EvalData/tsfm-peft-bench — 972 run records + Croissant 1.0 (fetch sketch below)
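A minimal way to fetch the full artifact tree, assuming only the `huggingface_hub` client (repo id from the bullet above; file layout per the hosting table below):

```python
# Sketch: download a snapshot of the dataset repo from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="EvalData/tsfm-peft-bench", repo_type="dataset")
print(local_dir)  # contains paper_manifest.json, per-run JSONs, domain_shift_profiles.json, ...
```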
## Hosting and accessibility (NeurIPS 2026 D&B Track)
Per the NeurIPS 2026 Evaluations & Datasets hosting policy, this artifact will be hosted at:
| Asset | Platform | Notes |
|---|---|---|
| Code (frozen at submission) | Anonymous-4-Open-Science (review) → Hugging Face Spaces (camera-ready) | All scripts, configs, `src/` |
| Run manifest + headline numbers | Hugging Face Datasets `tsfm-peft-bench/runs` | `paper_manifest.json`, `paper_numbers.{json,tex}`, `selector_evaluation.json` |
| Per-run JSON files (full grid) | Hugging Face Datasets `tsfm-peft-bench/runs` | `results/expansion/{domain,rank,locus}/*.json` (~50–200 MB total) |
| Domain shift profiles | Hugging Face Datasets `tsfm-peft-bench/runs` | `domain_shift_profiles.json` |
| Croissant metadata | top-level `tsfm_peft_bench.croissant.json` | Auto-validated against MLCommons spec 1.0 |
The Croissant file is the canonical machine-readable description of the benchmark. RAI fields (limitations, biases, sensitive information, intended use, social impact, sources, preprocessing, release plan) are populated.
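For programmatic access, the manifest can be loaded with the MLCommons `mlcroissant` reference library, which validates the JSON-LD against the 1.0 spec on load (a sketch, assuming the package is installed; attribute names follow mlcroissant's public dataclasses):

```python
# Sketch: load and inspect the Croissant manifest with mlcroissant.
import mlcroissant as mlc

ds = mlc.Dataset(jsonld="tsfm_peft_bench.croissant.json")  # raises if the metadata is invalid
print(ds.metadata.name)
print([rs.name for rs in ds.metadata.record_sets])
```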
## What's in here

- `src/` — model wrappers (Chronos, MOMENT, Moirai, TimesFM), PEFT adaptations (LoRA / DoRA / IA³ / Adapter / Prefix / Head-only / Full-FT), dataset loaders, and evaluation utilities.
- `scripts/` — Hydra-based single-run trainer (`train.py`), full benchmark driver (`run_expansion.py`), analysis pipeline (`analyze_expansion_v2.py`, `reproduce_paper_tables.py`), selector (`build_selector.py`), and mechanism probes (`subspace_probe.py`, `gradient_probe.py`).
- `configs/` — Hydra YAML configs for models, adaptations, and data; zero hard-coded hyperparameters.
- `tests/` — pytest suite (100 tests) covering data loaders, adaptations, metrics, shift profiles, and analysis utilities.
- `results/` — paper artifacts (manifest, ANOVA, selector tables, paper numbers). `paper_manifest.json` is the single source of truth for paper-included runs.
- `paper_submission.tex`, `paper_appendix.tex`, `paper_supplementary.tex`, `neurips_checklist.tex` — the manuscript and supplementary materials.
## Setup

The repository targets Python 3.10–3.12.
```bash
# Recommended (exact reproduction): pin every transitive dep
python -m pip install -r requirements-lock.txt
python -m pip install -e .

# Lighter (looser bounds): top-level constraints only
python -m pip install -e ".[dev]"
```
`requirements-lock.txt` is captured by `pip freeze` from the environment that produced the released results (PyTorch 2.11, chronos-forecasting 2.2.2, uni2ts 2.0.0, momentfm @ upstream commit 38f7310a).
TimesFM is optional and conflicts with Python 3.12 (paxml/lingvo dependencies). Install it only on Python 3.10/3.11:

```bash
python -m pip install -e ".[timesfm]"
```
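Because the extra is optional, downstream code should probe for it before importing. A hypothetical guard (the shipped wrappers in `src/models/` handle this via `importlib`):

```python
# Hypothetical guard: bail out early when the optional TimesFM extra is absent.
import importlib.util

if importlib.util.find_spec("timesfm") is None:
    raise SystemExit("TimesFM extra missing: pip install -e '.[timesfm]' (Python 3.10/3.11 only)")
```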
GPU: experiments were run on 4× RTX 3090 (cluster) and 1–4× RTX 3060 nodes. Mixed precision (FP16/BF16) is always enabled; FP32 is unsupported.
## Reproducing the paper

### Headline tables
```bash
# Re-derive paper_manifest.json + paper_numbers.{json,tex} from raw runs
python scripts/reproduce_paper_tables.py \
    --input_dir results/expansion --output_dir results

# ANOVA / outlier filtering / per-architecture statistics (v2)
python scripts/analyze_expansion_v2.py

# Selector evaluation (LOOCV, 12 held-out cells)
python scripts/build_selector.py
```
These three commands regenerate every numerical claim in `paper_submission.tex` from the raw run JSON. LaTeX macros for the headline numbers are emitted to `results/paper_numbers.tex`.
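The selector CIs are bootstrap intervals (`bootstrap_n = 10,000` in `selector_evaluation.json`). A percentile-bootstrap sketch over the 12 held-out cells, with toy inputs (illustrative only; `scripts/build_selector.py` is canonical and may use a different CI construction):

```python
# Percentile-bootstrap CI for top-1 selector accuracy over held-out cells.
import numpy as np

def bootstrap_ci(hits: np.ndarray, n_boot: int = 10_000, alpha: float = 0.05, seed: int = 0):
    rng = np.random.default_rng(seed)
    resamples = hits[rng.integers(0, len(hits), size=(n_boot, len(hits)))]
    means = resamples.mean(axis=1)
    return float(np.quantile(means, alpha / 2)), float(np.quantile(means, 1 - alpha / 2))

hits = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0])  # toy top-1 hits, mean 5/12 ≈ 0.417
print(bootstrap_ci(hits))
```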
### Single-run training

```bash
python scripts/train.py model=chronos adaptation=lora data=ett_m1
python scripts/train.py model=chronos adaptation=lora data=ett_h1 \
    adaptation.rank=16 training.lr=1e-4 training.epochs=50
```
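The overrides above are standard Hydra dotted-path syntax. A minimal sketch of the entry-point shape (config name and keys are assumptions; the real groups live under `configs/`):

```python
# Sketch of a Hydra entry point in the style of scripts/train.py.
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="../configs", config_name="train", version_base=None)
def main(cfg: DictConfig) -> None:
    # CLI overrides such as `adaptation.rank=16 training.lr=1e-4` land in cfg.
    print(cfg.model, cfg.adaptation, cfg.training)

if __name__ == "__main__":
    main()
```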
### Full benchmark grid

```bash
# 3 primary models × 4 domains × 7 main methods × 5 seeds (domain mode)
python scripts/run_expansion.py \
    --models chronos,moment,moirai \
    --mode domain \
    --seeds 42,123,7,2024,3407 \
    --save_checkpoints \
    --checkpoint_dir checkpoints/expansion
```
`scripts/run_benchmark.sh` wraps the full sweep across the three modes (domain / rank / locus).
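In Python terms, the wrapper amounts to roughly the following loop (a sketch; the shell script is canonical, flags copied from the command above):

```python
# Sketch of the full sweep across the three expansion modes.
import subprocess

for mode in ("domain", "rank", "locus"):
    subprocess.run(
        ["python", "scripts/run_expansion.py",
         "--models", "chronos,moment,moirai",
         "--mode", mode,
         "--seeds", "42,123,7,2024,3407"],
        check=True,
    )
```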
### Evaluation from a checkpoint

```bash
python scripts/evaluate.py \
    --checkpoint checkpoints/expansion/chronos/<experiment_id>.pt \
    --model chronos --data ett_m1
```
Both `train.py` and `run_expansion.py` save checkpoints in a unified schema (`backbone_state_dict` + `adaptation_method` + `adaptation_config` + `prediction_length` / `context_length`). `evaluate.py` accepts either `backbone_state_dict` or the legacy `state_dict` key for backward compatibility with pre-2026-04-27 checkpoints.
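A sketch of reading that schema directly, including the legacy-key fallback (field names from the paragraph above; the path placeholder must be filled with a real run id):

```python
# Sketch: read a run checkpoint under the unified schema.
import torch

ckpt_path = "checkpoints/expansion/chronos/<experiment_id>.pt"  # substitute a real run id
ckpt = torch.load(ckpt_path, map_location="cpu")
weights = ckpt.get("backbone_state_dict") or ckpt.get("state_dict")  # legacy fallback
method = ckpt["adaptation_method"]      # e.g. "lora"
adapt_cfg = ckpt["adaptation_config"]   # method-specific hyperparameters
pred_len, ctx_len = ckpt["prediction_length"], ckpt["context_length"]
```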
## Repository conventions

- Docstrings and error messages are written in Korean; identifiers and paper-facing artifacts are in English. See `CLAUDE.md` for the full coding standards.
- All hyperparameters live in `configs/`; do not hard-code values in scripts.
- External libraries (chronos, peft, transformers, wandb) are imported dynamically via `importlib` and accessed through `Protocol` types in `src/`; direct imports in `scripts/` and `tests/` are fine (see the sketch below).
- `from __future__ import annotations` at the top of every module.
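The import convention looks roughly like this (`ForecastBackbone` and `load_chronos` are hypothetical names; the real `Protocol` types live in `src/`):

```python
# Sketch of the dynamic-import + Protocol convention.
from __future__ import annotations

import importlib
from typing import Any, Protocol

class ForecastBackbone(Protocol):
    def predict(self, context: Any, prediction_length: int) -> Any: ...

def load_chronos() -> ForecastBackbone:
    chronos = importlib.import_module("chronos")  # resolved lazily, not at module import
    return chronos.ChronosPipeline.from_pretrained("amazon/chronos-t5-small")
```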
## Data and checkpoints

`data/` is `.gitignore`d. Public datasets used:

- ETTm1: standard ETT benchmark (Zhou et al., 2021).
- Exchange-rate ("Finance"): Lai et al., 2017.
- SMD: Server Machine Dataset (Su et al., 2019); entity boundaries are preserved during splitting (`src/data/smd.py`).
- PhysioNet: subset processed in `src/data/physionet.py`; subject IDs are preserved across splits to prevent leakage (see the sketch below).
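The PhysioNet leakage guard amounts to grouping by subject before splitting. A self-contained sketch (hypothetical helper; the real logic lives in `src/data/physionet.py`):

```python
# Sketch: split records by subject id so no subject spans train/test.
import random

def split_by_subject(records: list[dict], train_frac: float = 0.8, seed: int = 42):
    rng = random.Random(seed)
    subjects = sorted({r["subject_id"] for r in records})
    rng.shuffle(subjects)
    cut = int(len(subjects) * train_frac)
    train_ids = set(subjects[:cut])
    train = [r for r in records if r["subject_id"] in train_ids]
    test = [r for r in records if r["subject_id"] not in train_ids]
    return train, test
```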
`checkpoints/` is a local symlink to NAS storage on the maintainer's machine. Downstream users should either remove the symlink and create a local directory, or override the path with `checkpoint_dir=...` / `--checkpoint_dir <path>` on the command line.
## Layout

```
src/
├── adaptation/    # LoRA / DoRA / IA3 / Adapter / Prefix / Head / Full
├── data/          # ETT, finance, SMD, PhysioNet, shift metrics
├── evaluation/    # MAE / MSE / MASE / CRPS / CKA
├── models/        # Chronos / MOMENT / Moirai / TimesFM wrappers
└── utils/         # Seeds, device, logging
scripts/
├── train.py                     # single-run Hydra entry point
├── run_expansion.py             # full benchmark grid
├── reproduce_paper_tables.py    # SoT manifest + paper_numbers regenerator
├── analyze_expansion_v2.py      # ANOVA / outlier policy
├── build_selector.py            # selector LOOCV evaluation
├── subspace_probe.py            # mechanism probe (representation)
└── gradient_probe.py            # mechanism probe (gradient flow)
configs/
├── model/         chronos.yaml | moment.yaml | moirai.yaml | timesfm.yaml
├── adaptation/    lora | dora | ia3 | adapter | prefix | head_only | full_ft
└── data/          ett_m1 | ett_h1 | finance | smd | physionet | ...
results/
├── paper_manifest.json             # SoT: 882 paper-included primary-model runs
├── paper_numbers.{json,tex}        # auto-generated LaTeX macros
├── selector_evaluation.json        # selector LOOCV results
├── expansion_analysis_canonical/   # canonical ANOVA outputs
└── expansion_analysis_v3/          # v3 (current) analysis outputs
```
## Quality gates

```bash
# Tests (100 tests, ~3 s on CPU)
pytest tests/ -v

# Lint / format
ruff check src/ scripts/ tests/
black --check src/ scripts/ tests/
isort --check-only src/ scripts/ tests/

# Static types
mypy src/
```
## Citation

Anonymous under review. A citation will be added on publication.
## License

Apache 2.0 (see `LICENSE` once added).