---
license: cc-by-nc-4.0
task_categories:
  - question-answering
  - multiple-choice
language:
  - en
tags:
  - time-series
  - diagnostic-reasoning
  - sensor-data
  - multivariate
  - anomaly-detection
  - industrial
pretty_name: SenTSR-Bench
size_categories:
  - n<1K
---

# SenTSR-Bench: A Real-World Multivariate Time-Series Diagnostic Reasoning Benchmark

SenTSR-Bench is a de-identified, real-world multivariate time-series benchmark for diagnostic reasoning, released as part of the paper:

> **SenTSR-Bench: Thinking with Injected Knowledge for Time-Series Reasoning**
> Zelin He, Boran Han, Xiyuan Zhang, Shuai Zhang, Haotian Lin, Qi Zhu, Haoyang Fang, Danielle C. Maddix, Abdul Fatir Ansari, Akash Chandrayan, Abhinav Pradhan, Bernie Wang, Matthew Reimherr
> *AISTATS 2026*

**Project Page:** [https://zlhe0.github.io/SenTSR-Bench-Website/](https://zlhe0.github.io/SenTSR-Bench-Website/)

## Overview

SenTSR-Bench contains **110 multivariate time series** paired with **330 human-curated diagnostic multiple-choice questions**, collected from real-world warehouse machine monitoring systems. Each time series records three sensor channels — **Acceleration**, **Velocity**, and **Temperature** — sampled hourly.

Unlike existing benchmarks that rely on LLM-generated annotations or synthetic data, SenTSR-Bench provides **human-verified diagnostic annotations** grounded in real sensor signals, with a **multi-stage question structure** that reflects the reasoning depth required in real-world maintenance scenarios.

## Question Types

Each time series is paired with three progressively harder diagnostic questions:

| Stage | Question Type | Description |
|-------|--------------|-------------|
| 1 | **What Happened** | Identify the key anomalous pattern in the time series |
| 2 | **How Happened** | Infer the most likely root cause behind the observed anomaly |
| 3 | **Suggested Fix** | Propose the best corrective action for the diagnosed issue |

## Dataset Construction

The benchmark was constructed through a three-stage curation pipeline:

1. **Signal selection and preprocessing:** 110 multivariate sensor streams were filtered from an initial pool of over 2,000 candidates, selecting those with clear anomalous patterns tied to actionable diagnostic contexts. All signals were standardized and fully de-identified.
2. **Human annotation:** Domain experts annotated anomalous windows with descriptions of observed patterns, plausible root causes, and candidate corrective actions using only sanitized time-series segments.
3. **Evaluation query construction:** Multiple-choice questions were generated following the multi-stage structure. Ground-truth answers are paired with distractors sampled from other anomaly-type clusters, ensuring that solving each task requires both correct recognition and reasoning.

## Data Format

The dataset is provided as a JSON file (`SenTSR-Bench_evaluation.json`) containing a list of 330 entries. Each entry has the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `timeseries` | `list[list[float]]` | 3 channels (Acceleration, Velocity, Temperature), each a list of hourly readings |
| `cols` | `list[str]` | Channel names: `["Acceleration", "Velocity", "Temperature"]` |
| `question` | `str` | Full question text including preamble and the diagnostic query |
| `question_type` | `str` | One of `what_happened`, `how_happened`, `suggested_fix` |
| `answer` | `str` | The correct answer text |
| `options` | `list[str]` | Four multiple-choice options |
| `correct_index` | `int` | Index (0-based) of the correct option in the `options` list |
| `attributes` | `list[str]` | Ground-truth diagnostic attributes |
| `ability_types` | `list[str]` | Reasoning ability tags; each is one of `MCQ_obs`, `MCQ_cause`, or `MCQ_fix` |

Entries are grouped in consecutive triplets: entries 0-2 share the same time series (with one question per type), entries 3-5 share the next time series, and so on.
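
The triplet structure described above can be verified programmatically. The sketch below assumes only the field names from the table; the inline `entries` list is a toy example mirroring the schema, not real benchmark data:

```python
EXPECTED_TYPES = {"what_happened", "how_happened", "suggested_fix"}

def validate_triplets(entries):
    """Check that entries come in consecutive triplets that share one
    time series, with exactly one question of each type per triplet.
    Returns the number of distinct time series covered."""
    assert len(entries) % 3 == 0, "entry count must be a multiple of 3"
    for i in range(0, len(entries), 3):
        triplet = entries[i:i + 3]
        # All three questions in a triplet reference the same series.
        assert all(e["timeseries"] == triplet[0]["timeseries"] for e in triplet)
        # One question of each type, in any order within the triplet.
        assert {e["question_type"] for e in triplet} == EXPECTED_TYPES
    return len(entries) // 3

# Minimal synthetic example mirroring the schema (illustrative values only).
series = [[0.1, 0.2], [1.0, 1.1], [20.0, 20.5]]
entries = [
    {"timeseries": series, "question_type": t}
    for t in ("what_happened", "how_happened", "suggested_fix")
]
print(validate_triplets(entries))
```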

## Usage

```python
import json

with open("SenTSR-Bench_evaluation.json") as f:
    data = json.load(f)

# Each group of 3 consecutive entries shares the same time series
for i in range(0, len(data), 3):
    ts = data[i]["timeseries"]  # 3 channels
    cols = data[i]["cols"]      # ["Acceleration", "Velocity", "Temperature"]

    for j in range(3):
        entry = data[i + j]
        print(f"Type: {entry['question_type']}")
        print(f"Answer: {entry['options'][entry['correct_index']]}")
```
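
A per-question-type accuracy breakdown is often the most informative way to report results on the three-stage structure. The sketch below assumes a `predict` callable that maps an entry to a 0-based option index; the `baseline` stub and the toy `entries` list are illustrative stand-ins, not part of the dataset:

```python
from collections import defaultdict

def evaluate(entries, predict):
    """Return accuracy per question_type for a predictor that maps an
    entry dict to a 0-based option index."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for entry in entries:
        qtype = entry["question_type"]
        total[qtype] += 1
        if predict(entry) == entry["correct_index"]:
            correct[qtype] += 1
    return {q: correct[q] / total[q] for q in total}

# Toy entries mirroring the schema (illustrative values only).
entries = [
    {"question_type": "what_happened", "correct_index": 0,
     "options": ["a", "b", "c", "d"]},
    {"question_type": "how_happened", "correct_index": 2,
     "options": ["a", "b", "c", "d"]},
]
baseline = lambda entry: 0  # trivial always-first-option baseline
print(evaluate(entries, baseline))  # {'what_happened': 1.0, 'how_happened': 0.0}
```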

## Citation

```bibtex
@inproceedings{He2026SenTSRBench,
  title     = {SenTSR-Bench: Thinking with Injected Knowledge for Time-Series Reasoning},
  author    = {He, Zelin and Han, Boran and Zhang, Xiyuan and Zhang, Shuai and Lin, Haotian and Zhu, Qi and Fang, Haoyang and Maddix, Danielle C. and Ansari, Abdul Fatir and Chandrayan, Akash and Pradhan, Abhinav and Wang, Bernie and Reimherr, Matthew},
  booktitle = {Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)},
  year      = {2026}
}
```