---
configs:
  - config_name: default
    data_files:
      - split: test
        path: meta_data.jsonl
license: cc-by-nc-sa-4.0
task_categories:
  - time-series-forecasting
  - question-answering
language:
  - en
tags:
  - time series
  - timeseries
  - audio
  - benchmark
  - time series Reasoning
  - time series Classification
  - time series QA
  - time series Anomaly Detection
  - classification
  - anomaly detection
size_categories:
  - 10K<n<100K
pretty_name: 'SciTS: Scientific Time Series Understanding and Generation with LLMs'
---

# SciTS: Scientific Time Series Understanding and Generation with LLMs

This repository contains the official dataset for *SciTS: Scientific Time Series Understanding and Generation with LLMs* (ICLR 2026). SciTS is a large-scale benchmark designed to evaluate the capabilities of large language models on complex scientific time series data. It spans 12 scientific disciplines and 43 distinct tasks, comprising 54,023 instances.

## Dataset Structure

The benchmark is organized into a main `meta_data.jsonl` file, a `process` directory for handling restricted datasets, and 38 individual dataset folders. Each folder is named using the convention `Domain-DatasetName-Scene-Task`.

```
├── process/
│   ├── process_ETT.py
│   ├── process_iNaturalist.py
│   ├── infer_template.py
│   ├── eval.py
│   └── requirements.txt
├── Domain-DatasetName-Scene-Task_1/
│   ├── raw_input_data/
│   └── raw_gt_data/ (for generation tasks)
├── Domain-DatasetName-Scene-Task_2/
│   └── raw_input_data/
...
├── Domain-DatasetName-Scene-Task_38/
│   ├── raw_input_data/
│   └── raw_gt_data/
└── meta_data.jsonl
```
- `process/`: Utility scripts, including `process_ETT.py` and `process_iNaturalist.py` for processing restricted datasets that cannot be released directly due to license restrictions, `infer_template.py` as an inference template, `eval.py` for evaluation, and `requirements.txt` for dependency installation.
- Dataset folders: Each of the 38 folders contains the raw time series data for a specific dataset. `raw_input_data/` holds the input signals, while `raw_gt_data/` (present only for generation tasks) holds the ground-truth output signals.
- `meta_data.jsonl`: A JSON Lines file containing metadata for every instance in the benchmark; each line corresponds to one data sample. A minimal loading sketch is shown after this list.
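
As a quick illustration (not part of the official tooling), the sketch below streams samples from `meta_data.jsonl`; the local path `/path/to/SciTS` is a placeholder for wherever you downloaded the repository.

```python
# Minimal sketch: stream benchmark instances from meta_data.jsonl.
# The local directory below is a placeholder, not an official path.
import json
from pathlib import Path

scits_dir = Path("/path/to/SciTS")

with open(scits_dir / "meta_data.jsonl", encoding="utf-8") as f:
    for line in f:
        sample = json.loads(line)
        # Each line is one benchmark instance: its task IDs, a unique id,
        # the raw-data format, and pointers to the time series files.
        print(sample["task_id"], sample["id"], sample["data_type"])
```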

## Dataset Collection

The 38 released datasets are listed below:

| Domain | Dataset Folder Name | Task ID |
|---|---|---|
| Astronomy | Astronomy-GWOSC_GW_Event-Gravitational_wave-Anomaly_detection+Event_localisation | ASU01, ASG02 |
| Astronomy | Astronomy-LEAVES-Light_curve-Classification | ASU03 |
| Earth Science | Earth_Science-STEAD-Earthquake-Anomaly_detection+Event_localisation | EAU01, EAG02 |
| Bioacoustics | Bioacoustics-Powdermill-Birds_vocalisation-Classification | BIU01 |
| Bioacoustics | Bioacoustics-MarmAudio-Marmoset_vocalisation-Classification | BIU03 |
| Meteorology | Meteorology-TS_MQA-Weather-Anomaly_detection | MEU01 |
| Meteorology | Meteorology-TIMECAP-Rainfall-Anomaly_detection | MEU02 |
| Meteorology | Meteorology-MT_bench-Temperature-Forecasting | MEG03 |
| Meteorology | Meteorology-MT_bench-Temperature-MCQ | MEU04 |
| Economics | Economics-FinMultiTime-Stock_closing_price-Forecasting | ECG01 |
| Economics | Economics-MT_bench-Stock_price-Forecasting | ECG02 |
| Economics | Economics-MT_bench-Stock-MCQ | ECU03 |
| Neuroscience | Neuroscience-MDD-Depressive_disorder-Anomaly_detection | NEU01 |
| Neuroscience | Neuroscience-TUEV-EEG_pattern-Classification | NEU02 |
| Neuroscience | Neuroscience-TS_MQA-EEG_signal-Forecasting | NEG03 |
| Neuroscience | Neuroscience-TS_MQA-EEG_signal-Imputation | NEG04 |
| Neuroscience | Neuroscience-WBCIC_SHU-Motor_imagery-Classification | NEU05 |
| Neuroscience | Neuroscience-Sleep-Sleep_staging-Classification | NEU06 |
| Energy | Energy-NewsForecast-Electronic_load-Forecasting | ENG01 |
| Energy | Energy-TextETT-Sensor_signal_trend-Synthesis | ENG03 |
| Energy | Energy-TS_MQA-Comprehensive_electricity-Forecasting | ENG04 |
| Energy | Energy-TS_MQA-Comprehensive_electricity-Imputation | ENG05 |
| Physiology | Physiology-PTB_XL-ECG_status-Classification | PHU01 |
| Physiology | Physiology-TS_MQA-Physiological_signal-Forecasting | PHG02 |
| Physiology | Physiology-TS_MQA-Physiological_signal-Imputation | PHG03 |
| Physiology | Physiology-TS_MQA-ECG-Anomaly_detection | PHU04 |
| Physiology | Physiology-TS_MQA-Gait_freezing-Anomaly_detection | PHU05 |
| Physiology | Physiology-TS_MQA-Human_activity-Classification | PHU06 |
| Urbanism | Urbanism-NewsForecast-Traffic_flow-Forecasting | URG01 |
| Urbanism | Urbanism-TS_MQA-Pedestrian_flow-Forecasting | URG02 |
| Urbanism | Urbanism-TS_MQA-Pedestrian_flow-Imputation | URG03 |
| Urbanism | Urbanism-TS_MQA-Traffic_flow-Anomaly_detection | URU04 |
| Urbanism | Urbanism-MetroTraffic-Traffic_volume-Forecasting | URG05 |
| Manufacturing | Manufacturing-CWRU-Bearings_fault_location+Bearings_fault_size-Classification | MFU01, MFU02 |
| Manufacturing | Manufacturing-MIMII_Due-Machine_malfunction-Anomaly_detection | MFU03 |
| Radar | Radar-RadSeg-Coding_scheme-Classification | RAU01 |
| Radar | Radar-RadarCom-Modes_and_modulation-Classification | RAU02 |
| Math | Math-Chaotic-Chaotic_system-Forecasting | MAG01 |

## `meta_data.jsonl` Format

Each line in this file is a JSON object with the following structure, providing all necessary metadata to load and use a data sample.

```jsonc
{
    "task_id": ["TASK_ID"], // List of task IDs associated with this sample (e.g., ["ASU03"] or ["ASU01", "ASG02"] for merged datasets)
    "id": "DATASET_ID", // Unique identifier of this sample within the dataset
    "data_type": "csv"/"npy"/"wav"/"flac", // File format of the raw time series data
    "input_ts": {
        "num_channel": int, // Number of channels (dimensions) in the input signal
        "channel_detail": [], // List of channel names, empty if none
        "path": "raw_input_data/sample_001_input.npy",
        "length": int, // Length of the input time series
        "timestamps": [], // Auxiliary timestamp information, empty if none
        "fs": int // Sampling frequency in Hz
    },
    "input_text": "INPUT_TEXT", // Textual prompt or task instruction provided as input
    "gt_text": "GT_TEXT", // Ground truth textual answer (for understanding tasks; empty for generation tasks)
    "gt_ts": {
        "path": "raw_gt_data/sample_001_output.npy",
        "length": int // Length of the ground truth time series
    },
    "gt_result": { ... }, // Structured ground truth result; format varies by task type (see below)
    "meta_data": {}  // Additional metadata from the original data source
}
```
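
For illustration only, a loader that dispatches on `data_type` might look like the sketch below. It assumes the `path` fields resolve relative to the sample's dataset folder and uses `soundfile` for audio; both choices are assumptions rather than part of the official tooling.

```python
# Hedged sketch: load one sample's raw input series by its data_type.
# Path resolution and the soundfile dependency are assumptions.
import numpy as np
import pandas as pd
import soundfile as sf  # pip install soundfile

def load_input_ts(dataset_dir: str, sample: dict) -> np.ndarray:
    path = f"{dataset_dir}/{sample['input_ts']['path']}"
    dtype = sample["data_type"]
    if dtype == "npy":
        return np.load(path)
    if dtype == "csv":
        return pd.read_csv(path).to_numpy()
    if dtype in ("wav", "flac"):
        audio, fs = sf.read(path)  # fs should agree with input_ts["fs"]
        return audio
    raise ValueError(f"unsupported data_type: {dtype}")
```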

### `gt_result` Field Format

The structure of the `gt_result` field varies by task type. This field provides the original ground truth for metric computation.

#### 1. MCQ

"gt_result": {
    "answer": "TEXT" // The correct textual answer
}

#### 2. Synthesis, Forecasting, Imputation

"gt_result": {
    "num_channel": int, // Number of channels (dimensions) in the ground truth signal
    "channel_detail": [], // List of channel names, empty if none
    "timestamps": [] // Auxiliary timestamp information, empty if none
}
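
For generation tasks the ground-truth series itself lives under `gt_ts`; `gt_result` only carries channel metadata. The official metrics are computed by `process/eval.py`; purely as an illustration, a minimal MSE check could look like this (assuming an `.npy` ground truth and a NumPy prediction `pred`):

```python
# Hedged sketch: compare a generated series against the gt_ts reference.
# Assumes an .npy ground truth; official metrics live in process/eval.py.
import numpy as np

def generation_mse(pred: np.ndarray, dataset_dir: str, sample: dict) -> float:
    gt = np.load(f"{dataset_dir}/{sample['gt_ts']['path']}")
    if pred.shape != gt.shape:
        raise ValueError("prediction must match gt_ts in length and channels")
    return float(np.mean((pred - gt) ** 2))
```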

#### 3. Classification

For the CWRU dataset, which involves two classification sub-tasks, the category keys in `class_list` and `gt_class` are `"diameter"` and `"position"` respectively. For all other classification tasks, the category key is `"default"`.

"gt_result": {
    "class_list": {
        "default": ["class_A", "class_B"], // List of candidate classes for each category
        ...
    },
    "gt_class": {
        "default": ["GT_CLASS"], // Ground truth class label for each category
        ...
    }
}
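
Because `gt_class` maps each category to a list of ground-truth labels, a per-category exact-match accuracy is straightforward to compute. The sketch below is illustrative only; the `predictions` layout (sample id -> {category: predicted class}) is an assumption, not the interface of `process/eval.py`.

```python
# Hedged sketch: per-category exact-match accuracy for classification tasks.
# The `predictions` structure is a hypothetical layout, not the official one.
def classification_accuracy(samples: list, predictions: dict) -> dict:
    correct, total = {}, {}
    for sample in samples:
        gt_class = sample["gt_result"]["gt_class"]  # e.g. {"default": ["class_A"]}
        for category, labels in gt_class.items():
            pred = predictions[sample["id"]].get(category)
            total[category] = total.get(category, 0) + 1
            correct[category] = correct.get(category, 0) + int(pred in labels)
    return {c: correct[c] / total[c] for c in total}
```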

#### 4. Anomaly Detection

"gt_result": {
    "contain": Boolean // Boolean indicating if the required event is present
}

#### 5. Anomaly Detection + Event Localisation

For the GWOSC GW Event and STEAD datasets, each of which includes both an Anomaly Detection task and an Event Localisation task, the `gt_result` field uses the following combined format:

"gt_result": {
    "contain": Boolean, // Boolean indicating if the required event is present
    "start_time": int // The event index if contain is true, else null
}
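
As a hedged sketch of how this combined format could be scored, the helper below checks detection and, when an event exists, localisation within a tolerance window; the `tol` parameter (in samples) is hypothetical and not the official criterion of `process/eval.py`.

```python
# Hedged sketch: joint detection + localisation check for one sample.
# The tolerance `tol` (in samples) is a hypothetical parameter.
def detection_localisation(sample: dict, pred_contain: bool, pred_start, tol: int = 100):
    gt = sample["gt_result"]
    detected = pred_contain == gt["contain"]
    if not gt["contain"]:
        return detected, None  # no event, nothing to localise
    localised = (detected and pred_start is not None
                 and abs(pred_start - gt["start_time"]) <= tol)
    return detected, localised
```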

## Handling Restricted Datasets

Due to license restrictions, the ETT (ENG02) and iNaturalist (BIU02) datasets are not directly included in this repository. To use them, you must download the original data and run the provided processing scripts.

### Step 1: Download the Data

Obtain the original data from its official sources: the ETT CSV file (`ETTh1.csv`) and the iNaturalist test set.

### Step 2: Install Dependencies

Before running the processing scripts, install the required Python packages:

```bash
pip install -r process/requirements.txt
```

### Step 3: Run the Processing Script

Place the downloaded files into a local directory. Then, from the root of this repository, run the corresponding script to process the data into the standard benchmark format.

- For ETT:

  ```bash
  python process/process_ETT.py --data_path /path/to/your/ETTh1.csv
  ```

- For iNaturalist:

  ```bash
  python process/process_iNaturalist.py --data_folder /path/to/your/iNaturalist/test
  ```

This will generate the `Energy-ETT-Transformer_sensor_signal-Forecasting` and `Bioacoustics-INaturalist-Animal_vocalisation-Classification` folders along with their `raw_input_data` and `raw_gt_data` subdirectories, as well as the processed test files.

## Baseline Inference and Evaluation

The `process` directory also includes scripts for running inference and evaluating the results.

### Inference

`process/infer_template.py` is a template for the inference script. Implement the `initialize_model` function; inference can then be run with:

```bash
python process/infer_template.py --scits_dir /path/to/scits_dir --output_dir /path/to/output_dir
```

### Evaluation

`process/eval.py` is the evaluation script. Run:

```bash
python process/eval.py evaluate --infer_dir /path/to/infer_dir
```

The evaluation results will be saved to `/path/to/infer_dir/results/`.

## Citation

If you use the SciTS benchmark, please cite the paper:

```bibtex
@inproceedings{wu2026scits,
    title={Sci{TS}: {S}cientific Time Series Understanding and Generation with {LLM}s},
    author={Wen Wu and Ziyang Zhang and Liwei Liu and Xuenan Xu and Jimin Zhuang and Ke Fan and Qitan Lv and Junlin Liu and Chen Zhang and Zheqi Yuan and Siyuan Hou and Tianyi Lin and Kai Chen and Bowen Zhou and Chao Zhang},
    booktitle={The Fourteenth International Conference on Learning Representations},
    year={2026},
    url={https://openreview.net/forum?id=5YXccEP6uc}
}
```