TimeOmni-1-7B: Generalized Time Series Reasoning Model
"We present TimeOmni-1, the first generalized, unified model for time series reasoning. It first injects temporal priors through supervised fine-tuning. Then, reinforcement learning with task-grounded rewards guides the model beyond mimicking priors toward robust reasoning. Experiments show that TimeOmni-1 achieves top-tier performance while preserving the general reasoning ability of the base model. Finally, we demonstrate that joint training across diverse reasoning tasks yields mutual gains, supporting a βtrain-once, use-across-tasksβ paradigm for future time series reasoning models."
Task Illustration
Method
TimeOmni-1 is a generalized reasoning model for time series. Pretrained LLMs often lack temporal priors because they are rarely exposed to time series during pretraining. To address this, we use a two-stage training pipeline: (1) supervised fine-tuning (SFT) to inject temporal priors and anchor the model in a temporal knowledge space, and (2) reinforcement learning (RL) with task-grounded rewards (see Reward Evaluation in the figure above) to improve robustness and reasoning quality.
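To make the second stage concrete, here is a minimal sketch of what a task-grounded reward could look like. This is an illustration, not the released training code: the function name, task labels, and the MAE-based score for numeric tasks are our assumptions. The key property from the paper's description is that a malformed response earns no task reward, so the policy is pushed toward both valid formatting and task correctness.

```python
# Hypothetical task-grounded reward sketch (names and task labels are
# illustrative, not from the released TimeOmni-1 code).
def task_grounded_reward(response_valid: bool, task: str,
                         prediction, target) -> float:
    """Return a scalar reward in [0, 1] for one rollout."""
    if not response_valid:
        # A malformed response gets zero reward regardless of content,
        # so the policy must first learn to produce parseable answers.
        return 0.0
    if task == "classification":
        # Label-style tasks: exact-match accuracy as the reward.
        return 1.0 if prediction == target else 0.0
    if task == "forecasting":
        # Numeric tasks: reward shrinks smoothly as the error grows
        # (an assumed MAE-based shaping, one of several reasonable choices).
        mae = abs(prediction - target)
        return 1.0 / (1.0 + mae)
    raise ValueError(f"unknown task: {task}")
```

Under this sketch, the RL stage optimizes expected reward over rollouts, so formatting validity and task quality are improved jointly rather than via separate losses.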
Benchmarks
Note: All metrics below are computed only on valid responses. "-" indicates a success rate (SR) below 10%; in such cases the corresponding metric is omitted because too few valid responses remain for a statistically meaningful estimate.
| Model | Task1 ID (ACC↑/SR) | Task1 OOD (ACC↑/SR) | Task2 ID (ACC↑/SR) | Task2 OOD (ACC↑/SR) | Task3 ID (MAE↓/SR) | Task3 OOD (MAE↓/SR) | Task4 ID (ACC↑/SR) | Task4 OOD (ACC↑/SR) |
|---|---|---|---|---|---|---|---|---|
| Time Series Language Model | ||||||||
| Time-MQA Llama3-8B | 32.2/- | 29.5/1.4 | 25.1/- | 32.6/0.4 | 30.1/12.0 | 44.3/13.3 | 31.2/11.6 | 37.2/15.8 |
| Time-MQA Mistral-7B-v0.3 | 15.1/- | 21.5/0.2 | 27.8/- | 22.1/0.0 | 8.4/5.4 | 50.2/36.1 | 4.0/10.0 | 52.2/47.3 |
| Time-MQA Qwen2.5-7B | 25.0/19.76 | 14.0/12.2 | 37.5/- | 22.7/6.5 | 29.5/23.8 | 33.0/58.0 | 30.5/26.4 | 32.0/44.3 |
| ChatTS | -/- | 6.0/0.0 | -/- | 6.9/0.0 | 18.2/5.8 | 30.1/27.1 | 18.6/11.1 | 26.7/27.1 |
| ChatTime-7B-Chat | 18.2/11.0 | 29.8/12.7 | -/- | -/- | 14.47/100.0 | 154.55/100.0 | -/0.0 | -/0.0 |
| ITFormer-7B | 43.8/100.0 | 47.5/100.0 | 15.0/47.0 | 14.6/42.0 | 29.55/96.0 | 230.04/100.0 | 25.0/100.0 | 41.7/100.0 |
| OpenTSLM-llama-3.2-3b-ecg-flamingo | -/5.0 | -/3.2 | 1.6/23.0 | 3.3/26.5 | -/0.2 | -/0.0 | 17.8/98.4 | 16.2/98.9 |
| Time Series Reasoning Model | ||||||||
| Time-R1 | 30.9/94.0 | 34.0/92.5 | 30.2/53.8 | 31.4/48.9 | 17.61/38.7 | -/6.3 | 27.8/95.7 | 32.2/93.1 |
| Ours | ||||||||
| TimeOmni-1 | 90.7/97.5 | 87.7/98.3 | 69.3/99.8 | 64.0/99.8 | 14.30/93.8 | 145.53/82.3 | 47.9/100.0 | 58.9/100.0 |
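The note above the table can be made precise with a small sketch of the metric computation (our naming, not the paper's evaluation code): SR is the fraction of all responses that are valid, accuracy is computed only over the valid subset, and the accuracy is omitted when SR falls below 10%.

```python
# Illustrative metric computation for one task split; the function name
# and data layout are our assumptions.
def table_metrics(results):
    """results: list of (is_valid, is_correct) pairs, one per response.

    Returns (accuracy_percent_or_None, success_rate_percent).
    """
    total = len(results)
    valid = [r for r in results if r[0]]
    # SR counts valid responses over ALL responses.
    sr = 100.0 * len(valid) / total if total else 0.0
    if sr < 10.0:
        # Too few valid responses: the accuracy estimate is unreliable,
        # so it is omitted (shown as "-" in the table).
        return None, sr
    # Accuracy is computed only over the valid responses.
    acc = 100.0 * sum(1 for _, ok in valid if ok) / len(valid)
    return acc, sr
```

So a cell like "90.7/97.5" reads: of all responses, 97.5% were valid, and 90.7% of those valid responses were correct.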
Usage
This repository hosts the model weights for TimeOmni-1. For installation, usage instructions, and further documentation, please visit our GitHub repository.
License
TimeOmni-1 is licensed under the Apache 2.0 license. It is fine-tuned from Qwen2.5-7B-Instruct, which is also released under Apache 2.0.
Citation
@article{guan2025timeomni,
  title={TimeOmni-1: Incentivizing complex reasoning with time series in large language models},
author={Guan, Tong and Meng, Zijie and Li, Dianqi and Wang, Shiyu and Yang, Chao-Han Huck and Wen, Qingsong and Liu, Zuozhu and Siniscalchi, Sabato Marco and Jin, Ming and Pan, Shirui},
journal={arXiv preprint arXiv:2509.24803},
year={2025}
}