m-serious committed · Commit eb4c230 · verified · 1 Parent(s): 4df8e2b

Update README.md


<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/UGKaze7CvjVvTrbDowgTZ.png" alt="Output Examples" width="600">
</center>

<div align="center">
<a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"> 📊 <strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1">🚀 <strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508">📖 <strong>Paper</strong></a>
</div>

# Time-R1 Model Series

This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation.

These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench).
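
To inspect the training data alongside the checkpoints, the dataset can typically be pulled with the Hugging Face `datasets` library. A minimal sketch; the exact configuration and split names are not specified here, so check the dataset page:

```python
from datasets import load_dataset

# Load Time-Bench; if the dataset defines named configs or splits,
# pass them explicitly (see the dataset page for the exact names).
ds = load_dataset("ulab-ai/Time-Bench")
print(ds)
```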

## Model Checkpoints

We provide several checkpoints representing different stages of the Time-R1 training process:

### Stage 1: Temporal Comprehension Models

These models are trained to develop foundational temporal understanding.

* **[Time-R1-S1P1](https://huggingface.co/ulab-ai/Time-R1-S1P1):** Checkpoint after Phase 1 of Stage 1 training.
  * *Focus: Foundational logic on easy timestamp inference tasks.*
* **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training.
  * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.*
* **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint $\theta_1$, after Phase 3 (full Stage 1 training).
  * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.*
* **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model $\theta_1'$, trained for Stage 1 without the dynamic reward design.
  * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.*

### Stage 2: Future Event Time Prediction Model

This model builds upon Stage 1 capabilities to predict future event timings.

* **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint $\theta_2$, after Stage 2 training.
  * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.*

Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions on the architecture, training methodology, and comprehensive evaluations.

## How to Use

For loading and using these models, please refer to the example scripts and documentation provided in our [GitHub repository](https://github.com/ulab-uiuc/Time-R1).
Typically, you can load the models using the Hugging Face `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example for one of the models (replace with the specific model name)
model_name = "ulab-ai/Time-R1-Theta1" # Or your specific Hugging Face model path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# See the GitHub repository for task-specific prompt formats and evaluation scripts
```
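
For a quick end-to-end check, a minimal generation sketch follows. The prompt, dtype, and decoding settings below are illustrative assumptions; the exact prompt templates Time-R1 expects are defined in the GitHub repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ulab-ai/Time-R1-Theta1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; the official task-specific templates live in the repository.
prompt = (
    "Infer the most likely year and month of the following event: "
    "The James Webb Space Telescope releases its first full-color images."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```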

## Citations
```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```
Files changed (1): README.md (+15 -3)

```diff
@@ -1,3 +1,15 @@
----
-license: apache-2.0
----
+---
+license: apache-2.0
+datasets:
+- ulab-ai/Time-Bench
+base_model:
+- Qwen/Qwen2.5-3B-Instruct
+tags:
+- temporal-reasoning
+- reinforcement-learning
+- large-language-models
+paperswithcode:
+  arxiv_id: 2505.13508
+model_index:
+- name: Time-R1-S1P1
+---
```