---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
datasets:
- openai/gsm8k
- HuggingFaceH4/MATH-500
- HuggingFaceH4/aime_2024
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
---
## MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs
📄 Paper link: [Arxiv preprint](https://arxiv.org/abs/2507.02851)
💻 Github link: [Training and evaluation code](https://github.com/purbeshmitra/MOTIF)
🤗 Link to the trained models: [Hugging Face collection](https://huggingface.co/collections/purbeshmitra/motif-paper-models-686a2f36407bb88f750eef75)
- **Algorithm**: MOTIF
- **Training data**: [GSM8K](https://huggingface.co/datasets/openai/gsm8k)
- **Base model**: [unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct-bnb-4bit)
The InftyThink architecture, shown below, enables multi-round thinking, extending LLM reasoning beyond the model's context size.
<p align="center">
<img src="assets/multiround.png" alt="Alt Text" width="750">
</p>
In this work, we propose a GRPO-based training method for such a system: the accuracy reward is computed by rolling out full multi-round trajectories and then applied to the first-round inference outcomes. This is depicted as follows:
<p align="center">
<img src="assets/multiround_grpo.png" alt="Alt Text" width="750">
</p>
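To make the reward flow concrete, here is a minimal sketch of the idea (this is not the repository's training code; `rollout_remaining_rounds` and `extract_boxed` are hypothetical helpers): each first-round completion is rolled out through the remaining rounds, and the binary accuracy of the final boxed answer is assigned back to that first-round output.

```python
# Illustrative sketch only: helper names are hypothetical,
# not the actual MOTIF training code.
import re

def extract_boxed(text: str) -> str | None:
    """Pull the final \\boxed{...} answer out of a completion, if present."""
    match = re.search(r"\\boxed\{([^}]*)\}", text)
    return match.group(1).strip() if match else None

def accuracy_reward(first_round_completions, question, gold_answer,
                    rollout_remaining_rounds):
    """Reward each first-round completion by rolling out the remaining
    rounds and checking the final boxed answer against the ground truth."""
    rewards = []
    for completion in first_round_completions:
        # Continue rounds 2..3 from this first-round progress
        # (hypothetical helper standing in for the multi-round rollout).
        final_text = rollout_remaining_rounds(question, completion)
        predicted = extract_boxed(final_text)
        # Binary accuracy reward, applied at the first round of inference.
        rewards.append(1.0 if predicted == gold_answer else 0.0)
    return rewards
```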
## Results
Our results are shown below:
<p align="center">
<img src="assets/motif_results.png" alt="Alt Text" width="750">
</p>
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base model and attach the MOTIF LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "purbeshmitra/MOTIF")
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")

# The multi-line system prompt must be a triple-quoted string in Python.
SYSTEM_PROMPT = """You are a helpful assistant. When the user asks a question, you solve it in 3 rounds. In each round, you first think about the reasoning process of answering and then provide the user with a detailed progress about it. The reasoning process and the progress are enclosed within <reasoning> </reasoning> and <answer> </answer> tags, respectively. Therefore, you follow the strict format:
<reasoning> reasoning process here </reasoning> <answer> detailed progress here </answer>
The User provides this detailed progress as additional context in the next round. You then respond again with further thinking and further progress. When the User says that the current round is the final (third) round, you provide an answer inside the answer tags. You also enclose a final answer in third round in the box: \\boxed{}. Only this boxed final answer is used for evaluation."""
```
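A minimal sketch of the intended three-round loop with this prompt (the chat-template and generation calls use the standard `transformers` API; the round-to-round feedback wording here is our own illustration and may differ from the evaluation harness):

```python
# Example question taken from GSM8K.
question = ("Natalia sold clips to 48 of her friends in April, and then she sold "
            "half as many clips in May. How many clips did Natalia sell altogether "
            "in April and May?")

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": question},
]

for round_idx in range(1, 4):
    # Apply the Qwen chat template and generate this round's reasoning + progress.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids=input_ids, max_new_tokens=1024)
    reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    if round_idx < 3:
        # Feed the progress back as context; announce when the final round arrives.
        note = ("This is the final (third) round." if round_idx == 2
                else "Continue to the next round.")
        messages.append({"role": "user", "content": note})

print(messages[-1]["content"])  # the final round contains the \boxed{} answer
```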
## Citation
If you find our work useful, consider citing it as:
```bibtex
@article{mitra2025motif,
  title={MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs},
  author={Mitra, Purbesh and Ulukus, Sennur},
  journal={arXiv preprint arXiv:2507.02851},
  year={2025}
}
``` |