---
license: other
library_name: transformers
tags:
- reasoning
- context-learning
- synthetic-data
- transformers
---

# Interplay-LM Context Pretrain Models

This repository is organized by context-mixture setting. Each top-level directory corresponds to one pretraining setting used in the context experiments.

Within each setting:

- `base/` stores the final pretraining checkpoint used to initialize RL.
- `rl/` stores the final RL checkpoints for each experiment variant.

Only inference-relevant Hugging Face files are included.
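For example, the available checkpoint directories can be listed directly from the Hub. This is a minimal exploration sketch using `huggingface_hub`; it assumes each checkpoint's files sit directly inside its directory, and the grouping logic is illustrative rather than part of the release:

```python
from huggingface_hub import list_repo_files

# Fetch the full file listing for the repo.
files = list_repo_files("Interplay-LM-Reasoning/context_pretrain")

# Each directory containing files is a valid `subfolder` for from_pretrained.
checkpoint_dirs = sorted({p.rsplit("/", 1)[0] for p in files if "/" in p})
print("\n".join(checkpoint_dirs))
```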

## Included settings

- `idzoo_0.9zoo_0.1teacher`
- `idzoo_0.99zoo_0.01teacher`
- `idzoo_0.999zoo_0.001teacher`

## Load

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Interplay-LM-Reasoning/context_pretrain"
# Point `subfolder` at the checkpoint you want, e.g. one RL variant of the
# 0.99/0.01 context-mixture setting:
subdir = "idzoo_0.99zoo_0.01teacher/rl/contextzoo_0.99zoo_0.01teacher_process_strict"

tokenizer = AutoTokenizer.from_pretrained(repo_id, subfolder=subdir)
model = AutoModelForCausalLM.from_pretrained(repo_id, subfolder=subdir)
```
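
Once loaded, the checkpoint can be used for standard causal generation. A minimal sketch; the prompt and decoding parameters below are illustrative, not taken from the experiments:

```python
# Greedy decoding of a short reasoning-style prompt (illustrative only).
inputs = tokenizer("Question: What is 17 * 24? Answer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```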

## Citation

```bibtex
@misc{zhang2025interplaypretrainingmidtrainingrl,
      title={On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models},
      author={Charlie Zhang and Graham Neubig and Xiang Yue},
      year={2025},
      eprint={2512.07783},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.07783},
}
```