---
license: apache-2.0
datasets:
- THU-KEG/LongWriter-Zero-RLData
base_model:
- Qwen/Qwen2.5-32B
tags:
- reinforcement-learning
- writing
- Long Context
language:
- en
- zh
pipeline_tag: text-generation
library_name: transformers
---
# LongWriter-Zero ✍️ — Mastering Ultra-Long Text Generation via Reinforcement Learning
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THU-KEG/LongWriter-Zero-RLData" target="_blank">HF Dataset</a> • 📃 <a href="https://arxiv.org/abs/2506.18841" target="_blank">Paper</a>
</p>

## 🔍 Table of Contents
- [🚀 LongWriter-Zero](#longwriter_zero)
- [📊 Benchmarks & Evaluation](#evaluation)
- [⚡ Quick Start](#quick_start)
- [📝 Citation](#citation)
<a name="longwriter_zero"></a>
## 🚀 LongWriter-Zero
**LongWriter-Zero** is a *purely reinforcement learning (RL)-based* large language model capable of generating coherent passages exceeding **10,000 tokens**.
Built on the **Qwen2.5-32B** base model, it is trained as follows:
- **Continual pretraining on 30 billion tokens** of long-form books and technical reports to strengthen fundamental writing capabilities;
- Application of **Group Relative Policy Optimization (GRPO)** with a composite reward function (a rough sketch of how these signals can be combined is given at the end of this section):
- *Length Reward Model (RM)* enforces the desired output length,
- *Writing RM* scores fluency, coherence, and helpfulness,
- *Format RM* ensures strict adherence to the `<think>…</think><answer>…</answer>` structure, and also detects repeated content to avoid redundancy;
- A dedicated prompting strategy that encourages models to *explicitly reflect* before answering, thereby improving structural planning and fine-grained length control.
The resulting model, **LongWriter-Zero-32B**, matches or surpasses the performance of 100B-scale models in ultra-long-form generation.
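
The model card does not spell out the exact reward formulation, so the following is only a minimal sketch of how the three signals above could be wired together. The regex format check, the n-gram repetition penalty, and the weights are illustrative assumptions, and `length_score` / `writing_score` stand in for the outputs of the learned Length and Writing reward models.

```python
import re

# Hypothetical composite reward, for illustration only.
THINK_ANSWER_RE = re.compile(r"^<think>.+?</think>\s*<answer>.+?</answer>\s*$", re.DOTALL)

def format_reward(text: str) -> float:
    """1.0 if the output follows the <think>…</think><answer>…</answer> protocol, else 0.0."""
    return 1.0 if THINK_ANSWER_RE.match(text.strip()) else 0.0

def repetition_penalty(text: str, n: int = 8) -> float:
    """Fraction of duplicated n-grams, a naive stand-in for the redundancy check."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 0))]
    return 0.0 if not ngrams else 1.0 - len(set(ngrams)) / len(ngrams)

def composite_reward(text: str, length_score: float, writing_score: float) -> float:
    """Weighted sum of the three signals; the 0.3/0.5/0.2 weights are made up for this sketch."""
    format_score = format_reward(text) - repetition_penalty(text)
    return 0.3 * length_score + 0.5 * writing_score + 0.2 * format_score
```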
<a name="evaluation"></a>
## 📊 Benchmarks & Evaluation
LongWriter-Zero’s effectiveness is demonstrated on two fronts: automatic scoring on **WritingBench** and **Arena-Write**, and a **human-in-the-loop win-rate study** for pairwise quality comparison.
---
### 📝 WritingBench & Arena-Write Results

> WritingBench (scale 1–10) and Arena-Write (Elo) performance of different LLMs.
---
### 🏆 Win-Rate Results

> Donut charts showing win/tie/loss proportions against six baselines (left) and aggregated human evaluation (right).
**Summary:** LongWriter-Zero achieves the highest automatic WritingBench score among open models and secures dominant win-rates in pairwise GPT-4.1 evaluations, confirming its superior quality in ultra-long-form generation while maintaining efficiency.
<a name="quick_start"></a>
## ⚡ Quick Start (HF generate)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "THU-KEG/LongWriter-Zero-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Write a 500-word story."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=2048,
temperature=0.6,
do_sample=True,
stop_strings=["<|user|>", "<|endoftext|>", "</answer>"],
tokenizer=tokenizer
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
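Because the model wraps its reasoning in `<think>…</think>` and the final text in `<answer>…</answer>`, and `</answer>` is used as a stop string above, the decoded response may still contain the think block and an opening `<answer>` tag without its closing tag. The helper below is one simple way to pull out just the final answer; it is a convenience sketch, not part of the official inference code.

```python
import re

def extract_answer(response: str) -> str:
    """Drop the <think>…</think> block and unwrap the <answer> content, if present."""
    text = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
    # The closing </answer> tag is optional because generation may stop right before it.
    match = re.search(r"<answer>(.*?)(?:</answer>|$)", text, flags=re.DOTALL)
    return (match.group(1) if match else text).strip()

print(extract_answer(response))
```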
*Note: LongWriter-Zero uses a slightly different tokenizer and chat template from the original Qwen2.5-32B-Instruct model.*
## ⚡ Quick Start (SGlang)
The snippet below shows how to format prompts with LongWriter-Zero’s `<think> … </think><answer> … </answer>` protocol and call the model through an SGlang-powered endpoint supporting streaming responses.
```python
import json, requests, re
def format_prompt_with_template(prompt):
    # Chinese instruction template, used when the prompt contains CJK characters
    base_format_zn = r"用户与助手之间的对话。用户提供一个写作/通用任务,助手完成它。助手首先在脑海中深入思考写作/回答过程,然后向用户提供最终的书面作品。助手应进行全面而深入的规划,确保写作/通用任务的每个方面都详细且结构合理。如果写作要求存在任何不确定性或歧义,助手应反思,向自己提出澄清性问题,并探索多种写作方式,以确保最终作品达到最高质量标准。由于写作是一个既富有创造性又需要结构性的任务,助手应从多个角度进行分析,考虑连贯性、清晰度、风格、语气、受众和目的,等等因素。此外,助手还应对作品进行审查和优化,以增强其表达效果。写作思考过程和最终的书面作品分别用 <think> </think> 和 <answer> </answer> 标签包裹,如下所示:<think>详细的写作规划和结构设计,可能包括头脑风暴、大纲制定、风格选择、受众适配、反思以及质量检查等等。</think> <answer>经过充分优化和润色的最终书面作品。</answer> <|用户|>: {question} <|助手|>:"
    # English instruction template
    base_format_en = r"A conversation between the user and the assistant. The user provides a writing/general task, and the assistant completes it. The assistant first deeply thinks through the writing/answering process in their mind before providing the final written work to the user. The assistant should engage in comprehensive and in-depth planning to ensure that every aspect of the writing/general task is detailed and well-structured. If there is any uncertainty or ambiguity in the writing request, the assistant should reflect, ask themselves clarifying questions, and explore multiple writing approaches to ensure the final output meets the highest quality standards. Since writing is both a creative and structured task, the assistant should analyze it from multiple perspectives, considering coherence, clarity, style, tone, audience, purpose, etc.. Additionally, the assistant should review and refine the work to enhance its expressiveness. The writing thought process and the final written work should be enclosed within <think> </think> and <answer> </answer> tags, respectively, as shown below: <think>A comprehensive strategy for writing that encompasses detailed planning and structural design—including brainstorming, outlining, style selection, audience adaptation, self-reflection, quality assurance, etc..</think> <answer>The final written work after thorough optimization and refinement.</answer> <|user|>: {question} <|assistant|>:"
    # Choose the template based on whether the prompt contains Chinese characters
    base_format = base_format_zn if re.search(r'[\u4e00-\u9fff]', prompt) else base_format_en
    formatted_prompt = base_format.format(question=prompt)
    return formatted_prompt
prompt = "XXXX" # ← replace with your writing task
data = {
"model": "LongWriter-Zero-32B",
"prompt": format_prompt_with_template(prompt),
"temperature": 0.6,
"top_p": 0.95,
"max_tokens": 15500,
"stop": ["<|user|>", "<|endoftext|>", "</answer>"],
"stream": True,
}
# SGlang Gateway (example)
response = requests.post(
"http://XXXX:9999/v1/completions", # ← replace with your IP
json=data,
headers={"Content-Type": "application/json"},
timeout=1200,
stream=True,
)
# Stream the completion and print it as it arrives
for chunk in response.iter_lines():
    if chunk and chunk.startswith(b"data:"):
        if chunk == b"data: [DONE]":
            break
        payload = json.loads(chunk[5:])
        print(payload["choices"][0]["text"], end="", flush=True)
```
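As an alternative to printing chunk by chunk, the streamed pieces can be accumulated into one string and the `<answer>` span extracted afterwards, in the same spirit as the helper shown in the HF example above. This is an optional pattern, not something prescribed by the model card.

```python
# Alternative to the loop above: collect the stream first, then keep only the <answer> span.
chunks = []
for chunk in response.iter_lines():
    if chunk and chunk.startswith(b"data:"):
        if chunk == b"data: [DONE]":
            break
        payload = json.loads(chunk[5:])
        chunks.append(payload["choices"][0]["text"])

full_text = "".join(chunks)
answer = full_text.split("<answer>", 1)[-1].removesuffix("</answer>").strip()
print(answer)
```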
<a name="citation"></a>
## 📝 Citation
```bibtex
@misc{wu2025longwriterzeromasteringultralongtext,
title={LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning},
author={Yuhao Wu and Yushi Bai and Zhiqiang Hu and Roy Ka-Wei Lee and Juanzi Li},
year={2025},
eprint={2506.18841},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.18841},
}
```