# OpenOneRec: An Open Foundation Model and Benchmark to Accelerate Generative Recommendation
## 📖 Introduction
**OpenOneRec** is an open-source framework designed to bridge the gap between traditional recommendation systems and Large Language Models (LLMs). While Generative Recommendation has shown promise, existing models are often confined to isolated data silos and lack general reasoning capabilities.
To address this, we introduce a unified framework that comprises:
* **RecIF-Bench**: The first holistic Recommendation Instruction-Following Benchmark, containing **100M interactions** from 200k users across heterogeneous domains (Short Video, Ads, Product).
* **OneRec-Foundation Models**: A family of models (1.7B & 8B) built on Qwen3 backbones. These models are trained on up to ~130B tokens (see the Model Zoo below), integrating collaborative signals with general semantics.
* **Full-Stack Pipeline**: We open-source our comprehensive training pipeline, including data processing, co-pretraining, and post-training, to ensure full reproducibility and facilitate scaling law research in recommendation.
## 🔥 News
* **[2026.1.1]** 🎉 **OneRec-Foundation** models (1.7B, 8B) are now available on Hugging Face!
* **[2026.1.1]** 📑 The technical report has been released.
* **[2026.1.1]** 🚀 **RecIF-Bench** dataset and evaluation scripts are open-sourced.
## 📊 RecIF-Bench
We propose **RecIF-Bench** to rigorously assess the synergy between instruction following and domain-specific recommendation. It organizes 8 distinct tasks into a four-layer capability hierarchy:
* **Layer 0: Semantic Alignment** (Item Understanding)
* **Layer 1: Fundamental Prediction** (Short Video Rec, Ad Rec, Product Rec, Label Prediction)
* **Layer 2: Instruction Following** (Interactive Rec, Label-Conditional Rec)
* **Layer 3: Reasoning** (Recommendation Explanation)
The benchmark aggregates data from three domains: **Short Video** (Content), **Ads** (Commercial), and **Product** (E-commerce).
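If the benchmark is released as a Hugging Face dataset, it should be loadable with the standard `datasets` API. The snippet below is a minimal, hypothetical sketch: the repository id, configuration name, and field names (`instruction`, `history`, `target`) are placeholders we assume for illustration, not the confirmed release layout.

```python
# Hypothetical RecIF-Bench loader; dataset id, config name, and field
# names are illustrative assumptions, not the confirmed release layout.
from datasets import load_dataset

# One configuration per task, e.g. the Layer-1 short-video recommendation task.
bench = load_dataset("OpenOneRec/RecIF-Bench", "short_video_rec", split="test")

for example in bench.select(range(3)):
    # A record is assumed to pair an instruction with the user's
    # interaction history and the ground-truth target item.
    print(example["instruction"], example["history"], example["target"])
```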
## 🤖 Model Zoo
The OneRec-Foundation series is built upon the Qwen3 architecture, enhanced with **Itemic Tokens** for modality alignment and trained via a multi-stage protocol. A minimal loading sketch follows the table.
| Model | Backbone | Parameters | Description | Link |
| :--- | :--- | :--- | :--- | :--- |
| **OneRec-Open-1.7B** | Qwen3-1.7B | 1.7B | Standard version trained on open-source data (~33B tokens) | [HuggingFace](https://huggingface.co/OpenOneRec/OneRec-1.7B) |
| **OneRec-Open-8B** | Qwen3-8B | 8B | Standard version trained on open-source data (~33B tokens) | [HuggingFace](https://huggingface.co/OpenOneRec/OneRec-8B) |
| **OneRec-Pro-1.7B** | Qwen3-1.7B | 1.7B | Scaled-up version with expanded datasets (~130B tokens) | [HuggingFace](https://huggingface.co/OpenOneRec/OneRec-1.7B-pro) |
| **OneRec-Pro-8B** | Qwen3-8B | 8B | Scaled-up version with expanded datasets (~130B tokens) | [HuggingFace](https://huggingface.co/OpenOneRec/OneRec-8B-pro) |
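Because the checkpoints sit on Hugging Face and reuse Qwen3 backbones, they should load through the standard `transformers` API. The sketch below assumes plain causal-LM usage with bf16 weights; it is an illustration, not a confirmed inference recipe (the released evaluation scripts may wrap generation differently).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenOneRec/OneRec-1.7B"  # repo id from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights, typical for Qwen3
    device_map="auto",
)

# Plain-text prompt for illustration; in practice the data pipeline would
# interleave itemic tokens into the history (see "Items as Tokens" below).
prompt = "Recommend the next short video for this user."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```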
## 🏗️ Method & Architecture
OpenOneRec reframes recommendation as a general-purpose sequence modeling problem.
### 1. Items as Tokens
To bridge the modality gap, we treat items as a distinct modality using **Itemic Tokens** derived from hierarchical vector quantization. This allows the LLM to process interaction history as a cohesive context sequence.
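As a rough illustration of how hierarchical vector quantization yields itemic tokens, the sketch below implements a generic residual quantizer: each level picks the nearest codeword and passes the remaining residual down, so one item embedding becomes a short tuple of code ids. Depth, codebook sizes, and token naming here are made up for illustration; the report's exact tokenizer may differ.

```python
import torch

def itemic_tokens(item_emb, codebooks):
    """Quantize an item embedding into a tuple of code ids (itemic tokens).

    item_emb:  (d,) item embedding.
    codebooks: list of (K, d) tensors, one per hierarchy level.
    Generic residual-quantization sketch; the report's tokenizer may differ.
    """
    residual = item_emb
    codes = []
    for codebook in codebooks:
        # Nearest codeword at this level ...
        dists = torch.cdist(residual.unsqueeze(0), codebook).squeeze(0)
        idx = int(torch.argmin(dists))
        codes.append(idx)
        # ... then quantize what remains at the next, finer level.
        residual = residual - codebook[idx]
    # e.g. [17, 203, 5] -> special tokens such as <i0_17><i1_203><i2_5>
    # (token naming here is hypothetical).
    return codes

# Toy usage: 3 levels, 256 codewords each, 64-dim embeddings.
torch.manual_seed(0)
codebooks = [torch.randn(256, 64) for _ in range(3)]
print(itemic_tokens(torch.randn(64), codebooks))
```

Mapping each code id to a dedicated vocabulary entry is what lets the LLM consume an interaction history as an ordinary token sequence.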
### 2. Training Pipeline
Our framework follows a two-phase recipe:
* **Pre-Training**: Integrates collaborative signals via Itemic-Text Alignment and Full-Parameter Co-Pretraining.
* **Post-Training**:
  * *Stage 1*: Multi-task Supervised Fine-tuning for basic instruction following.
  * *Stage 2*: On-policy Distillation to restore general reasoning performance (a minimal sketch follows this list).
  * *Stage 3*: Reinforcement Learning to enhance recommendation capabilities.
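To make Stage 2 concrete, here is a minimal sketch of one common form of on-policy distillation: the student samples its own continuations, and a reverse-KL loss pulls its per-token distribution toward the teacher's on exactly those samples. This illustrates the general technique under our own assumptions; the report's Stage 2 objective and data flow may differ.

```python
import torch
import torch.nn.functional as F

def on_policy_distill_step(student, teacher, batch, max_new_tokens=64):
    """One hypothetical on-policy distillation step (reverse KL).

    `student`/`teacher` are causal LMs sharing a tokenizer; `batch` holds
    tokenized prompts (input_ids, attention_mask).
    """
    # 1) The student samples its own continuations (on-policy data).
    with torch.no_grad():
        sequences = student.generate(
            **batch, max_new_tokens=max_new_tokens, do_sample=True
        )

    # 2) Both models score the sampled sequences token by token.
    student_logp = F.log_softmax(student(sequences).logits[:, :-1], dim=-1)
    with torch.no_grad():
        teacher_logp = F.log_softmax(teacher(sequences).logits[:, :-1], dim=-1)

    # 3) Reverse KL(student || teacher): penalize probability mass the
    #    student places where the teacher does not. For brevity, prompt
    #    positions are not masked out, which a real pipeline would do.
    loss = F.kl_div(teacher_logp, student_logp, log_target=True,
                    reduction="batchmean")
    loss.backward()
    return loss.item()
```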