OpenOneRec committed
Commit b254947 · verified · 1 Parent(s): f956faa

Upload 2 files

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +126 -3
  3. assets/framework.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+assets/framework.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,126 @@
- ---
- license: apache-2.0
- ---
<div align="center">
<h1>OpenOneRec</h1>
<p align="center">
<strong>An Open Foundation Model and Benchmark to Accelerate Generative Recommendation</strong>
</p>

<p align="center">
<a href="https://huggingface.co/Onerec">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-OneRec-ffc107?color=ffc107&logoColor=white" />
</a>
<a href="https://github.com/OnerecLM/OpenOneRec">
<img alt="GitHub Code" src="https://img.shields.io/badge/GitHub-OpenOneRec-black?logo=github" />
</a>
<a href="">
<img alt="Paper" src="https://img.shields.io/badge/Paper-ArXiv-b31b1b?logo=arxiv" />
</a>
<a href="#license">
<img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-green" />
</a>
</p>
</div>
<br>

## 📖 Introduction

**OpenOneRec** is an open-source framework designed to bridge the gap between traditional recommendation systems and Large Language Models (LLMs). While generative recommendation has shown promise, existing models often struggle with isolated data silos and a lack of reasoning capability.

To address this, we introduce a unified framework that comprises:
* **RecIF-Bench**: the first holistic Recommendation Instruction-Following Benchmark, containing **100M interactions** from 200k users across heterogeneous domains (Short Video, Ads, Goods).
* **OpenOneRec-Foundation Models**: a family of models (1.7B & 8B) built on the Qwen backbone and trained on hundreds of billions of tokens, integrating collaborative signals with general semantics.
* **Full-Stack Pipeline**: we open-source our complete training pipeline, covering data processing, co-pretraining, and post-training, to ensure full reproducibility and facilitate scaling-law research in recommendation.
## 🔥 News

* **[2025.xx.xx]** 🎉 **OpenOneRec-Foundation** models (1.7B, 8B) are now available on Hugging Face!
* **[2025.xx.xx]** 📑 The technical report [OpenOneRec Technical Report] has been released.
* **[2025.xx.xx]** 🚀 **RecIF-Bench** dataset and evaluation scripts are open-sourced.
## 📊 RecIF-Bench

We propose **RecIF-Bench** to rigorously assess the synergy between instruction following and domain-specific recommendation. It organizes 9 distinct tasks into a four-layer capability hierarchy:

* **Layer 0: Semantic Alignment** (Item Understanding)
* **Layer 1: Fundamental Prediction** (Short Video Rec, Ad Rec, Goods Rec, Label Prediction)
* **Layer 2: Instruction Following** (Interactive Rec, Label-Conditional Rec)
* **Layer 3: Reasoning** (User Summarization, Recommendation Explanation)

The benchmark aggregates data from three domains: **Short Video** (content), **Ads** (commercial), and **Goods** (e-commerce).
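The nine tasks and four layers above can be written down as a small lookup table. This is purely illustrative; the constant name `RECIF_BENCH_LAYERS` is ours, not part of the benchmark release:

```python
# Four-layer capability hierarchy of RecIF-Bench, as listed above.
# The constant name RECIF_BENCH_LAYERS is illustrative, not an official API.
RECIF_BENCH_LAYERS = {
    0: ("Semantic Alignment", ["Item Understanding"]),
    1: ("Fundamental Prediction",
        ["Short Video Rec", "Ad Rec", "Goods Rec", "Label Prediction"]),
    2: ("Instruction Following", ["Interactive Rec", "Label-Conditional Rec"]),
    3: ("Reasoning", ["User Summarization", "Recommendation Explanation"]),
}

# The layer sizes sum to the benchmark's 9 distinct tasks.
num_tasks = sum(len(tasks) for _, tasks in RECIF_BENCH_LAYERS.values())
print(num_tasks)  # 9
```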
## 🤖 Model Zoo

The OpenOneRec-Foundation series is built upon the Qwen architecture, enhanced with **Itemic Tokens** for modality alignment and trained via a multi-stage protocol.

| Model | Backbone | Parameters | Description | Link |
| :--- | :--- | :--- | :--- | :--- |
| **OneRec-Open-1.7B** | Qwen3-1.7B | 1.7B | Standard version trained on open-source data (~100B tokens) | [HuggingFace](https://huggingface.co/Onerec) |
| **OneRec-Open-8B** | Qwen3-8B | 8B | Standard version trained on open-source data (~100B tokens) | [HuggingFace](https://huggingface.co/Onerec) |
| **OneRec-Pro-1.7B** | Qwen3-1.7B | 1.7B | Enhanced version with proprietary tokens | *Coming Soon* |
| **OneRec-Pro-8B** | Qwen3-8B | 8B | Enhanced version with proprietary tokens | *Coming Soon* |
## 🏗️ Method & Architecture

OpenOneRec reframes recommendation as a general-purpose sequence modeling paradigm.

### 1. Items as Tokens
To bridge the modality gap, we treat items as a distinct modality using **Itemic Tokens** derived from hierarchical vector quantization. This allows the LLM to process interaction history as a cohesive context sequence.
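As a concrete illustration of how hierarchical vector quantization can map a continuous item embedding to a short sequence of discrete tokens, here is a minimal residual-quantization sketch in NumPy. The codebook shapes, token format, and function name are assumptions for illustration, not the released tokenizer:

```python
import numpy as np

def to_itemic_tokens(embedding, codebooks):
    """Residual (hierarchical) VQ sketch: at each level, snap the current
    residual to its nearest codebook entry and emit that entry's index as
    a discrete token. The <item_l{level}_{index}> format is hypothetical."""
    residual = np.asarray(embedding, dtype=np.float64)
    tokens = []
    for level, codebook in enumerate(codebooks):
        distances = np.linalg.norm(codebook - residual, axis=1)
        index = int(np.argmin(distances))
        tokens.append(f"<item_l{level}_{index}>")
        residual = residual - codebook[index]  # next level refines the remainder
    return tokens

# Toy setup: 3 quantization levels, 256 codes per level, 32-dim embeddings.
rng = np.random.default_rng(0)
dim, levels, codebook_size = 32, 3, 256
codebooks = [rng.normal(size=(codebook_size, dim)) for _ in range(levels)]
tokens = to_itemic_tokens(rng.normal(size=dim), codebooks)
print(tokens)  # one discrete token per quantization level
```

Each successive level quantizes what the previous levels missed, so coarse semantics land in the first token and finer distinctions in later ones; the resulting token strings can then be spliced into the LLM vocabulary.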
### 2. Training Pipeline
Our framework utilizes the following recipe:
* **Pre-Training**: integrates collaborative signals via Itemic-Text Alignment and mixed-domain Co-Pretraining.
* **Post-Training**:
  * *Stage 1*: Cold-Start SFT for basic instruction following.
  * *Stage 2*: Alternating On-Policy Distillation & SFT to balance general reasoning and recommendation performance.
  * *Stage 3*: Recommendation-oriented Reinforcement Learning (Rec-RL).
<div align="center">
<img src="assets/framework.png" width="90%" alt="OpenOneRec Overall Framework" />
<br>
<em>Figure: The Overall Framework of OpenOneRec.</em>
</div>
## 📈 Performance

### Results on RecIF-Bench
OpenOneRec-Foundation achieves **state-of-the-art (SOTA)** results across RecIF-Bench tasks, significantly outperforming baselines such as LC-Rec and TIGER.

| Task | Metric | TIGER | LC-Rec | **OneRec-Pro-8B** |
| :--- | :--- | :--- | :--- | :--- |
| **UserDoc** | Recall@32 | 0.0132 | 0.0180 | **0.0369** |
| **Ads Rec** | Recall@32 | 0.0581 | 0.0723 | **0.0964** |
| **Goods Rec** | Recall@32 | 0.0283 | 0.0416 | **0.0538** |

### Cross-Domain Transferability
On the **Amazon Benchmark** (10 datasets), OpenOneRec demonstrates strong zero-shot/few-shot transfer, achieving an average **26.8% improvement** in Recall@10 over the second-best method.
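For reference, the Recall@K metric reported above is conventionally computed per user as the fraction of that user's held-out relevant items recovered in the top-K recommendations. The helper below is our own sketch of this standard definition, not the benchmark's evaluation script:

```python
def recall_at_k(recommended, relevant, k):
    """Fraction of the user's relevant items appearing in the top-k
    recommendations; returns 0.0 when the user has no relevant items."""
    if not relevant:
        return 0.0
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / len(relevant)

# Toy example: two relevant items, one recovered within the top 3.
print(recall_at_k(["a", "b", "c", "d"], ["b", "z"], k=3))  # 0.5
```

Benchmark-level numbers like those in the table are then averages of this per-user quantity over the test set.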
## 🚀 Quick Start

*Code release and detailed usage instructions are coming soon.*

Currently, you can load our models using `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Onerec/OneRec-Open-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example inference code will be updated here
```
## 📜 Citation

If you find our work helpful, please cite our technical report:

```bibtex
@article{openonerec2025,
  title={An Open Foundation Model and Benchmark to Accelerate Generative Recommendation},
  author={OneRec Team},
  journal={arXiv preprint},
  year={2025}
}
```
## 🛡️ License

The code in this repository is licensed under the Apache 2.0 License. The model weights are subject to their specific license agreements.
assets/framework.png ADDED

Git LFS Details

  • SHA256: 16163c1adbd5afd029125f116cc90f042a200f5301a674131b10e0603d97b393
  • Pointer size: 131 Bytes
  • Size of remote file: 207 kB