Update README.md
**OpenOneRec** is an open-source framework designed to bridge the gap between traditional recommendation systems and Large Language Models (LLMs). While Generative Recommendation has shown promise, existing models often struggle with isolated data silos and a lack of reasoning capabilities.

To address this, we introduce a unified framework that comprises:

* **RecIF-Bench**: The first holistic Recommendation Instruction-Following Benchmark, containing **120M interactions** from 200k users across heterogeneous domains (Short Video, Ad, Product).
* **OpenOneRec-Foundation Models**: A family of models (1.7B & 8B) built on the Qwen backbone. These models are trained on hundreds of billions of tokens, integrating collaborative signals with general semantics (see the loading sketch after this list).
* **Full-Stack Pipeline**: We open-source our comprehensive training pipeline, including data processing, co-pretraining, and post-training, to ensure full reproducibility and facilitate scaling law research in recommendation.
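Since the foundation models sit on a standard Qwen causal-LM backbone, they should be loadable through the Hugging Face `transformers` API. Below is a minimal sketch, assuming a hypothetical hub ID (`OpenOneRec/OpenOneRec-8B` is illustrative, not a confirmed checkpoint path) and an illustrative prompt format:

```python
# Minimal sketch: loading an OpenOneRec-Foundation model with transformers.
# The hub ID and prompt format below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenOneRec/OpenOneRec-8B"  # hypothetical hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Recommendation is posed as instruction-following text generation.
prompt = "User history: <item_123> <item_456>. Recommend the next short video."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```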
## 📊 RecIF-Bench

We propose **RecIF-Bench**, a comprehensive benchmark designed to rigorously evaluate recommendation foundation models. It organizes 8 distinct tasks into a four-layer capability hierarchy:

* **Layer 0: Semantic Alignment** (Item Understanding)
* **Layer 1: Fundamental Recommendation** (Short Video Rec, Ad Rec, Product Rec, Label Prediction)
* **Layer 2: Instruction Following** (Interactive Rec, Label-Conditional Rec)
* **Layer 3: Reasoning** (Recommendation Explanation)

The benchmark aggregates data from three domains: **Short Video** (Content), **Ad** (Commercial), and **Product** (E-commerce).
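To make the hierarchy concrete, here is the same structure expressed as a small Python registry; the identifiers are descriptive labels taken from this README, not names from the released codebase:

```python
# Illustrative registry of RecIF-Bench's four-layer hierarchy; key and task
# names are descriptive placeholders, not identifiers from the repository.
RECIF_BENCH = {
    "layer_0_semantic_alignment": ["item_understanding"],
    "layer_1_fundamental_rec": [
        "short_video_rec", "ad_rec", "product_rec", "label_prediction",
    ],
    "layer_2_instruction_following": ["interactive_rec", "label_conditional_rec"],
    "layer_3_reasoning": ["recommendation_explanation"],
}

DOMAINS = {"short_video": "Content", "ad": "Commercial", "product": "E-commerce"}

# Sanity check: the four layers cover all 8 benchmark tasks.
assert sum(len(tasks) for tasks in RECIF_BENCH.values()) == 8
```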
## 🤖 Model Zoo
### Results on RecIF-Bench

OpenOneRec-Foundation achieves **State-of-the-Art (SOTA)** results across RecIF-Bench tasks, significantly outperforming baselines like LC-Rec and TIGER.

| Task | Metric | TIGER | LC-Rec-8B | **OneRec-8B-Pro** |
| :--- | :--- | :--- | :--- | :--- |
| **Short Video Rec** | Recall@32 | 0.0132 | 0.0180 | **0.0369** |
| **Ad Rec** | Recall@32 | 0.0581 | 0.0723 | **0.0964** |
| **Product Rec** | Recall@32 | 0.0283 | 0.0416 | **0.0538** |
| **Label-Cond. Rec** | Recall@32 | 0.0123 | 0.0170 | **0.0235** |
| **Label Pred.** | AUC | 0.6675 | 0.6139 | **0.6912** |
| **Interactive Rec** | Recall@32 | -- | 0.2394 | **0.3458** |
| **Item Understand.** | LLM-Judge Score | -- | 0.2517 | **0.3209** |
| **Rec. Explanation** | LLM-Judge Score | -- | 3.9350 | **4.0381** |
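For reference, Recall@32 credits a prediction when a ground-truth item appears among the model's top-32 generated candidates. A minimal sketch under that common definition (the repository's exact evaluator may differ):

```python
# Hedged sketch of Recall@K for generative recommendation: the fraction of a
# user's ground-truth items found in the top-K candidates, averaged over users.
def recall_at_k(candidates: list[list[str]], targets: list[set[str]], k: int = 32) -> float:
    total = 0.0
    for cand, gold in zip(candidates, targets):
        topk = set(cand[:k])
        total += len(topk & gold) / max(len(gold), 1)
    return total / len(candidates)

# Toy usage: one user, one ground-truth item ranked 3rd among candidates.
print(recall_at_k([["a", "b", "c", "d"]], [{"c"}]))  # -> 1.0
```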
### Cross-Domain Transferability