Update README.md
@@ -40,6 +40,7 @@ To address this, we introduce a unified framework that comprises:
 * **[2026.1.1]** The technical report has been released.
 * **[2026.1.1]** **OneRec-Foundation** models (1.7B, 8B) are now available on Hugging Face!
 * **[2026.1.1]** **RecIF-Bench** dataset and evaluation scripts are open-sourced.
+* **[2026.1.5]** **OneRec-Tokenizer** is open-sourced to support SID generation for new domains.

 ## RecIF-Bench

@@ -158,6 +159,8 @@ text = tokenizer.apply_chat_template(
 model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

 # conduct text completion
+# Note: In our experience, default decoding settings may be unstable for small models.
+# For the 1.7B model, we suggest top_p=0.95, top_k=20, temperature=0.75 (temperature can be tuned within 0.6 to 0.8).
 generated_ids = model.generate(
     **model_inputs,
     max_new_tokens=32768