Update README.md

README.md CHANGED

@@ -16,7 +16,7 @@ pipeline_tag: text-generation
## 📢 Note: Coming Soon!

**ROME (ROME is Obviously an Agentic ModEl)** will be officially released soon.
-The project is currently under final review and preparation. Model weights will be made publicly available shortly.
+The project is currently under final review and preparation. Model weights will be made publicly available shortly. Stay tuned!

<img src="https://rlhf.oss-cn-hangzhou.aliyuncs.com/iFLOW-ROME/performance.png" width="600"/>

@@ -27,9 +27,9 @@ The project is currently under final review and preparation. Model weights will

## Highlights

-**ROME** is an open-source **agentic
+**ROME** is an open-source **agentic model** incubated within the **ALE (Agentic Learning Ecosystem)**.

-Rather than scaling performance purely by increasing parameter count, ROME achieves
+Rather than scaling performance purely by increasing parameter count, ROME achieves parameter-scale-crossing agentic performance through full-stack infrastructure and RL algorithmic optimization.

<img src="https://rlhf.oss-cn-hangzhou.aliyuncs.com/iFLOW-ROME/ALE.PNG" width="600"/>

@@ -77,7 +77,7 @@ Rather than scaling performance purely by increasing parameter count, ROME achie
| **Model** | **Terminal-Bench 2.0** | **SWE-bench Verified** |
| ---------------------------- | ---------------------- | ---------------------- |
| Qwen3-Coder-30B-A3B-Instruct | 13.48% | 46.33% |
-| **ROME
+| **ROME-30B-A3B** | **24.72%** | **57.40%** |
| GPT-OSS-120B | 21.12% | 43.93% |
| GLM-4.5 Air (106B) | 17.30% | 56.20% |

@@ -100,7 +100,7 @@ If you find our work useful, please consider citing:
```bibtex
@article{rome2025ale,
  title={Let It Flow: Agentic Crafting on Rock and Roll - Building the ROME Model within an Open Agentic Learning Ecosystem},
-  author={
+  author={Wang, Weixun and Xu, XiaoXiao and An, Wanhe and Dai, Fangwen and others},
  journal={arXiv preprint arXiv:2512.24873},
  year={2025}
}
```
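
Once the weights are live on the Hub, loading should follow the standard `transformers` flow. Below is a minimal sketch; the repo id is a placeholder inferred from the model name in the table above, and the chat template is assumed, neither is confirmed by this commit.

```python
# Minimal sketch of loading ROME once the weights are published.
# NOTE: "iflow-ai/ROME-30B-A3B" is a hypothetical repo id; check the
# org page for the real one after release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iflow-ai/ROME-30B-A3B"  # placeholder, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer ships a chat template (standard HF flow).
messages = [{"role": "user", "content": "Count files by extension in a directory."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```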