Update README.md
# CapRL

📄 <a href="https://arxiv.org/abs/2509.22647">Paper</a> | 🏠 <a href="https://github.com/InternLM/CapRL">Github</a> | 🤗 <a href="https://huggingface.co/collections/long-xing1/caprl-68d64ac32ded31596c36e189">CapRL Collection</a> | 🤗 <a href="https://huggingface.co/papers/2509.22647">Daily Paper</a>

### CapRL Series Model & Dataset
| Series | Models & Resources |
| :--- | :--- |
| **CapRL 2.0 Series** | [🤗 CapRL-Qwen3VL-2B](https://huggingface.co/internlm/CapRL-Qwen3VL-2B) \| [🤗 CapRL-Qwen3VL-4B](https://huggingface.co/internlm/CapRL-Qwen3VL-4B) |
| **CapRL 1.0 Series** | [🤗 CapRL-Qwen2.5VL-3B](https://huggingface.co/internlm/CapRL-3B) \| [🤗 CapRL-InternVL3.5-8B](https://huggingface.co/yuhangzang/CapRL-InternVL3.5-8B) \| [📚 CapRL-2M Dataset](https://huggingface.co/datasets/internlm/CapRL-2M) \| [📦 CapRL-3B-GGUF](https://huggingface.co/mradermacher/CapRL-3B-GGUF) \| [📦 CapRL-3B-i1-GGUF](https://huggingface.co/mradermacher/CapRL-3B-i1-GGUF) |

We are excited to release the **CapRL 2.0 series**: **CapRL-Qwen3VL-2B** and **CapRL-Qwen3VL-4B**. These models feature fewer parameters while delivering even more powerful captioning performance.
Notably, **CapRL-Qwen3VL-2B outperforms both CapRL-Qwen2.5VL-3B and Qwen2.5VL-72B in captioning tasks**.
This leap in efficiency is driven by our upgraded training recipe, which includes a more rigorous QA data filter and a significantly more diverse image dataset. We welcome everyone to try them out!
## CapRL-InternVL3.5-8B
When selecting between the available CapRL models, it's essential to consider the trade-off between performance and computational cost.
This guide will help you choose the most suitable model for your specific needs:
|Model|Parameters|Strength|
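As a rough illustration of the performance-versus-cost trade-off above, the choice can be sketched as a tiny helper that maps an available GPU memory budget to one of the checkpoints in this README. The repo ids are taken from this page; the VRAM thresholds are illustrative assumptions, not official requirements:

```python
# Sketch: pick a CapRL checkpoint by available GPU memory.
# Repo ids come from this README; the VRAM thresholds are rough
# assumptions (bf16 weights plus activation headroom), not official figures.

def pick_caprl_model(vram_gb: float) -> str:
    """Return a Hugging Face repo id expected to fit in `vram_gb` of VRAM."""
    if vram_gb >= 24:
        return "internlm/CapRL-InternVL3.5-8B"  # strongest captioner, largest footprint
    if vram_gb >= 12:
        return "internlm/CapRL-Qwen3VL-4B"      # mid-size CapRL 2.0 model
    return "internlm/CapRL-Qwen3VL-2B"          # smallest; still strong on captioning

print(pick_caprl_model(8))   # -> internlm/CapRL-Qwen3VL-2B
print(pick_caprl_model(40))  # -> internlm/CapRL-InternVL3.5-8B
```

The returned repo id can then be passed to your usual Hugging Face loading code.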

Now you can try out CapRL-3B with your own images🎨!

## 📢 News
We are working on even stronger base models and upgrading our training recipe; stay tuned!

- 🔥 [12/24/2025] We are excited to release the CapRL 2.0 series: **[CapRL-Qwen3VL-2B](https://huggingface.co/internlm/CapRL-Qwen3VL-2B)** and **[CapRL-Qwen3VL-4B](https://huggingface.co/internlm/CapRL-Qwen3VL-4B)**!
- 🔥 [12/24/2025] The total downloads of the CapRL-related [models and dataset](https://huggingface.co/collections/long-xing1/caprl-68d64ac32ded31596c36e189) reached 17,000!
- 🔥 [10/15/2025] The total downloads of the CapRL-related [models and dataset](https://huggingface.co/collections/long-xing1/caprl-68d64ac32ded31596c36e189) reached 6,000 within just 20 days!
- 🚀 [10/15/2025] We are excited to announce the release of **[CapRL-InternVL3.5-8B](https://huggingface.co/internlm/CapRL-InternVL3.5-8B)**, whose image captioning capability outperforms that of Qwen2.5-VL-72B!
- 🚀 [10/15/2025] Thanks to [mradermacher](https://huggingface.co/mradermacher) for the valuable contribution! [CapRL-3B-GGUF](https://huggingface.co/mradermacher/CapRL-3B-GGUF) is the static quants version, and [CapRL-3B-i1-GGUF](https://huggingface.co/mradermacher/CapRL-3B-i1-GGUF) is the weighted/imatrix quants version.