**<center><span style="font-size:2em;">TinyLLaVA-Video-R1</span></center>**
[Paper](https://arxiv.org/abs/2504.09641) | [GitHub](https://github.com/ZhangXJ199/TinyLLaVA-Video-R1)
Here we introduce TinyLLaVA-Video-R1, a small-scale video reasoning model built on the traceably trained [TinyLLaVA-Video](https://github.com/ZhangXJ199/TinyLLaVA-Video). After reinforcement learning on general Video-QA datasets, the model not only significantly improves its reasoning and thinking abilities, but also exhibits the emergent characteristic of “aha moments”.