---
language:
- en
---

This repository contains the **evaluation data** presented in: [OneThinker: All-in-one Reasoning Model for Image and Video](https://arxiv.org/abs/2512.03043)

Project Page: https://github.com/tulerfeng/OneThinker
Code: https://github.com/tulerfeng/OneThinker

OneThinker demonstrates **strong performance on 31 benchmarks across 10 fundamental vision tasks**, while showing cross-task knowledge transfer and promising zero-shot generalization toward a **unified multimodal reasoning generalist**.

All code, models, and data are fully released.

## Dataset

Our dataset covers both image and video modalities and spans a series of fundamental visual reasoning tasks, including rule-based QA, open-ended QA, captioning, spatial grounding, temporal grounding, spatio-temporal grounding, tracking, and segmentation.

<div align="center">
<img src="https://github.com/tulerfeng/OneThinker/blob/main/assets/dataset.png?raw=true" alt="OneThinker Dataset Overview" width="90%">
</div>

To enable effective SFT initialization for reasoning, we leverage a strong proprietary model, Seed1.5-VL, to produce chain-of-thought (CoT) annotations.

The `onethinker_rl_train.json` file is for RL training, while `onethinker_sft_image.json` and `onethinker_sft_video.json` are for the SFT cold start. The JSON files ending with `_unsampled` are the unsampled full sets.
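
As a quick sanity check, the snippet below shows one way to peek at these files. It assumes each file is a plain JSON array of example records; the actual schema may differ, so treat the field access as illustrative:

```python
import json

# Load the RL training annotations (assumed to be a JSON array of dicts).
with open("onethinker_rl_train.json") as f:
    rl_examples = json.load(f)

print(f"Loaded {len(rl_examples)} RL training examples")
print(rl_examples[0])  # inspect the fields of a single record
```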

## Sample Usage

For inference on a single example, you can run:

```bash
python ./Evaluation/inference_single/inference.py
```
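
If you want the data files locally first, a minimal sketch using `huggingface_hub` follows; the `repo_id` below is a hypothetical placeholder, so substitute the actual ID of this dataset repository:

```python
from huggingface_hub import snapshot_download

# Download all files of the dataset repo to a local directory.
# NOTE: "<org>/<dataset-name>" is a placeholder for this repo's actual ID.
local_dir = snapshot_download(repo_id="<org>/<dataset-name>", repo_type="dataset")
print(f"Dataset files downloaded to: {local_dir}")
```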