KaituoFeng and nielsr (HF Staff) committed
Commit f7c097e · verified · 1 Parent(s): 23f84d9

Enhance dataset card: Add metadata, detailed description, and sample usage (#1)

- Enhance dataset card: Add metadata, detailed description, and sample usage (e0bfa9b3149907feb1219611e579491b22a9ecaf)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +46 -1
README.md CHANGED
@@ -1,3 +1,48 @@
  This repository contains the evaluation data presented in: [OneThinker: All-in-one Reasoning Model for Image and Video](https://arxiv.org/abs/2512.03043)
 
- Code: https://github.com/tulerfeng/OneThinker
+ ---
+ task_categories:
+ - image-text-to-text
+ - video-text-to-text
+ - object-detection
+ - image-segmentation
+ language:
+ - en
+ ---
+
  This repository contains the evaluation data presented in: [OneThinker: All-in-one Reasoning Model for Image and Video](https://arxiv.org/abs/2512.03043)

+ Project Page: https://github.com/tulerfeng/OneThinker
+ Code: https://github.com/tulerfeng/OneThinker
+
+ ## About OneThinker
+
+ <div align="center">
+ <img src="https://github.com/tulerfeng/OneThinker/blob/main/assets/teaser.png?raw=true" alt="OneThinker Teaser" width="95%">
+ </div>
+
+ We introduce **OneThinker**, an all-in-one multimodal reasoning generalist that is **capable of thinking across a wide range of fundamental visual tasks within a single model**.
+
+ We construct the large-scale **OneThinker-600k** multi-task training corpus and build **OneThinker-SFT-340k** with high-quality CoT annotations for cold-start SFT. Moreover, we propose **EMA-GRPO**, a new RL method that **balances heterogeneous reward signals across diverse visual tasks** by simply tracking task-wise moving averages of the reward standard deviation.
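+
+ As a rough illustration of the idea, the sketch below tracks an exponential moving average (EMA) of the per-task reward standard deviation and uses it to scale group-centered rewards. This is a hypothetical sketch of the mechanism described above, not the released EMA-GRPO implementation: the class name, decay value, and exact update rule are assumptions, and the actual code in the repository may differ.
+
+ ```python
+ import numpy as np
+
+ class EMARewardStd:
+     """Hypothetical sketch: task-wise EMA of reward std for EMA-GRPO-style balancing."""
+
+     def __init__(self, decay: float = 0.99):
+         self.decay = decay   # assumed EMA decay, not taken from the paper
+         self.ema_std = {}    # task name -> EMA of the reward std
+
+     def update(self, task: str, rewards: np.ndarray) -> None:
+         # Std of the rewards within one sampled group for this task.
+         std = float(rewards.std())
+         prev = self.ema_std.get(task, std)
+         self.ema_std[task] = self.decay * prev + (1.0 - self.decay) * std
+
+     def normalize(self, task: str, rewards: np.ndarray) -> np.ndarray:
+         # Center within the group (as in GRPO), but scale by the task-wise
+         # EMA std so tasks with different reward scales stay balanced.
+         scale = self.ema_std.get(task, float(rewards.std()))
+         return (rewards - rewards.mean()) / (scale + 1e-6)
+ ```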
+
+ OneThinker demonstrates **strong performance on 31 benchmarks across 10 fundamental vision tasks**, while showing cross-task knowledge transfer and promising zero-shot generalization toward a **unified multimodal reasoning generalist**.
+
+ All code, models, and data are fully released.
+
+ ## Dataset
+
+ Our dataset covers both image and video modalities and spans a series of fundamental visual reasoning tasks, including rule-based QA, open-ended QA, captioning, spatial grounding, temporal grounding, spatio-temporal grounding, tracking, and segmentation.
+
+ <div align="center">
+ <img src="https://github.com/tulerfeng/OneThinker/blob/main/assets/dataset.png?raw=true" alt="OneThinker Dataset Overview" width="90%">
+ </div>
+
+ To enable effective SFT initialization for reasoning, we leverage a strong proprietary model, Seed1.5-VL, to produce CoT annotations.
+
+ The `onethinker_rl_train.json` file is for RL training, while `onethinker_sft_image.json` and `onethinker_sft_video.json` are for the SFT cold start. The JSON files ending with `_unsampled` are the unsampled full sets. A minimal loading sketch is shown below.
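+
+ The snippet below is a minimal loading sketch. It assumes each file is a top-level JSON list of records and that the files sit in the current directory; the actual schema and layout may differ, so adjust paths and field access accordingly.
+
+ ```python
+ import json
+
+ # Load the RL training split (assumed to be a top-level JSON list).
+ with open("onethinker_rl_train.json", "r", encoding="utf-8") as f:
+     rl_data = json.load(f)
+
+ # Load the image SFT cold-start split.
+ with open("onethinker_sft_image.json", "r", encoding="utf-8") as f:
+     sft_image = json.load(f)
+
+ print(len(rl_data), "RL training samples")
+ print(sft_image[0])  # inspect one SFT record to see the schema
+ ```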
+
+ ## Sample Usage
+
+ For inference on a single example, you may refer to:
+
+ ```bash
+ python ./Evaluation/inference_single/inference.py
+ ```
+ ```
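+
+ To fetch the evaluation data locally before running the script above, one option is `huggingface_hub.snapshot_download`; the repo id below is a placeholder, so substitute this dataset's actual id on the Hub.
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Placeholder repo id: replace with this dataset's actual Hub id.
+ local_dir = snapshot_download(repo_id="<org>/<dataset-name>", repo_type="dataset")
+ print("Dataset downloaded to:", local_dir)
+ ```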