Update dataset card: Add task categories

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +11 -27
README.md CHANGED
@@ -1,12 +1,11 @@
 ---
-license: apache-2.0
 language:
 - en
 - zh
-base_model:
-- Qwen/Qwen2.5-VL-3B-Instruct
-- Qwen/Qwen2.5-VL-7B-Instruct
-pipeline_tag: image-text-to-text
+license: apache-2.0
+task_categories:
+- object-detection
+- image-segmentation
 ---
 
 <div align='center'><h1>Patch-as-Decodable-Token: Towards Unified Multi-Modal Vision Tasks in MLLMs</h1></div>
@@ -30,25 +29,6 @@ By introducing VRTs, we achieve **semantic reasoning and object-specific visual
 
 As illustrated in Figure C, we have validated PaDT across four major visual perception and understanding tasks. In all cases, PaDT achieves **state-of-the-art** performance compared to conventional character-by-character coordinate-generation MLLMs.
 
-We hope this work will inspire further exploration in the community:
-
-- What does true multimodal reasoning look like?
-
-- How can textual and visual elements be generated together in an MLLM output sequence?
-
-- And is a purely text-based output ever sufficient for visual reasoning?
-
-<div align="center">
-<img src="./assets/Motivation.webp" width="900"/>
-<p>Figure B. Some observations on conventional character-by-character coordinate-generation MLLMs and our PaDT.</p>
-</div>
-
-
-<div align="center">
-<img src="./assets/TaskIntroduction.webp" width="900"/>
-<p>Figure C. PaDT works on four visual perception and understanding tasks.</p>
-</div>
-
 ## Quick Start
 
 Clone this repo, and set up the environment with a few commands.
@@ -122,7 +102,8 @@ with torch.inference_mode():
 # extract Visual Reference Tokens within the sequence
 completions, feats, labels, vrts, vrts_feats = parseVRTintoCompletion(processor, completion_ids, generate_returned_result['hidden_states'], torch.Tensor([False]))
 
-print("\ngenerate result:", completions[0])
+print("
+generate result:", completions[0])
 
 # decode low-level visual task results
 low_res_image_embeds = generate_returned_result.past_image_embeds
@@ -130,7 +111,10 @@ with torch.inference_mode():
 visual_pe = generate_returned_result.past_visual_pe
 decoded_list = model.vl_decode(feats, low_res_image_embeds, high_res_image_embeds, prompt_inputs['image_grid_thw'], visual_pe)
 
-print(f"\npred_bboxes: {decoded_list['pred_boxes']},\npred_scores: {decoded_list['pred_score'].sigmoid()}\n")
+print(f"
+pred_bboxes: {decoded_list['pred_boxes']},
+pred_scores: {decoded_list['pred_score'].sigmoid()}
+")
 ```
 
 ## Models
@@ -192,4 +176,4 @@ We kindly encourage citation of our work if you find it useful.
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2510.01954},
 }
-```
+```