nielsr HF Staff committed on
Commit 360208a · verified · 1 Parent(s): 0195a7a

Improve dataset card: Add task categories and tags, update sample usage, refine metadata


This PR improves the dataset card for the PaDT project by:
- Removing `base_model` and `pipeline_tag` from the metadata, as these are typically used for model cards or Spaces, not datasets.
- Adding `task_categories`: `object-detection`, `image-segmentation`, `image-to-text` to accurately reflect the domains the dataset supports, enhancing discoverability.
- Adding comprehensive `tags` such as `mllm`, `multimodal`, `vision-language-model`, `visual-grounding`, `referring-expression-comprehension`, `referring-image-captioning`, and `computer-vision` for improved searchability.
- Adding the official Hugging Face paper link to the paper reference section.
- Updating the `PROMPT` string in the "Quick Start" sample usage section with a more specific example from the GitHub README, better illustrating the dataset's use in concrete tasks.

These changes make the dataset card more informative, accurate, and easier to navigate on the Hugging Face Hub.
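For reference, the dataset card's front matter after this PR reads as follows (assembled from the metadata changes in the diff):

```yaml
---
language:
- en
- zh
license: apache-2.0
task_categories:
- object-detection
- image-segmentation
- image-to-text
tags:
- mllm
- multimodal
- vision-language-model
- visual-grounding
- referring-expression-comprehension
- referring-image-captioning
- computer-vision
---
```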

Files changed (1)
README.md (+80 -10)
README.md CHANGED
@@ -1,19 +1,27 @@
 ---
- license: apache-2.0
 language:
 - en
 - zh
- base_model:
- - Qwen/Qwen2.5-VL-3B-Instruct
- - Qwen/Qwen2.5-VL-7B-Instruct
- pipeline_tag: image-text-to-text
 ---
 
 <div align='center'><h1>Patch-as-Decodable-Token: Towards Unified Multi-Modal Vision Tasks in MLLMs</h1></div>
 
 <font size=4><div align='center'>[[🔗 Released Code](https://github.com/Gorilla-Lab-SCUT/PaDT)]
 [[🤗 Datasets](https://huggingface.co/collections/PaDT-MLLM/padt-dataset-68e400440ffb8c8f95e5ee20)] [[🤗 Checkpoints](https://huggingface.co/collections/PaDT-MLLM/padt-68e3f5c22e8ecbd6d0d13d43)]</div></font>
- <font size=4><div align='center'>[[📄 Tech Report](https://arxiv.org/abs/2510.01954)]</div></font>
 
 <div align="center">
 <img src="./assets/Pipeline.webp" width="900"/>
@@ -84,7 +92,7 @@ processor = VisonTextProcessingClass(processor, model.config.vision_config.spati
 processor.prepare(model.model.embed_tokens.weight.shape[0])
 
 # question prompt
- PROMPT = "Please describe this image."
 
 # construct conversation
 message = [
@@ -122,7 +130,8 @@ with torch.inference_mode():
 # extract Visual Reference Tokens within the sequence
 completions, feats, labels, vrts, vrts_feats = parseVRTintoCompletion(processor, completion_ids, generate_returned_result['hidden_states'], torch.Tensor([False]))
 
- print("\ngenerate result:", completions[0])
 
 # decode low-level visual task results
 low_res_image_embeds = generate_returned_result.past_image_embeds
@@ -130,7 +139,10 @@ with torch.inference_mode():
 visual_pe = generate_returned_result.past_visual_pe
 decoded_list = model.vl_decode(feats, low_res_image_embeds, high_res_image_embeds, prompt_inputs['image_grid_thw'], visual_pe)
 
- print(f"\npred_bboxes: {decoded_list['pred_boxes']},\npred_scores: {decoded_list['pred_score'].sigmoid()}\n")
 ```
 
 ## Models
@@ -174,6 +186,64 @@ Here are some randomly selected test examples showcasing PaDT's excellent performance
 <img src="./assets/TAM.webp" width="900"/>
 </div>
 
 ## License Agreement
 
 PaDT is licensed under Apache 2.0.
@@ -192,4 +262,4 @@ We kindly encourage citation of our work if you find it useful.
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2510.01954},
 }
- ```
 
 ---
 language:
 - en
 - zh
+ license: apache-2.0
+ task_categories:
+ - object-detection
+ - image-segmentation
+ - image-to-text
+ tags:
+ - mllm
+ - multimodal
+ - vision-language-model
+ - visual-grounding
+ - referring-expression-comprehension
+ - referring-image-captioning
+ - computer-vision
 ---
 
 <div align='center'><h1>Patch-as-Decodable-Token: Towards Unified Multi-Modal Vision Tasks in MLLMs</h1></div>
 
 <font size=4><div align='center'>[[🔗 Released Code](https://github.com/Gorilla-Lab-SCUT/PaDT)]
 [[🤗 Datasets](https://huggingface.co/collections/PaDT-MLLM/padt-dataset-68e400440ffb8c8f95e5ee20)] [[🤗 Checkpoints](https://huggingface.co/collections/PaDT-MLLM/padt-68e3f5c22e8ecbd6d0d13d43)]</div></font>
+ <font size=4><div align='center'>[[📄 Tech Report](https://arxiv.org/abs/2510.01954)] [[🤗 Paper](https://huggingface.co/papers/2510.01954)]</div></font>
 
 <div align="center">
 <img src="./assets/Pipeline.webp" width="900"/>
 
 processor.prepare(model.model.embed_tokens.weight.shape[0])
 
 # question prompt
+ PROMPT = """Please carefully check the image and detect the object this sentence describes: "The car is on the left side of the horse"."""
 
 # construct conversation
 message = [
 
 # extract Visual Reference Tokens within the sequence
 completions, feats, labels, vrts, vrts_feats = parseVRTintoCompletion(processor, completion_ids, generate_returned_result['hidden_states'], torch.Tensor([False]))
 
+ print("\ngenerate result:", completions[0])
 
 # decode low-level visual task results
 low_res_image_embeds = generate_returned_result.past_image_embeds
 
 visual_pe = generate_returned_result.past_visual_pe
 decoded_list = model.vl_decode(feats, low_res_image_embeds, high_res_image_embeds, prompt_inputs['image_grid_thw'], visual_pe)
 
+ print(f"\npred_bboxes: {decoded_list['pred_boxes']},\npred_scores: {decoded_list['pred_score'].sigmoid()}\n")
 ```
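The quick-start code pairs candidate boxes (`pred_boxes`) with raw score logits (`pred_score`) that are squashed through a sigmoid before printing. As a rough, self-contained illustration of the kind of post-processing a reader might apply next, here is a sketch of thresholding boxes by their sigmoid scores; the box coordinates and logits are invented placeholder values, and plain Python stands in for torch:

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function, matching what torch's .sigmoid() does to a raw logit."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical decoder outputs: boxes as (x1, y1, x2, y2), one score logit each.
pred_boxes = [(0.10, 0.20, 0.45, 0.60), (0.50, 0.55, 0.90, 0.95)]
pred_logits = [2.0, -1.0]

# Keep only the boxes whose confidence clears the threshold.
threshold = 0.5
kept = [box for box, logit in zip(pred_boxes, pred_logits)
        if sigmoid(logit) > threshold]
print(kept)  # sigmoid(2.0) ~ 0.88 passes; sigmoid(-1.0) ~ 0.27 is dropped
```

The same filtering works unchanged on real outputs by replacing the placeholder lists with `decoded_list['pred_boxes']` and `decoded_list['pred_score']`.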
 
 ## Models
 
 <img src="./assets/TAM.webp" width="900"/>
 </div>
 
+ ## Training Instruction
+
+ Download the datasets:
+
+ - [COCO](https://cocodataset.org/#home)
+
+ - RefCOCO/+/g
+ ```bash
+ wget https://web.archive.org/web/20220413011718/https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip
+ wget https://web.archive.org/web/20220413011656/https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip
+ wget https://web.archive.org/web/20220413012904/https://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip
+ ```
+
+ Unpack these datasets and place them under the following directory layout:
+
+ ```
+ PaDT/
+ ├── dataset/
+ │   ├── coco/
+ │   │   ├── annotations/
+ │   │   ├── train2014/
+ │   │   ├── train2017/
+ │   │   ├── val2014/
+ │   │   └── val2017/
+ │   └── RefCOCO/
+ │       ├── refcoco/
+ │       ├── refcoco+/
+ │       └── refcocog/
+ ```
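Before preprocessing, it can be worth confirming the unpacked data matches the tree above. The snippet below is a hypothetical convenience check, not part of the PaDT repo; the directory names are taken from the layout shown:

```python
from pathlib import Path

# Sub-directories expected under the project root, per the layout above.
EXPECTED = [
    "dataset/coco/annotations",
    "dataset/coco/train2014",
    "dataset/coco/train2017",
    "dataset/coco/val2014",
    "dataset/coco/val2017",
    "dataset/RefCOCO/refcoco",
    "dataset/RefCOCO/refcoco+",
    "dataset/RefCOCO/refcocog",
]

def missing_dirs(root: str = "PaDT") -> list[str]:
    """Return the expected sub-directories that do not exist under `root`."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).is_dir()]

print(missing_dirs())  # an empty list means the layout is complete
```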
+
+ Preprocess the datasets:
+
+ 1. Preprocess them with our scripts (first update the dataset path configuration in the preprocessing scripts):
+ ```bash
+ cd src/preprocess
+ python process_coco.py
+ python process_refcoco.py
+ ```
+ 2. Alternatively, use the preprocessed datasets we released on Hugging Face, which are ready for training:
+
+ | Dataset | Dataset Path | Task Type |
+ | --- | --- | --- |
+ | COCO | [PaDT-MLLM/COCO](https://huggingface.co/datasets/PaDT-MLLM/COCO) | Open Vocabulary Detection |
+ | RefCOCO | [PaDT-MLLM/RefCOCO](https://huggingface.co/datasets/PaDT-MLLM/RefCOCO) | Referring Expression Comprehension/Segmentation |
+ | RIC | [PaDT-MLLM/ReferringImageCaptioning](https://huggingface.co/datasets/PaDT-MLLM/ReferringImageCaptioning) | Referring Image Captioning |
+
+ The training scripts in `run_scripts` are ready to execute.
+
+ For example, to train the PaDT-Pro 3B model on a single node with 8×96 GB GPUs:
+
+ ```bash
+ bash ./run_scripts/padt_pro_3b_sft.sh
+ ```
+
+ ## Evaluation
+
+ We provide a simple inference example in `eval/test_demo.py`. More evaluation scripts will be added soon.
+
 ## License Agreement
 
 PaDT is licensed under Apache 2.0.
 
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2510.01954},
 }
+ ```