Add dataset card for Visual Jigsaw Training Data

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +49 -0
README.md ADDED
@@ -0,0 +1,49 @@
---
license: apache-2.0
task_categories:
- image-text-to-text
- video-text-to-text
language:
- en
tags:
- multimodal
- jigsaw
- self-supervised
- mllm
- 3d-vision
- reinforcement-learning
---
# Visual Jigsaw Training Data

[Paper](https://huggingface.co/papers/2509.25190) | [Project Page](https://penghao-wu.github.io/visual_jigsaw/) | [Code](https://github.com/penghao-wu/visual_jigsaw)

## Introduction

This repository provides the training data for **Visual Jigsaw**, a generic self-supervised post-training framework designed to strengthen visual understanding in Multimodal Large Language Models (MLLMs). Visual Jigsaw is formulated as a general ordering task: visual inputs are partitioned and shuffled, and the model must reconstruct the visual information by producing the correct permutation in natural language. This formulation aligns naturally with reinforcement learning from verifiable rewards (RLVR), since the supervisory signal is derived automatically, without any annotations.

This dataset instantiates Visual Jigsaw across three visual modalities, improving fine-grained perception, temporal reasoning, and 3D spatial understanding in MLLMs.
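
The ordering task described above can be sketched for the image modality: partition an image into a grid of tiles, shuffle them, and keep the restoring permutation as the verifiable answer. This is only an illustrative sketch, not the repository's actual pipeline; `make_image_jigsaw`, the grid size, and the answer encoding are assumptions:

```python
import numpy as np

def make_image_jigsaw(image: np.ndarray, grid: int = 2, seed: int = 0):
    """Partition an HxWxC image into grid*grid tiles, shuffle them, and
    return the shuffled tiles plus the permutation that restores the
    original order (the verifiable answer for RLVR).

    Illustrative sketch only; the real pipeline may differ."""
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid
    # Row-major tiles in their original reading order.
    tiles = [
        image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
        for r in range(grid) for c in range(grid)
    ]
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(tiles))  # shuffled position k -> original tile index
    shuffled = [tiles[i] for i in order]
    # answer[k] = original index of the tile now shown at position k.
    answer = [int(i) for i in order]
    return shuffled, answer

# Toy example: a 4x4 single-channel "image" split into four 2x2 tiles.
img = np.arange(16).reshape(4, 4)[..., None]
shuffled, answer = make_image_jigsaw(img, grid=2, seed=0)
```

Placing each shuffled tile back at its `answer[k]` position restores the original image, which is exactly what makes the supervision signal automatic.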
25
+
26
+ ## Dataset Details
27
+ The Visual Jigsaw training data is composed of visual inputs from established datasets, processed for the jigsaw task across three modalities. For training, users will need to download the source data from the respective original datasets.
28
+
29
+ The data is sourced from:
30
+ * **Image Jigsaw Task**: Uses images from the [COCO](https://cocodataset.org/#download) 2017 training split.
31
+ * **Video Jigsaw Task**: Uses videos from [LLaVa-Video](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K).
32
+ * **3D Jigsaw Task**: Uses RGB images from [ScanNet](https://github.com/ScanNet/ScanNet).
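
Because the ground-truth permutation is known for every sample, the RLVR reward is directly verifiable: extract the predicted order from the model's text answer and check it against the ground truth. A hedged sketch of such a check (exact-match scoring is an illustrative choice; `jigsaw_reward` and the output format are assumptions, and the actual method may score partial correctness differently):

```python
import re

def jigsaw_reward(response: str, answer: list) -> float:
    """Verifiable reward for the jigsaw ordering task: pull the predicted
    permutation out of the model's free-form text and compare it with the
    ground-truth order. Returns 1.0 only for a valid, exact match.

    Illustrative exact-match variant; not necessarily the paper's scheme."""
    nums = [int(n) for n in re.findall(r"\d+", response)]
    if sorted(nums) != sorted(answer):  # must be a valid permutation of the tiles
        return 0.0
    return 1.0 if nums == answer else 0.0

print(jigsaw_reward("The correct order is 2, 0, 3, 1.", [2, 0, 3, 1]))  # 1.0
```

Since the checker needs only the sample's own permutation, no human labels are required at any point.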

## Training

The training scripts for Visual Jigsaw are provided in the `train_scripts/` directory of the associated [GitHub repository](https://github.com/penghao-wu/visual_jigsaw). Please refer to the repository for detailed instructions on preparing the data and running the training.

## License

This project and its associated data are released under the Apache-2.0 license. See the [LICENSE](https://github.com/penghao-wu/visual_jigsaw/blob/main/LICENSE) file in the project's GitHub repository for full details.

## Citation

If you find this dataset or the Visual Jigsaw project helpful for your research, please consider citing the original paper:

```bibtex
@article{visual_jigsaw,
  author  = {Wu, Penghao and Zhang, Yushan and Diao, Haiwen and Li, Bo and Lu, Lewei and Liu, Ziwei},
  title   = {Visual Jigsaw Post-Training Improves MLLMs},
  journal = {arXiv preprint arXiv:2509.25190},
  year    = {2025}
}
```