---
license: apache-2.0
task_categories:
- image-text-to-text
- video-text-to-text
language:
- en
tags:
- multimodal
- jigsaw
- self-supervised
- mllm
- 3d-vision
- reinforcement-learning
---
# Visual Jigsaw Training Data
Paper | Project Page | Code
## Introduction
This repository provides the training data for Visual Jigsaw, a generic self-supervised post-training framework designed to strengthen visual understanding in Multimodal Large Language Models (MLLMs). Visual Jigsaw is formulated as a general ordering task: visual inputs are partitioned, shuffled, and the model must reconstruct the visual information by producing the correct permutation in natural language. This approach naturally aligns with reinforcement learning from verifiable rewards (RLVR) and derives its supervisory signal automatically without any annotations.
This dataset facilitates the instantiation of Visual Jigsaw across various visual modalities, improving fine-grained perception, temporal reasoning, and 3D spatial understanding in MLLMs.
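Because each sample is built by shuffling the pieces itself, the correct permutation is known by construction, which is what makes the reward verifiable. As an illustration only (not the repository's actual code), a minimal sketch of building one ordering sample and scoring a prediction might look like:

```python
import random

def make_jigsaw_sample(pieces, seed=0):
    """Shuffle visual pieces (image patches, video clips, etc.) and record
    the permutation that restores the original order as the ground truth."""
    rng = random.Random(seed)
    order = list(range(len(pieces)))   # order[j] = original index shown at slot j
    rng.shuffle(order)
    shuffled = [pieces[i] for i in order]
    # Ground-truth answer: for each original position i, the shuffled
    # slot that holds it (the inverse permutation of `order`).
    answer = [order.index(i) for i in range(len(pieces))]
    return shuffled, answer

def permutation_reward(predicted, answer):
    """Verifiable reward: 1.0 for an exact permutation match, else 0.0."""
    return 1.0 if predicted == answer else 0.0
```

Since the reward reduces to an exact-match check on the predicted permutation, the supervisory signal comes for free from the shuffling step, with no human annotation.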
## Dataset Details
The Visual Jigsaw training data is composed of visual inputs from established datasets, processed for the jigsaw task across three modalities. For training, users will need to download the source data from the respective original datasets.
The data is sourced from:
- Image Jigsaw Task: Uses images from the COCO 2017 training split.
- Video Jigsaw Task: Uses videos from LLaVA-Video.
- 3D Jigsaw Task: Uses RGB images from ScanNet.
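For the image modality, each source image must first be cut into pieces before shuffling. A hypothetical helper (the grid size and array layout are assumptions for illustration, not the repository's actual preprocessing) could partition an image array into a row-major grid of patches:

```python
import numpy as np

def partition_image(img, grid=2):
    """Split an H x W x C image array into grid*grid equal patches,
    returned in row-major order. Assumes H and W divide evenly by `grid`."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    return [img[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(grid) for c in range(grid)]
```

The resulting patch list can then be shuffled and paired with its restoring permutation to form one image-jigsaw training sample.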
## Training
The training scripts for Visual Jigsaw are provided in the `train_scripts/` directory of the associated GitHub repository. Please refer to the repository for detailed instructions on preparing the data and running the training.
## License
This project and its associated data are released under the Apache-2.0 license. See the LICENSE file in the project's GitHub repository for full details.
## Citation
If you find this dataset or the Visual Jigsaw project helpful for your research, please consider citing the original paper:
```bibtex
@article{visual_jigsaw,
  author  = {Wu, Penghao and Zhang, Yushan and Diao, Haiwen and Li, Bo and Lu, Lewei and Liu, Ziwei},
  title   = {Visual Jigsaw Post-Training Improves MLLMs},
  journal = {arXiv preprint arXiv:2509.25190},
  year    = {2025}
}
```