---
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: PFPdatasets
size_categories:
- 100K<n<1M
license: apache-2.0
---

<p align="center">
  <h1 align="center"><strong>Paper Folding Puzzles: A Benchmark for Evaluating Spatial Reasoning in Multimodal Large Language Models</strong></h1>
</p>

<p align="center">
  <a href=""><b>Homepage</b></a> &nbsp;|&nbsp;
  <a href="https://github.com/hznuer/PFP_bench"><b>GitHub</b></a> &nbsp;|&nbsp;
  <a href="https://huggingface.co/datasets/hznuer/PFP_datasets"><b>Hugging Face</b></a>
</p>

# Introduction

Recent advances in multimodal large language models (MLLMs) have brought remarkable progress on a wide range of reasoning tasks. Spatial reasoning, however, and paper folding scenarios in particular, remains a significant challenge due to limitations in understanding geometric transformations and spatial relationships. To address this gap, we present Paper Folding Puzzles (PFP), a comprehensive benchmark designed to evaluate and enhance the spatial reasoning capabilities of MLLMs. The benchmark systematically covers five distinct task types, from basic single-step transformations to complex 3D spatial visualization, providing a rigorous framework for assessing spatial intelligence in AI systems.

# Highlights

- **We introduce Paper Folding Puzzles (PFP), a multi-dimensional benchmark for spatial reasoning.** It systematically covers five key task types (Single-Step, Inverse, Multi-Step, 3D-Folding, and 2D-Unfolding), addressing different aspects of spatial intelligence.

- **Comprehensive scale with 153,000 carefully curated samples.** The dataset includes 150,000 training samples and 3,000 test samples, ensuring robust evaluation across all task categories.

- **Structured difficulty levels within complex tasks.** The 3D-Folding and 2D-Unfolding categories include easy and hard sub-levels, enabling granular assessment of model capabilities.

- **Standardized format for easy integration.** The dataset is distributed as parquet files with a consistent JSON structure, facilitating seamless integration with existing MLLM frameworks.

### Dataset Structure

The structure of Paper Folding Puzzles is as follows:

```
PFP_dataset/
├── train/
│   ├── Single-Step.parquet
│   ├── Inverse.parquet
│   ├── Multi-Step.parquet
│   ├── 3D-Folding/
│   │   ├── _2DTo3D_N.parquet
│   │   └── _2DTo3D_Y.parquet
│   └── 2D-Unfolding/
│       ├── _3DTo2D_N.parquet
│       └── _3DTo2D_Y.parquet
└── test/
    ├── Single-Step.parquet
    ├── Inverse.parquet
    ├── Multi-Step.parquet
    ├── 3D-Folding.parquet
    └── 2D-Unfolding.parquet
```

### Data Instances

For each instance in the dataset, the following fields are provided:

```json
{
    "image": "circle_001.png",
    "answer": "D"
}
```

### Data Fields

- `image`: a string containing the relative path to the paper folding puzzle image (e.g., `"circle_001.png"`)
- `answer`: a string indicating the correct answer option (`"A"`, `"B"`, `"C"`, or `"D"`)
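
Given only these two fields, a lightweight sanity check is easy to write. A sketch (the helper name `validate_sample` is our own, not part of any released tooling):

```python
VALID_ANSWERS = {"A", "B", "C", "D"}

def validate_sample(sample: dict) -> bool:
    # A well-formed sample pairs an image path with one of the four options.
    image = sample.get("image")
    return (
        isinstance(image, str)
        and image.endswith(".png")
        and sample.get("answer") in VALID_ANSWERS
    )

print(validate_sample({"image": "circle_001.png", "answer": "D"}))  # True
print(validate_sample({"image": "circle_001.png", "answer": "E"}))  # False
```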

# Quick Start

## Loading the Dataset

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("hznuer/PFP_datasets")

# Or load specific splits
train_dataset = load_dataset("hznuer/PFP_datasets", split="train")
test_dataset = load_dataset("hznuer/PFP_datasets", split="test")

# Load a specific task type
single_step_data = load_dataset("hznuer/PFP_datasets", "Single-Step")
```

## Basic Usage Example

```python
from datasets import load_dataset

# Iterate over the training split
dataset = load_dataset("hznuer/PFP_datasets", split="train")

for sample in dataset:
    image_path = sample["image"]
    correct_answer = sample["answer"]
    # Process your paper folding puzzle here
```
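
Once a model has produced one option letter per sample, scoring reduces to comparing predictions against the `answer` field. A minimal sketch (the `predictions` argument stands in for hypothetical model output):

```python
def accuracy(predictions, samples):
    # Fraction of samples whose predicted option matches the ground-truth answer.
    if not samples:
        return 0.0
    correct = sum(
        pred == sample["answer"]
        for pred, sample in zip(predictions, samples)
    )
    return correct / len(samples)

samples = [
    {"image": "circle_001.png", "answer": "D"},
    {"image": "circle_002.png", "answer": "A"},
]
print(accuracy(["D", "B"], samples))  # 0.5
```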

# Citation

If you find Paper Folding Puzzles helpful, please consider giving this repo a :star: and citing:

```latex
@inproceedings{zhou2026paperfolding,
  title={Paper Folding Puzzles: A Benchmark for Evaluating Spatial Reasoning in Multimodal Large Language Models},
  author={Zhou, Dibin and Xu, Yantao and Huang, Zongming and Yan, Zengwei and Liu, Wenhao and Miao, Yongwei and Ren, Jianfeng and Liu, Fuchang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```

# Authors

**Dibin Zhou**, **Yantao Xu**, **Zongming Huang**, **Zengwei Yan**, **Wenhao Liu**, **Yongwei Miao**, **Jianfeng Ren**, **Fuchang Liu**

**Affiliation**: School of Information Science and Technology, Hangzhou Normal University & The Digital Port Technologies Lab, School of Computer Science, University of Nottingham Ningbo China

# Contact

For questions or issues regarding this dataset:
- Open an issue on the [GitHub repository](https://github.com/hznuer/PFP_bench)
- Contact the authors through the paper correspondence

---

**Paper Folding Puzzles: advancing spatial reasoning evaluation for multimodal AI systems**