---
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: PFPdatasets
size_categories:
- 100K<n<1M
license: apache-2.0
---
<p align="center">
<h1 align="center"><strong>Paper Folding Puzzles: A Benchmark for Evaluating Spatial Reasoning in Multimodal Large Language Models</strong></h1>
</p>
<p align="center">
🌐 <a href=""><b>Homepage</b></a>&nbsp;&nbsp; | &nbsp;&nbsp;
💻 <a href="https://github.com/hznuer/PFP_bench"><b>GitHub</b></a>&nbsp;&nbsp; | &nbsp;&nbsp;
🤗 <a href="https://huggingface.co/datasets/hznuer/PFP_datasets"><b>Hugging Face</b></a>&nbsp;&nbsp;
</p>
# 👋 Introduction
Recent advancements in multimodal large language models (MLLMs) have shown remarkable progress in various reasoning tasks. However, spatial reasoning, particularly in paper folding scenarios, remains a significant challenge due to limitations in understanding geometric transformations and spatial relationships. To address this gap, we present Paper Folding Puzzles (PFP), a comprehensive benchmark designed to evaluate and enhance spatial reasoning capabilities in MLLMs. Our benchmark systematically covers five distinct task types, from basic single-step transformations to complex 3D spatial visualization, providing a rigorous framework for assessing spatial intelligence in AI systems.
# 📌 Highlights
- **We introduce Paper Folding Puzzles (PFP), a multi-dimensional benchmark for spatial reasoning.** It systematically covers five key task types (Single-Step, Inverse, Multi-Step, 3D-Folding, and 2D-Unfolding), addressing different aspects of spatial intelligence.
- **Comprehensive scale with 153,000 carefully curated samples.** The dataset includes 150,000 training samples and 3,000 test samples, ensuring robust evaluation across all task categories.
- **Structured difficulty levels within complex tasks.** The 3D-Folding and 2D-Unfolding categories include easy and hard sub-levels, enabling granular assessment of model capabilities.
- **Standardized format for easy integration.** The dataset uses parquet format with consistent JSON structure, facilitating seamless integration with existing MLLM frameworks.
### Dataset Structure
The Paper Folding Puzzles dataset is organized as follows:
```
PFP_dataset/
├── train/
│   ├── Single-Step.parquet
│   ├── Inverse.parquet
│   ├── Multi-Step.parquet
│   ├── 3D-Folding/
│   │   ├── _2DTo3D_N.parquet
│   │   └── _2DTo3D_Y.parquet
│   └── 2D-Unfolding/
│       ├── _3DTo2D_N.parquet
│       └── _3DTo2D_Y.parquet
└── test/
    ├── Single-Step.parquet
    ├── Inverse.parquet
    ├── Multi-Step.parquet
    ├── 3D-Folding.parquet
    └── 2D-Unfolding.parquet
```
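Under this layout, the relative parquet path for a given split and task type can be derived mechanically. The sketch below is illustrative only: the helper name is ours, and the assumption that the `_N`/`_Y` sub-files correspond to the two difficulty sub-levels is inferred from the tree above, not documented.

``` python
from pathlib import PurePosixPath

# In train/, 3D-Folding and 2D-Unfolding are split into two sub-files each.
# Mapping _N/_Y to the easy/hard sub-levels is an assumption for illustration.
TRAIN_SUBDIRS = {
    "3D-Folding": ["_2DTo3D_N.parquet", "_2DTo3D_Y.parquet"],
    "2D-Unfolding": ["_3DTo2D_N.parquet", "_3DTo2D_Y.parquet"],
}

def task_files(split: str, task: str) -> list[str]:
    """Return the relative parquet paths implied by the tree for (split, task)."""
    if split == "train" and task in TRAIN_SUBDIRS:
        return [str(PurePosixPath(split, task, f)) for f in TRAIN_SUBDIRS[task]]
    return [str(PurePosixPath(split, f"{task}.parquet"))]
```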
### Data Instances
For each instance in the dataset, the following fields are provided:
``` json
{
  "image": "circle_001.png",
  "answer": "D"
}
```
### Data Fields
- `image`: a string containing the relative path to the paper folding puzzle image (e.g., "circle_001.png")
- `answer`: a string indicating the correct answer option (A, B, C, or D)
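Given these two fields, records can be sanity-checked before use. A minimal sketch (the function name is ours; the `.png` extension check is an assumption based on the example above):

``` python
VALID_ANSWERS = {"A", "B", "C", "D"}

def validate_sample(sample: dict) -> bool:
    """Check that a record has the documented fields and a legal answer option."""
    return (
        isinstance(sample.get("image"), str)
        and sample["image"].endswith(".png")  # extension assumed from the example
        and sample.get("answer") in VALID_ANSWERS
    )
```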
# 🚀 Quick Start
## Loading the Dataset
``` python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("hznuer/PFP_datasets")

# Or load specific splits
train_dataset = load_dataset("hznuer/PFP_datasets", split="train")
test_dataset = load_dataset("hznuer/PFP_datasets", split="test")

# Load specific task types
single_step_data = load_dataset("hznuer/PFP_datasets", "Single-Step")
```
## Basic Usage Example
``` python
from datasets import load_dataset

# Example of processing the dataset
dataset = load_dataset("hznuer/PFP_datasets", split="train")
for sample in dataset:
    image_path = sample["image"]
    correct_answer = sample["answer"]
    # Process your paper folding puzzle here
```
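The per-sample loop extends naturally to accuracy scoring. A minimal sketch with a stubbed predictor standing in for an actual MLLM call (`predict`, `accuracy`, and the toy records are placeholders, not part of the dataset API):

``` python
def predict(image_path: str) -> str:
    """Placeholder for an MLLM call; always answers 'D' for illustration."""
    return "D"

def accuracy(samples) -> float:
    """Fraction of samples where the predicted option matches the answer."""
    correct = sum(predict(s["image"]) == s["answer"] for s in samples)
    return correct / len(samples)

toy = [
    {"image": "circle_001.png", "answer": "D"},
    {"image": "circle_002.png", "answer": "B"},
]
```

With the constant-'D' stub, `accuracy(toy)` is 0.5; replacing `predict` with a real model call yields the benchmark score for that split.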
# βœ’οΈ Citation
If you find Paper Folding Puzzles helpful, please consider giving this repo a :star: and citing:
``` latex
@inproceedings{zhou2026paperfolding,
title={Paper Folding Puzzles: A Benchmark for Evaluating Spatial Reasoning in Multimodal Large Language Models},
author={Zhou, Dibin and Xu, Yantao and Huang, Zongming and Yan, Zengwei and Liu, Wenhao and Miao, Yongwei and Ren, Jianfeng and Liu, Fuchang},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
year={2026}
}
```
# 👥 Authors
**Dibin Zhou**, **Yantao Xu**, **Zongming Huang**, **Zengwei Yan**, **Wenhao Liu**, **Yongwei Miao**, **Jianfeng Ren**, **Fuchang Liu**
**Affiliation**: School of Information Science and Technology, Hangzhou Normal University & The Digital Port Technologies Lab, School of Computer Science, University of Nottingham Ningbo China
# 📞 Contact
For questions or issues regarding this dataset:
- Open an issue on the [GitHub repository](https://github.com/hznuer/PFP_bench)
- Contact the authors through the paper correspondence
---
**Paper Folding Puzzles: Advancing spatial reasoning evaluation for multimodal AI systems** 🧠