---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: HandVQA
size_categories:
- 1M<n<10M
---

<div align="center">

# HandVQA: Diagnosing and Improving Fine-Grained Spatial Reasoning about Hands in Vision-Language Models

<p align="center">
<strong>CVPR 2026</strong>
</p>

<p align="center">
MD Khalequzzaman Chowdhury Sayem<sup>1*</sup>,
Mubarrat Tajoar Chowdhury<sup>1*</sup>,
Yihalem Yimolal Tiruneh<sup>1</sup>,
Muneeb A. Khan<sup>1</sup>,
Muhammad Salman Ali<sup>1</sup>,
Binod Bhattarai<sup>2,3,4†</sup>,
Seungryul Baek<sup>1†</sup>
</p>

<p align="center">
<sup>1</sup>UNIST,
<sup>2</sup>University of Aberdeen,
<sup>3</sup>University College London,
<sup>4</sup>Fogsphere (Redev.AI Ltd)
</p>

<p align="center">
<sup>*</sup>Equal contribution.
<sup>†</sup>These authors jointly supervised this work.
</p>

<p align="center">
<a href="https://kcsayem.github.io/handvqa/"><img alt="Project Page" src="https://img.shields.io/badge/Project-Page-3b3b3b?style=for-the-badge"></a>
<a href="https://arxiv.org/abs/2603.26362"><img alt="arXiv" src="https://img.shields.io/badge/arXiv-2603.26362-b31b1b?style=for-the-badge"></a>
<a href="https://huggingface.co/datasets/kcsayem/handvqa"><img alt="Dataset" src="https://img.shields.io/badge/HuggingFace-Dataset-fcc624?style=for-the-badge"></a>
<a href="https://github.com/kcsayem/handvqa"><img alt="GitHub Code" src="https://img.shields.io/badge/GitHub-Code-2f3542?style=for-the-badge"></a>
</p>

</div>

HandVQA is a large-scale diagnostic benchmark designed to evaluate fine-grained spatial reasoning in vision-language models (VLMs), focusing on articulated hand pose understanding.

It contains over **1.6 million multiple-choice questions (MCQs)** derived from 3D hand annotations, probing joint-level relationships such as angles, distances, and relative positions.

## Dataset Description

### Motivation
Despite strong performance on general VQA tasks, VLMs struggle with fine-grained spatial reasoning, especially for articulated structures like human hands.

HandVQA is designed to diagnose these limitations by evaluating:
- Joint angle understanding
- Inter-joint distances
- Relative spatial positions (X, Y, Z)

### Data Sources
The dataset is built from the 3D hand joint annotations of three source datasets:
- FreiHAND
- InterHand2.6M
- FPHA

## Task Format

Each sample consists of:
- An image of a hand
- A multiple-choice question (MCQ)
- 4 candidate answers
- 1 correct answer

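As a quick-start illustration, the sketch below loads the dataset with the `datasets` library and inspects one sample. The split name and the field names (`image`, `question`, `options`, `answer`) are assumptions based on the structure above, not a confirmed schema.

```python
# Minimal loading sketch. The split name and field names (image, question,
# options, answer) are assumed from the sample structure above; inspect
# ds.features after loading to confirm the actual schema.
from datasets import load_dataset

ds = load_dataset("kcsayem/handvqa", split="train")  # assumed split name

sample = ds[0]
print(sample["question"])             # the MCQ prompt
for letter, option in zip("ABCD", sample["options"]):
    print(f"{letter}. {option}")      # the 4 candidate answers
print("Answer:", sample["answer"])    # the correct choice
```
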
### Subtasks
HandVQA includes 5 categories:
1. Angle
2. Distance
3. Relative Position (X-axis)
4. Relative Position (Y-axis)
5. Relative Position (Z-axis)

Each question probes a specific geometric relation between hand joints; the sketch below shows how such relations can be derived from 3D joint coordinates.

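The following is an illustrative reconstruction of these three relation types using NumPy, not the released generation code; the 21-joint layout and the specific joint indices are assumptions.

```python
# Illustrative geometry helpers, not the released HandVQA generation code.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b between the segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def joint_distance(a, b):
    """Euclidean distance between two joints."""
    return float(np.linalg.norm(a - b))

def relative_position(a, b, axis):
    """Sign of joint a's offset from joint b along axis 0/1/2 (X/Y/Z)."""
    return int(np.sign(a[axis] - b[axis]))

# Hypothetical usage with a (21, 3) array of 3D joints; indices are illustrative.
joints = np.random.rand(21, 3)
dip = joint_angle(joints[10], joints[11], joints[12])  # angle at an assumed DIP joint
```
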
## Example

**Question:**
From the options below, choose the correct description.

**Options:**

A. The middle finger is bent completely inward at the distal interphalangeal joint.

B. The middle finger is bent inward at the distal interphalangeal joint.

C. The middle finger is bent slightly inward at the distal interphalangeal joint.

D. The middle finger is straight at the distal interphalangeal joint.

**Answer:** D

## Dataset Statistics

- Total questions: over 1.6 million
- Source datasets: 3
- Question categories: 5

## Data Generation Pipeline

HandVQA is generated using a deterministic pipeline:

1. **Pose Descriptor Extraction**
   - Compute angles, distances, and relative positions from 3D joints

2. **Discretization**
   - Convert continuous values into categories (e.g., bent, straight)

3. **Sentence Generation**
   - Fill structured templates

4. **MCQ Formation**
   - Generate the correct answer plus distractors (see the sketch below)

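A condensed sketch of steps 2-4 for the angle category follows. The bin edges, category labels, and template wording are illustrative assumptions, not the paper's exact thresholds or templates.

```python
# Illustrative sketch of discretization, sentence generation, and MCQ formation.
# Bin edges, labels, and the template are assumptions, not the paper's values.
import random

BINS = [
    (0, 20, "straight"),
    (20, 60, "bent slightly inward"),
    (60, 120, "bent inward"),
    (120, 180, "bent completely inward"),
]

def discretize(flexion_deg):
    """Map a flexion angle (0 degrees = straight) to a category label."""
    for lo, hi, label in BINS:
        if lo <= flexion_deg < hi:
            return label
    return BINS[-1][2]

def make_mcq(finger, joint, flexion_deg):
    """Fill the template for every bin; the correct bin's sentence is the answer."""
    template = "The {} finger is {} at the {} joint."
    correct = template.format(finger, discretize(flexion_deg), joint)
    options = [template.format(finger, label, joint) for _, _, label in BINS]
    random.shuffle(options)
    return options, "ABCD"[options.index(correct)]

options, answer = make_mcq("middle", "distal interphalangeal", 12.0)
```
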
## Intended Uses

- Benchmarking spatial reasoning in VLMs
- Training spatially aware multimodal models
- Evaluating hallucination in pose understanding
- Studying geometry-grounded reasoning

## Evaluation Metrics

- Accuracy
- Mean Absolute Error (MAE) for ordinal tasks (angle, distance)

HandVQA evaluates whether models truly understand spatial geometry rather than relying on language priors.

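A minimal scoring sketch under the setup described above. Mapping option letters to ordinal bin indices for the MAE is an assumption; the released evaluation code may order bins differently.

```python
# Minimal scoring sketch; the letter-to-bin mapping for MAE is an assumption.
import numpy as np

def score(preds, golds, bin_index=None):
    """Accuracy over all questions, plus MAE over ordinal bins when given."""
    accuracy = float(np.mean([p == g for p, g in zip(preds, golds)]))
    mae = None
    if bin_index is not None:  # e.g., for the angle and distance categories
        mae = float(np.mean([abs(bin_index[p] - bin_index[g])
                             for p, g in zip(preds, golds)]))
    return accuracy, mae

# Hypothetical usage: options A-D assumed to be ordered bins for an angle question.
acc, mae = score(["A", "C"], ["A", "D"], bin_index={"A": 0, "B": 1, "C": 2, "D": 3})
```
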
## Citation
```bibtex
@inproceedings{sayem2026handvqa,
  title     = {HandVQA: Diagnosing and Improving Fine-Grained Spatial Reasoning about Hands in Vision-Language Models},
  author    = {Sayem, MD Khalequzzaman Chowdhury and Chowdhury, Mubarrat Tajoar and Tiruneh, Yihalem Yimolal and Khan, Muneeb A. and Ali, Muhammad Salman and Bhattarai, Binod and Baek, Seungryul},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026},
  note      = {arXiv:2603.26362}
}
```