---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: HandVQA
size_categories:
- 1M<n<10M
---
<div align="center">
# HandVQA: Diagnosing and Improving Fine-Grained Spatial Reasoning about Hands in Vision-Language Models
<p align="center">
<strong>CVPR 2026</strong>
</p>
<p align="center">
MD Khalequzzaman Chowdhury Sayem<sup>1*</sup>,
Mubarrat Tajoar Chowdhury<sup>1*</sup>,
Yihalem Yimolal Tiruneh<sup>1</sup>,
Muneeb A. Khan<sup>1</sup>,
Muhammad Salman Ali<sup>1</sup>,
Binod Bhattarai<sup>2,3,4†</sup>,
Seungryul Baek<sup>1†</sup>
</p>
<p align="center">
<sup>1</sup>UNIST,
<sup>2</sup>University of Aberdeen,
<sup>3</sup>University College London,
<sup>4</sup>Fogsphere (Redev.AI Ltd)
</p>
<p align="center">
<sup>*</sup>Equal contribution.
<sup>†</sup>These authors jointly supervised this work.
</p>
<p align="center">
<a href="https://kcsayem.github.io/handvqa/"><img alt="Project Page" src="https://img.shields.io/badge/Project-Page-3b3b3b?style=for-the-badge"></a>
<a href="https://arxiv.org/abs/2603.26362">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-2603.26362-b31b1b?style=for-the-badge">
</a>
<a href="https://huggingface.co/datasets/kcsayem/handvqa"><img alt="Dataset" src="https://img.shields.io/badge/HuggingFace-Dataset-fcc624?style=for-the-badge"></a>
<a href="https://github.com/kcsayem/handvqa"><img alt="GitHub Code" src="https://img.shields.io/badge/GitHub-Code-2f3542?style=for-the-badge"></a>
</p>
</div>
HandVQA is a large-scale diagnostic benchmark designed to evaluate fine-grained spatial reasoning in vision-language models (VLMs), focusing on articulated hand pose understanding.
It contains over **1.6 million multiple-choice questions (MCQs)** derived from 3D hand annotations, probing joint-level relationships such as angles, distances, and relative positions.
## Dataset Description
### Motivation
Despite strong performance on general VQA tasks, VLMs struggle with fine-grained spatial reasoning, especially for articulated structures like human hands.
HandVQA is designed to diagnose these limitations by evaluating:
- Joint angle understanding
- Inter-joint distances
- Relative spatial positions (X, Y, Z)
### Data Sources
The dataset is built from the 3D hand joint annotations of three source datasets:
- FreiHAND
- InterHand2.6M
- FPHA
## Task Format
Each sample consists of:
- An image of a hand
- A multiple-choice question (MCQ)
- 4 candidate answers
- 1 correct answer
### Subtasks
HandVQA includes 5 categories:
1. Angle
2. Distance
3. Relative Position (X-axis)
4. Relative Position (Y-axis)
5. Relative Position (Z-axis)
Each question probes a specific geometric relation between hand joints.
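A sample can be represented as a simple record and scored by exact match on the option letter. The sketch below is illustrative only: the field names are assumptions for this card, not the dataset's actual on-disk schema.

```python
# Illustrative sketch of one HandVQA sample. Field names ("image",
# "category", "options", "answer") are assumptions for illustration,
# not the dataset's actual schema.
sample = {
    "image": "hand_00001234.jpg",   # hand image from a source dataset
    "category": "angle",            # one of the 5 subtask categories
    "question": "From the options below, choose the correct description.",
    "options": [
        "The middle finger is bent completely inward at the distal interphalangeal joint.",
        "The middle finger is bent inward at the distal interphalangeal joint.",
        "The middle finger is bent slightly inward at the distal interphalangeal joint.",
        "The middle finger is straight at the distal interphalangeal joint.",
    ],
    "answer": "D",                  # letter of the correct option
}

def is_correct(sample, predicted_letter):
    """Score a model prediction by exact match on the option letter."""
    return predicted_letter == sample["answer"]

print(is_correct(sample, "D"))  # True
print(is_correct(sample, "A"))  # False
```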
## Example
**Question:**
From the options below, choose the correct description.
**Options:**
A. The middle finger is bent completely inward at the distal interphalangeal joint.
B. The middle finger is bent inward at the distal interphalangeal joint.
C. The middle finger is bent slightly inward at the distal interphalangeal joint.
D. The middle finger is straight at the distal interphalangeal joint.
**Answer:** D
(Here the correct option is determined directly from the measured joint angle in the 3D annotation.)
## Dataset Statistics
- Total questions: over 1.6 million
- Number of datasets used: 3
- Categories: 5
## Data Generation Pipeline
HandVQA is generated using a deterministic pipeline:
1. **Pose Descriptor Extraction**
- Compute angles, distances, and relative positions from 3D joints
2. **Discretization**
- Convert continuous values into categories (e.g., bent, straight)
3. **Sentence Generation**
- Fill structured templates
4. **MCQ Formation**
- Generate correct + distractor answers
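The four stages above can be sketched end-to-end for the "angle" subtask. Note that the bin thresholds, category wording, option ordering, and joint coordinates below are illustrative assumptions, not the paper's exact values.

```python
import math

# Stage 1. Pose descriptor extraction: angle at joint b (degrees)
# formed by 3D points a-b-c.
def joint_angle(a, b, c):
    ab = [a[i] - b[i] for i in range(3)]
    cb = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(ab, cb))
    norm = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Stage 2. Discretization: map the continuous angle to an ordinal
# category. Thresholds here are assumed for illustration.
BINS = [
    (150.0, "straight"),
    (110.0, "bent slightly inward"),
    (60.0, "bent inward"),
    (0.0, "bent completely inward"),
]

def discretize(angle_deg):
    for lower_bound, label in BINS:
        if angle_deg >= lower_bound:
            return label

# Stage 3. Sentence generation: fill a structured template.
def describe(finger, joint, label):
    return f"The {finger} is {label} at the {joint}."

# Stage 4. MCQ formation: the correct category plus the other bins
# as distractors.
def make_mcq(finger, joint, angle_deg):
    labels = [label for _, label in BINS]
    options = [describe(finger, joint, label) for label in labels]
    answer = "ABCD"[labels.index(discretize(angle_deg))]
    return options, answer

# Nearly collinear PIP-DIP-TIP points => a nearly straight DIP joint.
pip, dip, tip = (0, 0, 0), (0, 1, 0), (0, 2, 0.1)
options, answer = make_mcq(
    "middle finger", "distal interphalangeal joint",
    joint_angle(pip, dip, tip),
)
```

Because every stage is a deterministic function of the 3D joints, each question's correct answer is exact by construction rather than human-labeled.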
## Intended Uses
- Benchmarking spatial reasoning in VLMs
- Training spatially aware multimodal models
- Evaluating hallucination in pose understanding
- Studying geometry-grounded reasoning
## Evaluation Metrics
- Accuracy
- Mean Absolute Error (MAE) for ordinal tasks (angle, distance)
HandVQA evaluates whether models truly understand spatial geometry rather than relying on language priors.
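Both metrics reduce to a few lines. In this sketch the mapping of option letters to ordinal bin indices is an assumed convention (A–D treated as an ordered scale), which only makes sense for the ordinal subtasks (angle, distance):

```python
# Assumed convention: option letters map to ordered bin indices.
LETTER_TO_INDEX = {"A": 0, "B": 1, "C": 2, "D": 3}

def accuracy(preds, golds):
    """Fraction of questions where the predicted letter matches."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def ordinal_mae(preds, golds):
    """MAE over bin indices; meaningful only for ordinal subtasks
    (angle, distance), where it penalizes near-misses less than
    answers several bins away."""
    return sum(abs(LETTER_TO_INDEX[p] - LETTER_TO_INDEX[g])
               for p, g in zip(preds, golds)) / len(golds)

preds = ["D", "B", "A", "C"]
golds = ["D", "A", "A", "D"]
print(accuracy(preds, golds))     # 0.5
print(ordinal_mae(preds, golds))  # 0.5
```

MAE complements accuracy here: two models with the same accuracy can differ in how far off their wrong answers are on the ordinal scale.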
## Citation
```bibtex
@inproceedings{sayem2026handvqa,
title = {HandVQA: Diagnosing and Improving Fine-Grained Spatial Reasoning about Hands in Vision-Language Models},
author = {Sayem, MD Khalequzzaman Chowdhury and Chowdhury, Mubarrat Tajoar and Tiruneh, Yihalem Yimolal and Khan, Muneeb A. and Ali, Muhammad Salman and Bhattarai, Binod and Baek, Seungryul},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2026},
note = {arXiv:2603.26362}
}
```
|