---
pretty_name: HandVQA
size_categories:
- 1M<n<10M
---

# HandVQA: Diagnosing and Improving Fine-Grained Spatial Reasoning about Hands in Vision-Language Models

HandVQA is a large-scale diagnostic benchmark designed to evaluate fine-grained spatial reasoning in vision-language models (VLMs), focusing on articulated hand pose understanding.

It contains over **1.6 million multiple-choice questions (MCQs)** derived from 3D hand annotations, probing joint-level relationships such as angles, distances, and relative positions.
## Dataset Description

### Motivation

Despite strong performance on general VQA tasks, VLMs struggle with fine-grained spatial reasoning, especially for articulated structures like human hands.

HandVQA is designed to diagnose these limitations by evaluating:
- Joint angle understanding
- Inter-joint distances
- Relative spatial positions (X, Y, Z)
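
These three descriptor types can be computed directly from 3D joint coordinates. A minimal sketch, using hypothetical wrist and fingertip coordinates rather than actual dataset annotations:

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D joints."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def relative_position(p, q, axis):
    """Where q lies relative to p along one axis (0=X, 1=Y, 2=Z)."""
    d = q[axis] - p[axis]
    if abs(d) < 1e-6:
        return "level"
    return "positive" if d > 0 else "negative"

# Hypothetical wrist and index-fingertip coordinates (metres).
wrist = (0.0, 0.0, 0.0)
index_tip = (0.03, 0.08, 0.02)

print(round(distance(wrist, index_tip), 4))
print(relative_position(wrist, index_tip, axis=1))
```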

### Data Sources

The dataset is built from the 3D hand joint annotations of:
- FreiHAND
- InterHand2.6M
- FPHA

## Task Format

Each sample consists of:
- An image of a hand
- A multiple-choice question (MCQ)
- 4 candidate answers
- 1 correct answer

### Subtasks

HandVQA includes 5 categories:
1. Angle
2. Distance
3. Relative Position (X-axis)
4. Relative Position (Y-axis)
5. Relative Position (Z-axis)

Each question probes a specific geometric relation between hand joints.
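
A sample in this format can be sketched as a plain record plus a one-line scoring helper. The field names below are assumptions for illustration, not the dataset's actual schema:

```python
# Hypothetical HandVQA record layout; field names are assumptions,
# not the dataset's actual schema.
sample = {
    "image": "freihand/00000123.jpg",
    "category": "angle",
    "question": "From the options below, choose the correct description.",
    "options": [
        "The middle finger is bent completely inward at the distal interphalangeal joint.",
        "The middle finger is bent inward at the distal interphalangeal joint.",
        "The middle finger is bent slightly inward at the distal interphalangeal joint.",
        "The middle finger is straight at the distal interphalangeal joint.",
    ],
    "answer": "D",
}

def is_correct(sample, predicted_letter):
    """Score one MCQ: compare a model's predicted letter (A-D) to the key."""
    return predicted_letter.strip().upper() == sample["answer"]

print(is_correct(sample, "d"))  # True
```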

## Example

**Question:**
From the options below, choose the correct description.

**Options:**
A. The middle finger is bent completely inward at the distal interphalangeal joint.
B. The middle finger is bent inward at the distal interphalangeal joint.
C. The middle finger is bent slightly inward at the distal interphalangeal joint.
D. The middle finger is straight at the distal interphalangeal joint.

**Answer:** D

## Dataset Statistics

- Total questions: over 1.6 million
- Source datasets: 3
- Categories: 5

## Data Generation Pipeline

HandVQA is generated using a deterministic pipeline:

1. **Pose Descriptor Extraction**
   - Compute angles, distances, and relative positions from 3D joints
2. **Discretization**
   - Convert continuous values into categories (e.g., bent, straight)
3. **Sentence Generation**
   - Fill structured templates
4. **MCQ Formation**
   - Generate the correct answer plus distractor answers
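
The four steps above can be sketched end to end for the angle subtask. The thresholds, category names, and template wording here are illustrative assumptions, not the dataset's exact values:

```python
import math
import random

# Step 1: pose descriptor extraction — angle at joint b (degrees)
# formed by the 3D points a-b-c.
def joint_angle(a, b, c):
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# Step 2: discretize the continuous angle (illustrative thresholds).
def discretize(angle_deg):
    if angle_deg > 160:
        return "straight"
    if angle_deg > 120:
        return "bent slightly inward"
    if angle_deg > 60:
        return "bent inward"
    return "bent completely inward"

# Step 3: fill a structured template (wording is illustrative).
TEMPLATE = "The {finger} finger is {state} at the {joint} joint."

# Step 4: form an MCQ, using the other categories as distractors.
def make_mcq(finger, joint, a, b, c, rng):
    state = discretize(joint_angle(a, b, c))
    states = ["straight", "bent slightly inward", "bent inward",
              "bent completely inward"]
    rng.shuffle(states)
    options = [TEMPLATE.format(finger=finger, state=s, joint=joint)
               for s in states]
    answer = "ABCD"[states.index(state)]
    return options, answer

# Hypothetical joints around a nearly straight DIP joint.
rng = random.Random(0)
options, answer = make_mcq(
    "middle", "distal interphalangeal",
    (0.0, 0.0, 0.0), (0.0, 0.03, 0.0), (0.0, 0.061, 0.002), rng)
```

Because every step is a pure function of the 3D annotations, the same joints always yield the same question and answer key, which is what makes the pipeline deterministic.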

## Intended Uses

- Benchmarking spatial reasoning in VLMs
- Training spatially aware multimodal models
- Evaluating hallucination in pose understanding
- Studying geometry-grounded reasoning

## Evaluation Metrics

- Accuracy
- Mean Absolute Error (MAE) for ordinal tasks (angle, distance)

HandVQA evaluates whether models truly understand spatial geometry rather than relying on language priors.
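
As a sketch, both metrics can be computed over a batch of categorical predictions; the ordinal scale below is an illustrative assumption about how the angle categories are ranked:

```python
# Illustrative ordinal scale for the angle task (an assumption,
# not the dataset's exact category list).
ANGLE_SCALE = ["straight", "bent slightly inward", "bent inward",
               "bent completely inward"]

def accuracy(preds, golds):
    """Fraction of exactly matching predictions."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def ordinal_mae(preds, golds, scale):
    """MAE in category steps: mean |rank(pred) - rank(gold)|."""
    return sum(abs(scale.index(p) - scale.index(g))
               for p, g in zip(preds, golds)) / len(golds)

preds = ["straight", "bent inward", "bent completely inward"]
golds = ["straight", "bent slightly inward", "bent inward"]
print(accuracy(preds, golds))                 # 1/3
print(ordinal_mae(preds, golds, ANGLE_SCALE)) # 2/3
```

Ordinal MAE rewards near-misses: predicting "bent slightly inward" for a "straight" joint is penalized less than predicting "bent completely inward", which plain accuracy cannot distinguish.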

## Links

- Project page: https://kcsayem.github.io/handvqa/
- Paper: coming soon
- Code: coming soon

## Citation

Coming soon.