---
pretty_name: Chess1K VQA
license: cc-by-4.0
task_categories:
- visual-question-answering
- image-text-to-text
task_ids:
- visual-question-answering
language:
- en
multimodal:
- image
- text
size_categories:
- 1K<n<10K
---

# ♟️ Chess1K-VQA  

![img1](https://cdn-uploads.huggingface.co/production/uploads/66479ee8589fb05cc27dd28f/PGbpYARFYMBZ0H2xS6DlW.png)

### 👁️‍🗨️ A Chessboard Visual Question Answering Dataset for Vision–Language Models


Chess1K-VQA is a synthetic Visual Question Answering (VQA) dataset built using
programmatically generated chessboard images. The dataset is designed to
evaluate vision–language models (VLMs) on spatial grounding, visual perception,
and basic rule-based reasoning in a controlled and fully deterministic domain.

---

## 📦 Dataset Details

### 🧩 Dataset Description

Chess1K-VQA consists of 1,000 rendered chessboard images generated using the
open-source `python-chess` library. Each image is paired with a natural-language
question and a deterministic ground-truth answer derived from the underlying
chess position.

The dataset supports tasks such as piece identification, square occupancy,
check/checkmate detection, castling rights, turn recognition, and piece counting.

- **👨‍🏫 Curated by:**  
  Dr. B. Chandra Mohan  
  Sri K. Samba Siva Rao  
  Dr. K. Rajesh  
  Sri P. P. M. Prasad  
  Dr. T. Krishna Chaitanya  

- **🏛️ Affiliation:**  
  Department of Electronics and Communication Engineering,  
  Bapatla Engineering College, India

- **💰 Funded by:** Not applicable  
- **🤝 Shared by:** ECE Dept., Bapatla Engineering College  
- **🌐 Language(s):** English  
- **📜 License:** CC-BY-4.0

### 🔗 Dataset Sources

- **📁 Repository:** https://huggingface.co/datasets/chandrabhuma/Chess1K_VQA  
- **📄 Paper:** Not available  
- **🧪 Demo:** Not available  

---

## 🛠️ Uses

### ✅ Direct Use

- Zero-shot and few-shot evaluation of vision–language models  
- Visual–symbolic reasoning research  
- Spatial grounding and coordinate understanding (e.g., “square c2”)  
- Multimodal instruction-following evaluation  
- Accessibility tools for visually impaired or physically challenged users  
- Educational and benchmarking purposes  
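Since every answer is a short deterministic string, zero-shot evaluation typically reduces to exact-match scoring. The sketch below shows one minimal way to do this; the normalization rules (lowercasing, stripping trailing punctuation) are illustrative assumptions, not part of the dataset specification.

```python
def normalize(ans: str) -> str:
    """Lowercase and strip whitespace/trailing periods so 'Knight.' matches 'knight'."""
    return ans.strip().strip(".").lower()

def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match the reference after normalization."""
    assert len(predictions) == len(references)
    if not references:
        return 0.0
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

# Hypothetical model outputs scored against ground-truth answers.
preds = ["Knight.", "e4", "black"]
refs = ["knight", "e5", "Black"]
print(exact_match_accuracy(preds, refs))  # → 0.6666666666666666
```

Stricter or looser matching (e.g. accepting "N" for "knight") is a design choice left to the evaluator.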

### 🚫 Out-of-Scope Use

- Chess engine training or competitive gameplay optimization  
- Predicting optimal moves or winning strategies  
- Real-world scene understanding  
- Tasks requiring natural image diversity  

---

## 🧱 Dataset Structure

The dataset is provided as a Hugging Face `DatasetDict` with predefined splits:

- **🟢 Train:** 800 samples  
- **🔵 Test:** 200 samples  

Each sample contains:

- `id` — unique identifier  
- `image` — rendered chessboard image  
- `question` — natural-language question  
- `answer` — ground-truth answer  
- `fen` — Forsyth–Edwards Notation of the position  

---

## 🏗️ Dataset Creation

### 🎯 Curation Rationale

The dataset was created to provide a clean, reproducible benchmark for evaluating
vision–language models on structured visual reasoning tasks. Chess offers a
well-defined visual layout and deterministic rules, making it ideal for controlled
evaluation.

### 🧪 Source Data

#### ⚙️ Data Collection and Processing

Chess positions were generated programmatically using the `python-chess` library.
Each position was rendered into an image using standard visualization tools.
Questions and answers were automatically generated using rule-based logic.
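As an illustration of such rule-based generation, the board and side-to-move fields of a standard FEN string already suffice to derive question–answer pairs deterministically. The stdlib-only sketch below uses hypothetical question templates; the dataset's actual generator is not published here.

```python
def board_field(fen: str) -> str:
    """The piece-placement field is the first space-separated part of a FEN."""
    return fen.split()[0]

def count_pieces(fen: str, symbol: str) -> int:
    """Count a piece symbol (uppercase = white, lowercase = black) on the board."""
    return board_field(fen).count(symbol)

def side_to_move(fen: str) -> str:
    """The second FEN field is 'w' (white to move) or 'b' (black to move)."""
    return "white" if fen.split()[1] == "w" else "black"

# Starting position as a worked example.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
qa_pairs = [
    ("How many white pawns are on the board?", str(count_pieces(start, "P"))),
    ("Whose turn is it to move?", side_to_move(start)),
]
print(qa_pairs)  # → [('How many white pawns are on the board?', '8'), ('Whose turn is it to move?', 'white')]
```

Richer questions (check detection, castling rights) follow the same pattern but need a full rules engine such as `python-chess` rather than string inspection.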

#### 👨‍💻 Who are the source data producers?

All data was synthetically generated by scripts authored by the dataset creators.
No external datasets or human subjects were involved.

---

## 🏷️ Annotations

### 🧠 Annotation Process

Annotations were generated automatically using deterministic rules derived from
the chess position. No manual annotation was performed.

### 👥 Who are the annotators?

Annotations were produced programmatically by the dataset generation system.

### 🔐 Personal and Sensitive Information

The dataset does not contain any personal, sensitive, or private information.

---

## ⚠️ Bias, Risks, and Limitations

- Limited to rendered chessboard images  
- Does not represent natural image variability  
- Evaluates structured reasoning, not strategic gameplay  

### 📌 Recommendations

Users should treat this dataset as a benchmark for visual–symbolic reasoning and
spatial grounding, not as a general-purpose vision dataset.

---

## 📖 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{chess1k_vqa_2026,
  title        = {Chess1K-VQA: A Chessboard Visual Question Answering Dataset for Vision-Language Models},
  author       = {
    Chandra Mohan, B.
    and Samba Siva Rao, K.
    and Rajesh, K.
    and Prasad, P. P. M.
    and Krishna Chaitanya, T.
  },
  affiliation  = {Department of Electronics and Communication Engineering, Bapatla Engineering College, India},
  year         = {2026},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/chandrabhuma/Chess1K_VQA}
}
```