---
pretty_name: Chess1K VQA
license: cc-by-4.0
task_categories:
- visual-question-answering
- image-text-to-text
task_ids:
- visual-question-answering
language:
- en
multimodal:
- image
- text
size_categories:
- 1K<n<10K
---
# ♟️ Chess1K-VQA

👁️🗨️ *A Chessboard Visual Question Answering Dataset for Vision–Language Models*
Chess1K-VQA is a synthetic Visual Question Answering (VQA) dataset built using programmatically generated chessboard images. The dataset is designed to evaluate vision–language models (VLMs) on spatial grounding, visual perception, and basic rule-based reasoning in a controlled and fully deterministic domain.
## 📦 Dataset Details

### 🧩 Dataset Description
Chess1K-VQA consists of 1,000 rendered chessboard images generated using the open-source `python-chess` library. Each image is paired with a natural-language question and a deterministic ground-truth answer derived from the underlying chess position.
The dataset supports tasks such as piece identification, square occupancy, check/checkmate detection, castling rights, turn recognition, and piece counting.
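Several of these answer types (turn recognition, castling rights) can be read directly from the metadata fields of a FEN string, which is what makes the ground truth deterministic. A minimal, standard-library-only sketch (the helper name is illustrative, not part of the dataset tooling):

```python
# Sketch: reading deterministic answers straight from a FEN string.
# FEN fields: <board> <side-to-move> <castling> <en-passant> <halfmove> <fullmove>
def turn_and_castling(fen: str) -> tuple[str, str]:
    fields = fen.split()
    side = "White" if fields[1] == "w" else "Black"
    castling = fields[2]  # e.g. "KQkq"; "-" means no side may castle
    return side, castling

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(turn_and_castling(start))  # ('White', 'KQkq')
```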
👨🏫 Curated by:
- Dr. B. Chandra Mohan
- Sri K. Samba Siva Rao
- Dr. K. Rajesh
- Sri P. P. M. Prasad
- Dr. T. Krishna Chaitanya

🏛️ Affiliation: Department of Electronics and Communication Engineering, Bapatla Engineering College, India

💰 Funded by: Not applicable

🤝 Shared by: ECE Department, Bapatla Engineering College

🌐 Language(s): English

📜 License: CC-BY-4.0
### 🔗 Dataset Sources
- 📁 Repository: https://huggingface.co/datasets/chandrabhuma/Chess1K_VQA
- 📄 Paper: Not available
- 🧪 Demo: Not available
## 🛠️ Uses

### ✅ Direct Use
- Zero-shot and few-shot evaluation of vision–language models
- Visual–symbolic reasoning research
- Spatial grounding and coordinate understanding (e.g., “square c2”)
- Multimodal instruction-following evaluation
- Accessibility tools for visually impaired or physically challenged users
- Educational and benchmarking purposes
### 🚫 Out-of-Scope Use
- Chess engine training or competitive gameplay optimization
- Predicting optimal moves or winning strategies
- Real-world scene understanding
- Tasks requiring natural image diversity
## 🧱 Dataset Structure

The dataset is provided as a Hugging Face `DatasetDict` with predefined splits:
- 🟢 Train: 800 samples
- 🔵 Test: 200 samples
Each sample contains:
- `id` — unique identifier
- `image` — rendered chessboard image
- `question` — natural-language question
- `answer` — ground-truth answer
- `fen` — Forsyth–Edwards Notation of the position
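To illustrate how the fields relate, here is a hypothetical sample (values invented for this sketch) whose `answer` is re-derived from its `fen` for a square-occupancy question, using only the standard library:

```python
# Sketch: answering "which piece is on square X?" from the FEN board field.
def piece_on_square(fen: str, square: str) -> str:
    ranks = fen.split()[0].split("/")   # rank 8 first, rank 1 last
    file_idx = ord(square[0]) - ord("a")
    rank = ranks[8 - int(square[1])]
    col = 0
    for ch in rank:
        if ch.isdigit():
            col += int(ch)              # run of empty squares
        else:
            if col == file_idx:
                names = {"p": "pawn", "n": "knight", "b": "bishop",
                         "r": "rook", "q": "queen", "k": "king"}
                color = "white" if ch.isupper() else "black"
                return f"{color} {names[ch.lower()]}"
            col += 1
        if col > file_idx:
            break
    return "empty"

sample = {
    "id": "0001",
    "question": "Which piece is on square e1?",
    "answer": "white king",
    "fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
}
assert piece_on_square(sample["fen"], "e1") == sample["answer"]
```

The exact wording of answers in the released dataset may differ; the point is that every `answer` is a pure function of `fen`.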
## 🏗️ Dataset Creation

### 🎯 Curation Rationale
The dataset was created to provide a clean, reproducible benchmark for evaluating vision–language models on structured visual reasoning tasks. Chess offers a well-defined visual layout and deterministic rules, making it ideal for controlled evaluation.
### 🧪 Source Data

#### ⚙️ Data Collection and Processing
Chess positions were generated programmatically using the `python-chess` library. Each position was rendered into an image using standard visualization tools. Questions and answers were automatically generated using rule-based logic.
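The rule-based generation step can be pictured as a set of template functions that map a FEN to a (question, answer) pair. A hypothetical, standard-library-only sketch of one such template (piece counting; the function name and question wording are illustrative):

```python
# Sketch: one rule-based question/answer template over a FEN string.
def count_question(fen: str, symbol: str, name: str) -> tuple[str, str]:
    board = fen.split()[0]              # board field only (piece placement)
    count = board.count(symbol)         # e.g. "P" = white pawn, "q" = black queen
    return f"How many {name}s are on the board?", str(count)

fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
q, a = count_question(fen, "P", "white pawn")
print(q, "->", a)  # -> 8
```

Templates for check/checkmate detection or castling rights would follow the same pattern, typically delegating the chess logic itself to `python-chess`.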
#### 👨💻 Who are the source data producers?
All data was synthetically generated by scripts authored by the dataset creators. No external datasets or human subjects were involved.
### 🏷️ Annotations

#### 🧠 Annotation Process
Annotations were generated automatically using deterministic rules derived from the chess position. No manual annotation was performed.
#### 👥 Who are the annotators?
Annotations were produced programmatically by the dataset generation system.
### 🔐 Personal and Sensitive Information
The dataset does not contain any personal, sensitive, or private information.
## ⚠️ Bias, Risks, and Limitations
- Limited to rendered chessboard images
- Does not represent natural image variability
- Evaluates structured reasoning, not strategic gameplay
### 📌 Recommendations
Users should treat this dataset as a benchmark for visual–symbolic reasoning and spatial grounding, not as a general-purpose vision dataset.
## 📖 Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{chess1k_vqa_2026,
  title     = {Chess1K-VQA: A Chessboard Visual Question Answering Dataset for Vision-Language Models},
  author    = {Chandra Mohan, B. and Samba Siva Rao, K. and Rajesh, K. and Prasad, P. P. M. and Krishna Chaitanya, T.},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/chandrabhuma/Chess1K_VQA}
}
```
