---
license: mit
task_categories:
  - visual-question-answering
  - text-to-image
language:
  - en
tags:
  - vision-language
  - vqa
  - multimodal
  - question-answering
size_categories:
  - n<1K
---

# SimpleVQA Dataset

SimpleVQA is a small vision-language question-answering dataset designed for testing and reproducing vision-language model training pipelines. It contains 128 samples, each pairing an image with question-answer turns in a conversational format.

## Dataset Description

### Dataset Summary

SimpleVQA is a lightweight dataset containing 128 vision-language question-answering samples. Each sample includes:

- An image (512x512 RGB)
- A conversation with user questions and assistant answers
- The image file path for reference

This dataset is suitable for:

- Testing vision-language model training pipelines
- Reproducing experimental results
- Educational purposes and quick prototyping

### Supported Tasks

- Visual Question Answering (VQA): answer questions about image content
- Image Description: generate descriptions of image content
- Multimodal Conversation: engage in conversations about images

### Languages

The dataset is primarily in English.

## Dataset Structure

### Data Fields

Each sample contains the following fields:

- `messages`: list of conversation turns
  - `role`: `"user"` or `"assistant"`
  - `content`: text content of the message
- `image`: PIL Image object (RGB, 512x512)
- `image_path`: original image file path

### Data Splits

- `train`: 128 samples

### Example

```python
from datasets import load_dataset

dataset = load_dataset("JosephFace/simpleVQA")

# Access a sample
sample = dataset["train"][0]
print(sample["messages"])
# [
#   {"role": "user", "content": "What is shown in this image?"},
#   {"role": "assistant", "content": "This is sample image 0 from the SimpleVQA dataset."}
# ]

print(sample["image"])       # PIL Image object
print(sample["image_path"])  # "images/image_00000.jpg"
```

## Usage

### Load from the Hugging Face Hub

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("JosephFace/simpleVQA")

# Or load a specific split
train_dataset = load_dataset("JosephFace/simpleVQA", split="train")
```

### Load Locally

If you have the dataset files locally:

```python
from datasets import load_dataset, load_from_disk

# Load from the saved Arrow format
dataset = load_from_disk("path/to/hf_dataset")

# Or load from JSONL
dataset = load_dataset("json", data_files="simpleVQA_128.jsonl", split="train")
```
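When loading from JSONL, each line is one JSON object. A minimal sketch of a round-trip through that format, assuming a record layout inferred from the Data Fields section above (in JSONL the image is referenced by its path rather than embedded as pixels):

```python
import json

# Assumed JSONL record layout: the conversation plus the image path
# (the PIL image itself is not serialized in JSONL).
record = {
    "messages": [
        {"role": "user", "content": "What is shown in this image?"},
        {"role": "assistant", "content": "This is sample image 0 from the SimpleVQA dataset."},
    ],
    "image_path": "images/image_00000.jpg",
}

# Round-trip one JSONL line
line = json.dumps(record)
restored = json.loads(line)
print(restored["image_path"])     # images/image_00000.jpg
print(len(restored["messages"]))  # 2
```

After loading with the `json` builder, the `image` column can be reconstructed from `image_path` in a map step if your pipeline needs pixel data.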

### Use with Training Pipeline

```python
from datasets import load_dataset
from veomni.data.dataset import MappingDataset

# Load the dataset
hf_dataset = load_dataset("JosephFace/simpleVQA", split="train")

# Use with the VeOmni training pipeline
dataset = MappingDataset(data=hf_dataset, transform=your_transform_function)
```
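The `transform` callable maps one raw sample into whatever structure the training loop expects. As one illustrative, hypothetical example (the output format your pipeline actually requires may differ), a transform might flatten the conversation into a prompt/response pair:

```python
# Hypothetical transform: flatten one sample's messages into a single
# prompt/response pair, keeping the image alongside.
def simple_vqa_transform(sample):
    prompt_parts = []
    response_parts = []
    for turn in sample["messages"]:
        if turn["role"] == "user":
            prompt_parts.append(turn["content"])
        else:  # "assistant"
            response_parts.append(turn["content"])
    return {
        "prompt": "\n".join(prompt_parts),
        "response": "\n".join(response_parts),
        "image": sample["image"],  # left as-is for the vision encoder
    }

# Example on a schema-shaped sample (image replaced by None for brevity)
out = simple_vqa_transform({
    "messages": [
        {"role": "user", "content": "What is shown in this image?"},
        {"role": "assistant", "content": "This is sample image 0 from the SimpleVQA dataset."},
    ],
    "image": None,
})
print(out["prompt"])  # What is shown in this image?
```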

## Dataset Statistics

- Total samples: 128
- Image format: JPEG, 512x512 RGB
- Average conversation turns: 2 (1 user question + 1 assistant answer)
- Total images: 128
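These figures are easy to recompute from a loaded split. A quick sketch, shown here on schema-shaped stand-in samples in place of the actual `load_dataset(...)` call:

```python
# Stand-in for load_dataset("JosephFace/simpleVQA", split="train"):
# 128 samples, each with one user/assistant exchange.
samples = [
    {"messages": [{"role": "user", "content": "Q?"},
                  {"role": "assistant", "content": "A."}]}
    for _ in range(128)
]

total = len(samples)
avg_turns = sum(len(s["messages"]) for s in samples) / total
print(total)      # 128
print(avg_turns)  # 2.0
```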

## Limitations

- Small dataset size (128 samples): suitable for testing only
- Synthetic/placeholder images, not real-world data
- Limited question diversity
- Primarily English-language content

## Citation

```bibtex
@dataset{josephface_simplevqa,
  title={SimpleVQA: A Simple Vision-Language Question-Answering Dataset},
  author={JosephFace},
  year={2025},
  url={https://huggingface.co/datasets/JosephFace/simpleVQA}
}
```

## License

This dataset is released under the MIT License.