---
license: apache-2.0
task_categories:
  - image-to-text
  - text-to-image
language:
  - en
tags:
  - vision-language
  - image-captioning
  - multimodal
  - vlm
  - finetuning
size_categories:
  - 100K<n<1M
---

# kasvnmtp/vlm-image-captioning-dataset

## Dataset Description

This is a custom vision-language dataset for image captioning. It contains image-text pairs suitable for finetuning vision-language models (VLMs).

## Dataset Statistics

- **Total Samples:** 149,997
- **Train Samples:** 74,998
- **Test Samples:** 74,999
- **Features:** `image`, `text`, `sample_id`

## Dataset Structure

### Data Fields

- `image`: PIL `Image` object
- `text`: caption/description text for the image
- `sample_id`: unique identifier for each sample

### Data Splits

| Split | Samples |
|-------|---------|
| train | 74,998  |
| test  | 74,999  |

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("kasvnmtp/vlm-image-captioning-dataset")

# Access train and test splits
train_dataset = dataset["train"]
test_dataset = dataset["test"]

# Example usage
for sample in train_dataset:
    image = sample["image"]
    caption = sample["text"]
    sample_id = sample["sample_id"]
    # Your processing code here
```
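For finetuning, records are usually grouped into batches before being handed to a model's processor. Below is a minimal collate sketch built around this dataset's field names (`image`, `text`, `sample_id`); the `make_batch` helper and the synthetic records are illustrative assumptions, not part of the dataset itself:

```python
from PIL import Image


def make_batch(samples):
    """Group a list of dataset records into parallel lists for a processor."""
    return {
        "images": [s["image"] for s in samples],
        "texts": [s["text"] for s in samples],
        "sample_ids": [s["sample_id"] for s in samples],
    }


# Synthetic records mirroring the dataset's fields; in practice these
# would come from slicing or iterating the loaded dataset splits.
records = [
    {"image": Image.new("RGB", (224, 224)), "text": f"caption {i}", "sample_id": i}
    for i in range(4)
]

batch = make_batch(records)
print(len(batch["images"]), batch["texts"][0])
```

A collate function like this can be passed to a `torch.utils.data.DataLoader` via its `collate_fn` argument, keeping PIL images and caption strings paired until the model's own processor tokenizes and resizes them.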

## Use Cases

- Vision-language model (VLM) finetuning
- Image captioning model training
- Multimodal research
- Visual understanding tasks

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{kasvnmtp_vlm_image_captioning_dataset,
  title={kasvnmtp/vlm-image-captioning-dataset},
  author={Your Name},
  year={2025},
  url={https://huggingface.co/datasets/kasvnmtp/vlm-image-captioning-dataset}
}
```