---
license: mit
tags:
  - image-captioning
  - navigation
  - accessibility
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: image
      dtype: image
    - name: caption
      dtype: string
  splits:
    - name: train
      num_bytes: 750364704.9
      num_examples: 5755
    - name: validation
      num_bytes: 163584979.946
      num_examples: 1233
    - name: test
      num_bytes: 144699981.706
      num_examples: 1234
  download_size: 1018324832
  dataset_size: 1058649666.5519999
---

Merged Navigation-Focused Image Caption Dataset

This dataset is a combined and filtered version of two publicly available image-captioning datasets, curated to focus on images and captions relevant to navigation and scene understanding.

Source Datasets

This dataset is derived from the following two sources:

  1. COCO Captions (jxie/coco_captions)

  2. Automatic Image Captioning for Visually Impaired (aishrules25)

Filtering Process

The source datasets were filtered as follows:

COCO Captions Filtering:

The train split of the jxie/coco_captions dataset was processed. Images were selected if their corresponding captions contained one or more navigation-related keywords. The keywords used for filtering include (but are not limited to): "sidewalk", "walkway", "path", "road", "crosswalk", "curb", "intersection", "obstacle", "stairs", "doorway", "entrance", "exit", "pedestrian", "vehicle", "car", "bus", "traffic sign", "traffic light", "bicycle lane", "bus stop", and other related terms (see NAVIGATION_KEYWORDS_COCO in coco_dataset_loader.py for the full list). To ensure a manageable and somewhat balanced subset, a maximum of 100 examples were randomly selected for each identified keyword.
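The keyword-based selection described above can be sketched as follows. This is an illustrative reconstruction, not the actual loader: the keyword list here is only a small subset, and the full list lives in NAVIGATION_KEYWORDS_COCO in coco_dataset_loader.py; the per-keyword cap and deduplication strategy are assumptions.

```python
import random

# Illustrative subset only; the full list is NAVIGATION_KEYWORDS_COCO
# in coco_dataset_loader.py.
NAVIGATION_KEYWORDS = ["sidewalk", "crosswalk", "stairs", "traffic light"]
MAX_PER_KEYWORD = 100  # cap per keyword, per the filtering description


def select_navigation_examples(examples, seed=42):
    """Keep examples whose caption contains a navigation keyword,
    randomly sampling at most MAX_PER_KEYWORD examples per keyword."""
    rng = random.Random(seed)
    by_keyword = {kw: [] for kw in NAVIGATION_KEYWORDS}
    for ex in examples:
        caption = ex["caption"].lower()
        for kw in NAVIGATION_KEYWORDS:
            if kw in caption:
                by_keyword[kw].append(ex)

    selected, seen = [], set()
    for matches in by_keyword.values():
        rng.shuffle(matches)
        for ex in matches[:MAX_PER_KEYWORD]:
            if id(ex) not in seen:  # a caption may match several keywords
                seen.add(id(ex))
                selected.append(ex)
    return selected


examples = [
    {"caption": "A dog runs across the sidewalk."},
    {"caption": "A red traffic light above a crosswalk."},
    {"caption": "A cat sleeping on a couch."},
]
subset = select_navigation_examples(examples)  # keeps the first two examples
```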

Kaggle Dataset Filtering:

The "Automatic Image Captioning for Visually Impaired" dataset was filtered to include images belonging to the following predefined categories, deemed relevant for navigation and scene understanding: "Construction", "bench", "bus", "door", "food_street", "lift", "stairsup_", "stairsdown_", "tactile", "trash", "wetfloor".
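The category filter amounts to a simple membership check against the predefined list. A minimal sketch (the record layout is an assumption; only the category names come from the description above):

```python
# Categories deemed relevant for navigation, per the dataset description.
NAVIGATION_CATEGORIES = {
    "Construction", "bench", "bus", "door", "food_street", "lift",
    "stairsup_", "stairsdown_", "tactile", "trash", "wetfloor",
}

# Hypothetical records; the real dataset pairs each image with a category.
records = [
    {"category": "bench", "caption": "A wooden bench beside a path."},
    {"category": "flower", "caption": "A close-up of a rose."},
]

# Keep only images belonging to the navigation-relevant categories.
kept = [r for r in records if r["category"] in NAVIGATION_CATEGORIES]
```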

Merging and Final Dataset Preparation

  1. The two filtered datasets (COCO-derived and Kaggle-derived) were loaded as Hugging Face Dataset objects.
  2. Their features were verified for compatibility (image and caption).
  3. The datasets were concatenated into a single dataset.
  4. This merged dataset was then shuffled randomly (using seed=42) to ensure a mixed distribution of samples.
  5. The shuffled dataset was then split into training, validation, and test sets (approximately 70%, 15%, and 15% respectively) using a fixed seed for reproducibility.

Dataset Structure

The final dataset consists of image-caption pairs with the following features:

  • image: A Hugging Face datasets.Image object. The images are decoded by default.
  • caption: A string (datasets.Value('string')) containing the descriptive caption for the image.

Dataset Size

  • Total number of examples: 8,222 (the sum of the train, validation, and test splits)
  • Configuration: This dataset has a single default configuration.
  • Splits:
    • train: 5755 examples
    • validation: 1233 examples
    • test: 1234 examples

Intended Use

This dataset is primarily intended for fine-tuning image captioning models. The focus on navigation-related scenes and objects makes it potentially useful for:

  • Developing assistive technologies for visually impaired individuals.
  • Training models for robotics and autonomous navigation.
  • General scene understanding models with an emphasis on outdoor and indoor navigational cues.

Licensing Information for this Merged Dataset

Given the sources:

  • The COCO-derived portion is based on a CC BY 4.0 license.
  • The Kaggle dataset's license is "Unknown".

This merged dataset is provided for research and academic purposes. If you intend to use it commercially, carefully review the licenses of the original datasets; a conservative approach is to treat the merged dataset under the most restrictive of its components' licenses, or to contact the original dataset creators. It is the user's responsibility to ensure compliance with all original licenses.

Citation

If you use this dataset in your work, please consider citing the original sources:

  • COCO Dataset:
    @inproceedings{lin2014microsoft,
      title={Microsoft coco: Common objects in context},
      author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
      booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
      pages={740--755},
      year={2014},
      organization={Springer International Publishing}
    }
    
  • Hugging Face COCO Captions: jxie/coco_captions
  • Kaggle - Automatic Image Captioning for Visually Impaired: Aishwarya S. (2023). Automatic Image Captioning for Visually Impaired. Kaggle. Retrieved from https://www.kaggle.com/datasets/aishrules25/automatic-image-captioning-for-visually-impaired

And if you use this specific merged version, you can cite this Hugging Face Dataset repository.