---
license: mit
tags:
- image-captioning
- navigation
- accessibility
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: image
    dtype: image
  - name: caption
    dtype: string
  splits:
  - name: train
    num_bytes: 750364704.9
    num_examples: 5755
  - name: validation
    num_bytes: 163584979.946
    num_examples: 1233
  - name: test
    num_bytes: 144699981.706
    num_examples: 1234
  download_size: 1018324832
  dataset_size: 1058649666.5519999
---

# Merged Navigation-Focused Image Caption Dataset
This dataset is a combination and filtered version of two publicly available image captioning datasets, specifically curated to focus on images and captions relevant to navigation and scene understanding.
## Source Datasets
This dataset is derived from the following two sources:
### COCO Captions (`jxie/coco_captions`)
- Original Hugging Face Hub ID: `jxie/coco_captions`
- Link: https://huggingface.co/datasets/jxie/coco_captions
- Original Authors: The COCO dataset was created by a consortium including Microsoft. The `jxie/coco_captions` version is provided by user `jxie` on Hugging Face.
- Original License: The COCO dataset is generally available under a Creative Commons Attribution 4.0 International License. The `jxie/coco_captions` dataset on Hugging Face is assumed to follow COCO's terms.
### Automatic Image Captioning for Visually Impaired (aishrules25)
- Original Kaggle Link: https://www.kaggle.com/datasets/aishrules25/automatic-image-captioning-for-visually-impaired/data
- Original Author: Aishwarya S
- Original License: The license for this dataset on Kaggle is listed as "Unknown". Users should verify the specific terms of use on the Kaggle dataset page before any further distribution or other specific uses. For the purposes of this merged dataset, it is used under the assumption of fair use for research and academic purposes.
## Filtering Process
The source datasets were filtered as follows:
### COCO Captions Filtering
The `train` split of the `jxie/coco_captions` dataset was processed. Images were selected if their corresponding captions contained one or more navigation-related keywords.
The keywords used for filtering include (but are not limited to):
"sidewalk", "walkway", "path", "road", "crosswalk", "curb", "intersection", "obstacle", "stairs", "doorway", "entrance", "exit", "pedestrian", "vehicle", "car", "bus", "traffic sign", "traffic light", "bicycle lane", "bus stop", and other related terms (see `NAVIGATION_KEYWORDS_COCO` in `coco_dataset_loader.py` for the full list).
To ensure a manageable and somewhat balanced subset, a maximum of 100 examples were randomly selected for each identified keyword.
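The selection described above can be sketched as follows. This is a minimal, hypothetical reconstruction: the function name and the truncated keyword list are illustrative, and the full list lives in `NAVIGATION_KEYWORDS_COCO` in `coco_dataset_loader.py`.

```python
import random

# Illustrative subset of the navigation keywords; the full list is
# NAVIGATION_KEYWORDS_COCO in coco_dataset_loader.py.
NAVIGATION_KEYWORDS = ["sidewalk", "crosswalk", "stairs", "traffic light"]

def select_navigation_examples(examples, keywords, max_per_keyword=100, seed=42):
    """Return sorted indices of examples whose caption mentions a keyword,
    keeping at most `max_per_keyword` randomly chosen matches per keyword."""
    rng = random.Random(seed)
    selected = set()
    for keyword in keywords:
        matches = [i for i, ex in enumerate(examples)
                   if keyword in ex["caption"].lower()]
        rng.shuffle(matches)            # randomize before capping
        selected.update(matches[:max_per_keyword])
    return sorted(selected)
```

Because an image can match several keywords, the per-keyword cap bounds each keyword's contribution rather than the overall total.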
### Kaggle Dataset Filtering
The "Automatic Image Captioning for Visually Impaired" dataset was filtered to include images belonging to the following predefined categories, deemed relevant for navigation and scene understanding:
"Construction", "bench", "bus", "door", "food_street", "lift", "stairsup_", "stairsdown_", "tactile", "trash", "wetfloor".
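Assuming each Kaggle example carries a `category` field derived from its source folder (an assumption; the original loader may derive this differently), the filter reduces to a set membership test:

```python
# Categories retained from the Kaggle dataset (copied from the list above).
NAVIGATION_CATEGORIES = {
    "Construction", "bench", "bus", "door", "food_street", "lift",
    "stairsup_", "stairsdown_", "tactile", "trash", "wetfloor",
}

def keep_example(example):
    """True if the example belongs to a navigation-relevant category.
    Assumes a `category` field; the real loader may name it differently."""
    return example["category"] in NAVIGATION_CATEGORIES
```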
Merging and Final Dataset Preparation
- The two filtered datasets (COCO-derived and Kaggle-derived) were loaded as Hugging Face
Datasetobjects. - Their features were verified for compatibility (
imageandcaption). - The datasets were concatenated into a single dataset.
- This merged dataset was then shuffled randomly (using
seed=42) to ensure a mixed distribution of samples. - The shuffled dataset was then split into training, validation, and test sets (approximately 70%, 15%, and 15% respectively) using a fixed seed for reproducibility.
## Dataset Structure
The final dataset consists of image-caption pairs with the following features:
- `image`: A Hugging Face `datasets.Image` object. Images are decoded by default.
- `caption`: A string (`datasets.Value('string')`) containing the descriptive caption for the image.
## Dataset Size
- Total number of examples: 8222 (the sum of the train, validation, and test splits)
- Configuration: this dataset has a single `default` configuration.
- Splits:
  - `train`: 5755 examples
  - `validation`: 1233 examples
  - `test`: 1234 examples
## Intended Use
This dataset is primarily intended for fine-tuning image captioning models. The focus on navigation-related scenes and objects makes it potentially useful for:
- Developing assistive technologies for visually impaired individuals.
- Training models for robotics and autonomous navigation.
- General scene understanding models with an emphasis on outdoor and indoor navigational cues.
## Licensing Information for this Merged Dataset
Given the sources:
- The COCO-derived portion is available under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
- The Kaggle dataset's license is listed as "Unknown".
This merged dataset is provided for research and academic purposes. If you intend to use it commercially, carefully review the licenses of the original datasets. A conservative approach is to treat the merged dataset as governed by the most restrictive license among its components, or to contact the original dataset creators. It is the user's responsibility to ensure compliance with all original licenses.
## Citation
If you use this dataset in your work, please consider citing the original sources:
- **COCO Dataset:**

  ```bibtex
  @inproceedings{lin2014microsoft,
    title={Microsoft coco: Common objects in context},
    author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
    booktitle={Computer Vision--ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13},
    pages={740--755},
    year={2014},
    organization={Springer International Publishing}
  }
  ```

- **Hugging Face COCO Captions:** `jxie/coco_captions` (https://huggingface.co/datasets/jxie/coco_captions)
- **Kaggle - Automatic Image Captioning for Visually Impaired:** Aishwarya S. (2023). *Automatic Image Captioning for Visually Impaired*. Kaggle. Retrieved from https://www.kaggle.com/datasets/aishrules25/automatic-image-captioning-for-visually-impaired
And if you use this specific merged version, you can cite this Hugging Face Dataset repository.