---
license: cc-by-4.0
language:
  - en
pretty_name: COCO-2014 Karpathy Splits (WebDataset)
task_categories:
  - image-to-text
tags:
  - webdataset
  - image-captioning
  - coco
  - karpathy-splits
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train
        path: train/train-*.tar
      - split: val
        path: val/val-*.tar
      - split: test
        path: test/test-*.tar
---

# COCO-2014 WebDataset Format (Karpathy Splits)

This dataset contains the COCO-2014 images and captions converted to WebDataset (WDS) format, using the Karpathy & Li (2015) dataset split for image captioning tasks.

## Overview

- **Total samples:** 123,287 images, each with 5 reference captions
- **Total size:** ~19 GB
- **Format:** WebDataset (`.tar` shards)
- **Shard size:** 1,000 samples per tar file
- **License:** CC-BY 4.0
- **Language:** English

## Structure

```
COCO-2014-WDS/
├── train/          (113,287 samples, 114 shards)
│   ├── train-00000.tar
│   ├── train-00001.tar
│   └── ...
├── val/            (5,000 samples, 5 shards)
│   ├── val-00000.tar
│   └── ...
└── test/           (5,000 samples, 5 shards)
    ├── test-00000.tar
    └── ...
```
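The shard counts above follow directly from the 1,000-sample shard size; a quick sanity check in plain Python (names are illustrative):

```python
import math

SHARD_SIZE = 1_000  # samples per tar shard, as documented below

split_sizes = {"train": 113_287, "val": 5_000, "test": 5_000}

# Number of shards per split: ceil(samples / shard size);
# only the last shard of a split is partially filled
shard_counts = {s: math.ceil(n / SHARD_SIZE) for s, n in split_sizes.items()}
print(shard_counts)  # {'train': 114, 'val': 5, 'test': 5}
```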

## File Format

Each `.tar` file contains sample pairs:

- `{key:09d}.jpg` - original JPEG image (raw bytes)
- `{key:09d}.json` - metadata with captions
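The `{key:09d}` placeholder is Python format syntax for a zero-padded 9-digit index; for example, with an illustrative key of 522:

```python
key = 522  # an illustrative sample index

# Zero-pad to 9 digits, exactly as the member names inside each shard
image_name = f"{key:09d}.jpg"
json_name = f"{key:09d}.json"
print(image_name, json_name)  # 000000522.jpg 000000522.json
```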

### JSON Structure

```json
{
  "captions": [
    "A woman wearing a net on her head cutting a cake.",
    "A woman cutting a large white sheet cake.",
    "A woman wearing a hair net cutting a large sheet cake.",
    "there is a woman that is cutting a white cake",
    "A woman marking a cake with the back of a chef's knife."
  ],
  "cocoid": 522418
}
```
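Each metadata member parses with the standard `json` module; a minimal sketch using an abbreviated literal (two captions shown for brevity, five in the actual dataset):

```python
import json

# Raw bytes of a .json member, written here as a string literal
raw = ('{"captions": ["A woman cutting a large white sheet cake.", '
       '"there is a woman that is cutting a white cake"], "cocoid": 522418}')

metadata = json.loads(raw)
print(metadata["cocoid"])         # 522418
print(len(metadata["captions"]))  # 2 in this abbreviated example
```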

## Reading the Dataset

### Using the Hugging Face `datasets` library (recommended)

```python
from datasets import load_dataset

# Load the training split in streaming mode (no full download required)
dataset = load_dataset("undefined443/coco-karpathy-wds", split="train", streaming=True)

# Load the validation split
val_dataset = load_dataset("undefined443/coco-karpathy-wds", split="val", streaming=True)

# Iterate through samples; WebDataset fields surface as columns named
# after their file extensions ("jpg" and "json")
for sample in dataset.take(5):
    image = sample["jpg"]      # PIL.Image
    metadata = sample["json"]  # dict with 'captions' and 'cocoid'
    print(f"Image size: {image.size}")
    print(f"COCO ID: {metadata['cocoid']}")
    print(f"Captions: {metadata['captions']}")
```

### Using the WebDataset library directly

```python
import webdataset as wds

# Brace notation expands to all 114 training shards on the Hub
url = (
    "https://huggingface.co/datasets/undefined443/coco-karpathy-wds"
    "/resolve/main/train/train-{00000..00113}.tar"
)

dataset = (
    wds.WebDataset(url)
    # "pil" decodes .jpg members to PIL images; .json members are
    # parsed to dicts automatically, so no extra json.loads is needed
    .decode("pil", handler=wds.ignore_and_continue)
    .to_tuple("jpg", "json")
)

# to_tuple yields (image, metadata) pairs
for image, metadata in dataset:
    captions = metadata["captions"]
    cocoid = metadata["cocoid"]
```
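The `{00000..00113}` range in the URL is brace notation that WebDataset expands client-side into one URL per shard. A minimal stdlib sketch of that expansion (`expand_shards` is an illustrative helper written here, not part of webdataset's public API):

```python
import re

def expand_shards(pattern: str) -> list[str]:
    """Expand a single {AAAAA..BBBBB} brace range into concrete URLs."""
    m = re.search(r"\{(\d+)\.\.(\d+)\}", pattern)
    if m is None:
        return [pattern]
    lo, hi = m.group(1), m.group(2)
    width = len(lo)  # preserve the zero-padding width of the pattern
    return [
        pattern[:m.start()] + f"{i:0{width}d}" + pattern[m.end():]
        for i in range(int(lo), int(hi) + 1)
    ]

urls = expand_shards(
    "https://huggingface.co/datasets/undefined443/coco-karpathy-wds"
    "/resolve/main/train/train-{00000..00113}.tar"
)
print(len(urls))  # 114, one URL per training shard
```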

### Using the standard tarfile library (with huggingface_hub)

```python
import tarfile
import json
from PIL import Image
from huggingface_hub import hf_hub_download

# Download a single tar shard from the Hub
tar_path = hf_hub_download(
    repo_id="undefined443/coco-karpathy-wds",
    filename="train/train-00000.tar",
    repo_type="dataset",
)

with tarfile.open(tar_path) as tar:
    for member in tar.getmembers():
        if member.name.endswith(".jpg"):
            # Extract the image
            img_file = tar.extractfile(member)
            image = Image.open(img_file).convert("RGB")

            # Extract the corresponding JSON metadata
            json_name = member.name.replace(".jpg", ".json")
            metadata = json.load(tar.extractfile(json_name))

            captions = metadata["captions"]
            cocoid = metadata["cocoid"]
            print(f"COCO ID {cocoid}: {image.size}, {len(captions)} captions")
```
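The same `.jpg` → `.json` pairing logic can be exercised without downloading anything by building a tiny in-memory shard with placeholder bytes (stdlib only; file names and contents are illustrative):

```python
import io
import json
import tarfile

def add_member(tar: tarfile.TarFile, name: str, data: bytes) -> None:
    """Append one file to an open tar archive from in-memory bytes."""
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Build a one-sample shard in memory
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    add_member(tar, "000000000.jpg", b"\xff\xd8\xff")  # placeholder JPEG bytes
    add_member(tar, "000000000.json",
               json.dumps({"captions": ["a cake"], "cocoid": 1}).encode())

# Read it back with the same jpg -> json pairing as above
buf.seek(0)
pairs = {}
with tarfile.open(fileobj=buf) as tar:
    for member in tar.getmembers():
        if member.name.endswith(".jpg"):
            meta = json.load(tar.extractfile(member.name.replace(".jpg", ".json")))
            pairs[member.name] = meta

print(pairs["000000000.jpg"]["cocoid"])  # 1
```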

## Dataset Splits

| Split | Samples | Source | Tar files |
|-------|--------:|--------|----------:|
| train | 113,287 | train2014 (82,783) + val2014 "restval" (30,504) | 114 |
| val | 5,000 | val2014 | 5 |
| test | 5,000 | val2014 | 5 |
| **Total** | **123,287** | - | **124** |

### Karpathy Split Details

The Karpathy split (Karpathy & Li, 2015) is the standard benchmark split for image captioning:

- A fixed train/val/test partition, so test images never leak into training
- Used by the majority of published image-captioning models, including most SOTA systems
- Enables direct comparison with published benchmark results
- Each image has exactly 5 human-written captions from different annotators

## Technical Details

### Image Properties

- **Format:** original JPEG (raw bytes, no preprocessing or resizing)
- **Size range:** varies (typically 200×150 to 640×480 pixels)
- **Color space:** RGB
- **Compression:** JPEG quality as in the original COCO-2014 release

### Caption Properties

- **Language:** English
- **Count per image:** 5 reference captions
- **Length range:** typically 8-30 words
- **Style:** natural-language descriptions from crowdsourced annotators

### File Structure

- Each tar file contains exactly 1,000 samples (except the last shard of a split)
- Keys are globally unique 9-digit indices (`000000000`, `000000001`, etc.)
- Shard numbering is consistent across train/val/test splits
- File pairs: `{key}.jpg` (image) and `{key}.json` (metadata)
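Because shards hold 1,000 sequentially keyed samples, a training sample's shard can in principle be located by integer division. This is a sketch under the assumption that train keys start at 0 and are contiguous, which the layout above implies but does not state outright:

```python
SHARD_SIZE = 1_000  # samples per shard, as stated above

def locate_train_sample(key: int) -> tuple[str, str]:
    """Map a training-sample key to its (shard path, member name).

    Assumes contiguous keys starting at 0 within the train split.
    """
    shard = f"train/train-{key // SHARD_SIZE:05d}.tar"
    member = f"{key:09d}.jpg"
    return shard, member

print(locate_train_sample(113_286))  # ('train/train-00113.tar', '000113286.jpg')
```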

## Usage Notes

- All images are in their original JPEG form, without any preprocessing
- Each image has exactly 5 captions from different annotators
- Keys are 9-digit zero-padded indices for reproducibility
- Compatible with the `webdataset` library for efficient distributed training
- Loadable with the `datasets` library via `load_dataset()` for streaming
- Works in both streaming mode (no disk space required) and with a full local download

## References

```bibtex
@inproceedings{karpathy2015deep,
  title={Deep Visual-Semantic Alignments for Generating Image Descriptions},
  author={Karpathy, Andrej and Li, Fei-Fei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015},
  pages={3128--3137}
}

@inproceedings{lin2014microsoft,
  title={Microsoft COCO: Common Objects in Context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and others},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2014},
  pages={740--755}
}
```


## Citation

If you use this dataset, please cite both the original COCO dataset and the Karpathy split paper:

- Karpathy & Li (2015) - Karpathy splits
- Lin et al. (2014) - MS COCO dataset