---
license: cc-by-nc-4.0
language: en
tags:
  - computer-vision
  - instance-segmentation
  - dataset
  - benchmark
  - noisy-labels
  - sim2real
  - viper
  - coco
---

# VIPER-N — Noisy-label benchmark for instance segmentation (COCO-format annotations)

VIPER-N provides noisy COCO-format instance segmentation annotations for the VIPER dataset, introduced in *Noisy Annotations in Semantic Segmentation* (Kimhi et al., 2025; see the Citation section below).

This repo is annotations-only (no images). Pair it with `kimhi/viper` (VIPER images + clean annotations).

Collection (all related datasets):

## What’s included

- COCO instances JSON (same schema as COCO 2017):
  - `benchmark/annotations/instances_train2017.json`
  - `benchmark/annotations/instances_val2017.json`

## Intended use

VIPER-N is meant for robust instance segmentation under label noise:

- train/evaluate with the noisy annotations,
- compare clean vs. noisy labels, or
- evaluate noise-robust learning methods.
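For the clean-vs.-noisy comparison, per-instance mask IoU is a natural starting point. A minimal, dependency-light sketch (the toy masks below are made up; in practice the masks would be decoded from the two annotation files):

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two binary masks of identical shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 1.0

# Toy example: a clean 4x4 square vs. a "noisy" square shifted by one pixel.
clean = np.zeros((10, 10), dtype=bool); clean[2:6, 2:6] = True
noisy = np.zeros((10, 10), dtype=bool); noisy[3:7, 3:7] = True
print(round(mask_iou(clean, noisy), 3))  # 9 / 23 ≈ 0.391
```

Averaging this over matched instances gives a quick measure of how far the noisy labels drift from the clean ones.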

## How to use (apply VIPER-N on top of VIPER)

You need the VIPER images and (optionally) the clean labels from `kimhi/viper`.

### Option A — keep a COCO-like folder layout

Assume you have:

- VIPER images at `.../viper/images/...`
- VIPER clean labels at `.../viper/coco/annotations/instances_{train,val}2017.json`

To evaluate or train with VIPER-N, simply point your dataloader at the JSONs in this repo:

- `.../viper-n/benchmark/annotations/instances_train2017.json`
- `.../viper-n/benchmark/annotations/instances_val2017.json`
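Before swapping annotation files under a dataloader, it can be worth checking that the noisy JSON describes the same image set as the clean one. A small sketch with stdlib `json` only (the inline dicts and file name are illustrative, not taken from the dataset):

```python
import json  # in practice: clean = json.load(open(path)), likewise for noisy

def compatible(clean_ann: dict, noisy_ann: dict) -> bool:
    """True if two COCO instance files cover the same (id, file_name) image set,
    so one annotation file can be swapped for the other under the same images."""
    key = lambda ann: {(im["id"], im["file_name"]) for im in ann["images"]}
    return key(clean_ann) == key(noisy_ann)

# Toy example with inline dicts; the file name is hypothetical.
clean = {"images": [{"id": 1, "file_name": "001_00001.png"}]}
noisy = {"images": [{"id": 1, "file_name": "001_00001.png"}]}
print(compatible(clean, noisy))  # True
```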

### Option B — overwrite the annotation files (quick & dirty)

Replace the clean VIPER annotation files with the VIPER-N ones, keeping the filenames:

- overwrite `instances_train2017.json`
- overwrite `instances_val2017.json`
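A sketch of the swap, with a backup of the clean labels first. The script below demos in a scratch directory with dummy files; in practice, point `CLEAN` and `NOISY` at your real `.../viper/coco/annotations` and `.../viper-n/benchmark/annotations` directories:

```shell
#!/bin/sh
set -e

# Demo layout in a temp dir (real paths are up to your local setup).
ROOT=$(mktemp -d)
CLEAN="$ROOT/viper/coco/annotations"
NOISY="$ROOT/viper-n/benchmark/annotations"
mkdir -p "$CLEAN" "$NOISY"
echo '{"source":"clean"}'   > "$CLEAN/instances_val2017.json"
echo '{"source":"viper-n"}' > "$NOISY/instances_val2017.json"

for split in val; do   # add train once both files are present
  f="instances_${split}2017.json"
  cp "$CLEAN/$f" "$CLEAN/$f.clean.bak"   # back up the clean labels first
  cp "$NOISY/$f" "$CLEAN/$f"             # overwrite with the noisy ones
done

cat "$CLEAN/instances_val2017.json"
```

Keeping the `.clean.bak` copies makes it trivial to restore Option A's clean setup later.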

## Loading code snippets

### 1) Download with `huggingface_hub`

```python
from huggingface_hub import snapshot_download

viper_root = snapshot_download("kimhi/viper", repo_type="dataset")
viper_n_root = snapshot_download("kimhi/viper-n", repo_type="dataset")

images_root = f"{viper_root}/images"  # contains train/val images
ann_train = f"{viper_n_root}/benchmark/annotations/instances_train2017.json"
ann_val   = f"{viper_n_root}/benchmark/annotations/instances_val2017.json"

print(images_root)
print(ann_train)
```

### 2) Read COCO annotations with `pycocotools`

```python
from pycocotools.coco import COCO

coco = COCO(ann_val)  # path from the snippet above
img_ids = coco.getImgIds()[:5]
imgs = coco.loadImgs(img_ids)
print(imgs[0])

ann_ids = coco.getAnnIds(imgIds=img_ids[0])
anns = coco.loadAnns(ann_ids)
print(len(anns), anns[0].keys())
```

## Applying the same noise recipe to other datasets

See the paper repo for scripts and recipes to generate/apply noisy labels to other COCO-format instance segmentation datasets:

(High-level idea: convert dataset → COCO instances JSON → apply noise model → export new instances_*.json.)
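As an illustration of the "apply noise model" step only (this is *not* the paper's recipe), here is a toy noise model that jitters polygon vertices in a COCO instances dict and returns a new dict ready for export; it assumes polygon-style `segmentation` fields, not RLE:

```python
import copy
import random

def jitter_polygons(coco: dict, sigma: float = 2.0, seed: int = 0) -> dict:
    """Toy noise model: add Gaussian jitter to every polygon vertex of every
    annotation, returning a new COCO-format dict (input left untouched)."""
    rng = random.Random(seed)
    noisy = copy.deepcopy(coco)
    for ann in noisy["annotations"]:
        for poly in ann["segmentation"]:          # flat [x1, y1, x2, y2, ...]
            for i in range(len(poly)):
                poly[i] = max(0.0, poly[i] + rng.gauss(0.0, sigma))
    return noisy

# Minimal synthetic instances dict (illustrative, not from VIPER).
coco = {"images": [], "annotations": [
    {"id": 1, "image_id": 1, "category_id": 1,
     "segmentation": [[10.0, 10.0, 20.0, 10.0, 20.0, 20.0]]}]}
noisy = jitter_polygons(coco)
print(noisy["annotations"][0]["segmentation"][0])
```

The result can then be dumped with `json.dump` as a new `instances_*.json`, mirroring the convert → apply noise → export pipeline described above.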

## Dataset viewer

Hugging Face’s built-in dataset viewer does not currently render COCO instance-segmentation JSONs directly. Use the snippets above (or your training pipeline) to visualize masks.
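For visualization, `pycocotools`' `COCO.annToMask` is the usual route. As a dependency-light sketch of what it does for COCO's *uncompressed* RLE format (`{"size": [h, w], "counts": [...]}`, where counts alternate 0-runs and 1-runs in column-major order), the toy example here is made up:

```python
import numpy as np

def decode_uncompressed_rle(rle: dict) -> np.ndarray:
    """Decode a COCO uncompressed RLE dict into an (h, w) binary mask."""
    h, w = rle["size"]
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0
    for count in rle["counts"]:
        flat[pos:pos + count] = val
        pos += count
        val = 1 - val                 # runs alternate between 0s and 1s
    return flat.reshape((w, h)).T     # column-major order -> (h, w)

rle = {"size": [2, 2], "counts": [1, 2, 1]}  # 0-run, 1-run, 0-run
print(decode_uncompressed_rle(rle))
```

The decoded mask can be overlaid on the matching VIPER image with any plotting library.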

## Citation

```bibtex
@misc{kimhi2025noisyannotationssemanticsegmentation,
  title={Noisy Annotations in Semantic Segmentation},
  author={Moshe Kimhi and Omer Kerem and Eden Grad and Ehud Rivlin and Chaim Baskin},
  year={2025},
  eprint={2406.10891},
}
```

## License

CC BY-NC 4.0 — Attribution–NonCommercial 4.0 International.