
VIPER-N — Noisy Annotations Benchmark (Annotations Only)

This repository provides the noisy-annotation benchmark files for VIPER used in:

Noisy Annotations in Semantic Segmentation (Kimhi et al., 2025)

Why noisy labels (and sim2real) matter

Segmentation is annotation-heavy: every object requires a precise boundary. In practice, labels can be noisy due to ambiguity, annotator fatigue, imperfect tooling, and semi-automated workflows.

For sim2real datasets, additional systematic artifacts can appear (rendering/geometry quirks, domain gap), making robustness analysis especially important.

VIPER-N is designed to stress-test robustness across difficulty regimes.

What’s in this repo

  • Annotations only (no images)
  • A mini annotation package (seed=1) for quick evaluation
  • Qualitative HTML galleries with examples across difficulty splits

Files

  • benchmark/annotations/instances_train2017.json
  • benchmark/annotations/instances_val2017.json
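These files follow the COCO instances format (top-level `images`, `annotations`, and `categories` lists). As a minimal sketch, assuming the paths above, you can summarize one of them like this; the toy dict at the end is purely illustrative so the snippet runs without downloads:

```python
import json
from collections import Counter

def summarize(coco):
    """Return basic counts from a COCO-format annotation dict."""
    per_cat = Counter(a["category_id"] for a in coco.get("annotations", []))
    return {
        "images": len(coco.get("images", [])),
        "annotations": len(coco.get("annotations", [])),
        "categories": len(coco.get("categories", [])),
        "per_category": dict(per_cat),
    }

# To inspect a benchmark file from this repo:
# with open("benchmark/annotations/instances_val2017.json") as f:
#     print(summarize(json.load(f)))

# Illustrative toy COCO-style dict (not real VIPER-N data):
toy = {
    "images": [{"id": 1}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 3}],
    "categories": [{"id": 3, "name": "car"}],
}
print(summarize(toy))
```

Comparing these counts between the clean and noisy JSONs is a quick sanity check that the swap described below actually took effect.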

Galleries (examples)

Open any of:

  • reports/gallery/clean_val/index.html
  • reports/gallery/easy_val/index.html
  • reports/gallery/medium_val/index.html
  • reports/gallery/hard_val/index.html

How to use

  1. Download the VIPER dataset (images + clean annotations) from kimhi/viper.
  2. Download this repo.
  3. Replace the annotation JSON(s) with the ones from benchmark/annotations/.

Conceptually:

  • From kimhi/viper: coco/annotations/instances_val2017.json
  • Replace with: benchmark/annotations/instances_val2017.json
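The replacement step can be sketched as a small helper that backs up the clean file before overwriting it. The directory layout (`viper_root/coco/annotations/`, `benchmark_root/benchmark/annotations/`) is assumed from the paths above; adjust to your local checkout. The demo at the end uses a temporary directory with stand-in file contents so it runs anywhere:

```python
import shutil
import tempfile
from pathlib import Path

def install_noisy_annotations(viper_root, benchmark_root,
                              name="instances_val2017.json"):
    """Back up the clean annotation JSON, then copy the noisy one over it.

    Paths are assumptions based on this repo's layout, not a fixed API.
    """
    clean = Path(viper_root) / "coco" / "annotations" / name
    noisy = Path(benchmark_root) / "benchmark" / "annotations" / name
    backup = clean.with_name(name + ".clean.bak")
    if clean.exists() and not backup.exists():
        shutil.copy2(clean, backup)  # keep the original labels recoverable
    shutil.copy2(noisy, clean)       # overwrite with the noisy annotations
    return backup

# Self-contained demo in a temporary directory (stand-in contents):
with tempfile.TemporaryDirectory() as tmp:
    viper, bench = Path(tmp, "viper"), Path(tmp, "viper-n")
    (viper / "coco" / "annotations").mkdir(parents=True)
    (bench / "benchmark" / "annotations").mkdir(parents=True)
    (viper / "coco" / "annotations" / "instances_val2017.json").write_text("clean")
    (bench / "benchmark" / "annotations" / "instances_val2017.json").write_text("noisy")
    install_noisy_annotations(viper, bench)
    swapped = (viper / "coco" / "annotations" / "instances_val2017.json").read_text()
    print(swapped)  # the clean file now holds the noisy annotations
```

Keeping a `.clean.bak` copy lets you switch back to the clean labels without re-downloading kimhi/viper.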

Citation

If you use this benchmark, please cite:

@misc{kimhi2025noisyannotationssemanticsegmentation,
  title={Noisy Annotations in Semantic Segmentation},
  author={Moshe Kimhi and Omer Kerem and Eden Grad and Ehud Rivlin and Chaim Baskin},
  year={2025},
  eprint={2406.10891},
  archivePrefix={arXiv},
}

License

Released under CC BY-NC 4.0 (Attribution-NonCommercial 4.0 International).
