Benchmarking Label Noise in Instance Segmentation: Spatial Noise Matters
Paper: arXiv:2406.10891
Dataset viewer columns:

| column | type |
|---|---|
| image | image (width 1,920 px) |
| label | class label (4 classes) |

Label classes:

| id | class |
|---|---|
| 0 | clean_val |
| 1 | easy_val |
| 2 | hard_val |
| 3 | medium_val |
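The annotation files in this repository follow the COCO instances format. A minimal sketch of what such a file contains and how a loader might group instances per image (field names follow the COCO standard; the record values below are invented for illustration):

```python
import json
from collections import defaultdict

# Hypothetical miniature of a COCO-style instances file such as
# benchmark/annotations/instances_val2017.json (values are made up).
coco = {
    "images": [{"id": 1, "file_name": "0001.png", "width": 1920, "height": 1080}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 3,
         "segmentation": [[100, 100, 300, 100, 300, 250, 100, 250]],
         "area": 30000.0, "iscrowd": 0},
        {"id": 11, "image_id": 1, "category_id": 7,
         "segmentation": [[500, 400, 700, 400, 700, 600, 500, 600]],
         "area": 40000.0, "iscrowd": 0},
    ],
    "categories": [{"id": 3, "name": "car"}, {"id": 7, "name": "person"}],
}

def instances_per_image(coco_dict):
    """Group annotation ids by image id, as a dataset loader would."""
    by_image = defaultdict(list)
    for ann in coco_dict["annotations"]:
        by_image[ann["image_id"]].append(ann["id"])
    return dict(by_image)

print(instances_per_image(coco))  # {1: [10, 11]}
```

For the real files, replace the in-memory dict with `json.load(open(path))`.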
This repository provides the benchmark's noisy annotation files for VIPER, as used in:
Noisy Annotations in Semantic Segmentation (Kimhi et al., 2025)
Segmentation is annotation-heavy: every object requires a precise boundary. In practice, labels can be noisy due to ambiguity, annotator fatigue, imperfect tooling, and semi-automated workflows.
For sim2real datasets, additional systematic artifacts can appear (rendering/geometry quirks, domain gap), making robustness analysis especially important.
VIPER-N is designed to stress-test robustness across difficulty regimes.
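One natural way to measure the severity of spatial noise is the mask IoU between a clean annotation and its perturbed counterpart. A hedged NumPy sketch (axis-aligned boxes stand in for rasterized COCO polygons; the clean/noisy instance pairing and the metric choice are assumptions, not this benchmark's official protocol):

```python
import numpy as np

def box_mask(h, w, x0, y0, x1, y1):
    """Binary mask for an axis-aligned box (stand-in for a rasterized polygon)."""
    m = np.zeros((h, w), dtype=bool)
    m[y0:y1, x0:x1] = True
    return m

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Clean vs. spatially perturbed ("noisy") version of the same instance.
clean = box_mask(100, 100, 20, 20, 60, 60)  # 40x40 box
noisy = box_mask(100, 100, 30, 30, 70, 70)  # same box shifted by 10 px
print(round(mask_iou(clean, noisy), 3))     # 0.391
```

Averaging this over all matched instances in a split gives a rough per-split noise level, which should track the easy/medium/hard ordering.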
Annotation files:
- `benchmark/annotations/instances_train2017.json`
- `benchmark/annotations/instances_val2017.json`

To browse the qualitative galleries, open any of:
- `reports/gallery/clean_val/index.html`
- `reports/gallery/easy_val/index.html`
- `reports/gallery/medium_val/index.html`
- `reports/gallery/hard_val/index.html`

Images come from the companion `kimhi/viper` dataset; the annotations here live under `benchmark/annotations/`. Conceptually, `kimhi/viper`'s `coco/annotations/instances_val2017.json` corresponds to `benchmark/annotations/instances_val2017.json` in this repository.

If you use this benchmark, please cite:
@misc{kimhi2025noisyannotationssemanticsegmentation,
  title={Noisy Annotations in Semantic Segmentation},
  author={Moshe Kimhi and Omer Kerem and Eden Grad and Ehud Rivlin and Chaim Baskin},
  year={2025},
  eprint={2406.10891},
  archivePrefix={arXiv},
}
Released under CC BY-NC 4.0 (Attribution–NonCommercial 4.0 International).