PROVE-Bench: A Two-Tier Real-World Benchmark for Object Removal Evaluation

Paper | Project Page | Code

Overview

PROVE-Bench is the benchmark component of the PROVE (Perceptual RemOVal cohErence) evaluation framework. It provides two complementary real-world video subsets specifically designed for evaluating object removal methods:

  • PROVE-M: 80 motion-augmented paired videos with ground truth
  • PROVE-H: 100 challenging real-world videos without ground truth

Together, they address the fundamental realism–evaluability dilemma of existing benchmarks: real-world datasets lack paired references, while paired datasets are synthetic.

Dataset Description

PROVE-M: Motion-Augmented Real-World Paired Benchmark

Attribute          Detail
Videos             80
Ground Truth       Yes (paired target-free video)
Resolution         1080p
Frames per video   81
Format             Landscape / Portrait
Camera Motion      Dynamic (Ken Burns-style augmentation)

Construction pipeline:

  1. Real-world paired capture: For each scene, two consecutive videos are recorded with a tripod-mounted stationary camera, one with the target object and one without.
  2. Mask annotation: Object masks are obtained using SAM3 and manually refined frame by frame.
  3. Pairwise quality control: Three-stage filtering (BG-PSNR ranking, mask-difference filtering, human selection) yields 80 high-quality paired cases.
  4. Motion augmentation: Ken Burns-style geometric transformations (cropping, scaling, translation) are applied synchronously to the input–mask–GT triplet, simulating handheld shake, push/pull zoom, and target-following motion (see the sketch below).

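To make step 4 concrete, here is a minimal sketch of a synchronized Ken Burns-style transform. This is not the authors' pipeline code: it only illustrates how one time-dependent crop can be applied identically to the input frame, its mask, and the GT frame so the triplet stays pixel-aligned (OpenCV and NumPy assumed; input_frames, mask_frames, and gt_frames are hypothetical pre-loaded frame lists).

import cv2
import numpy as np

def ken_burns_crop(frame, t, zoom=1.1, shift=(20, 0),
                   out_size=(1920, 1080), interp=cv2.INTER_LINEAR):
    # Crop window shrinks (zoom) and translates (shift) as t goes from 0 to 1,
    # then is resized back to the target resolution.
    h, w = frame.shape[:2]
    scale = 1.0 / (1.0 + (zoom - 1.0) * t)
    cw, ch = int(w * scale), int(h * scale)
    x0 = int(np.clip((w - cw) // 2 + shift[0] * t, 0, w - cw))
    y0 = int(np.clip((h - ch) // 2 + shift[1] * t, 0, h - ch))
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, out_size, interpolation=interp)

# Identical parameters for all three streams keep the triplet aligned;
# masks use nearest-neighbor interpolation so they stay binary.
n = 81  # frames per video in PROVE-M
for i, (inp, msk, gt) in enumerate(zip(input_frames, mask_frames, gt_frames)):
    t = i / (n - 1)
    inp_aug = ken_burns_crop(inp, t)
    msk_aug = ken_burns_crop(msk, t, interp=cv2.INTER_NEAREST)
    gt_aug  = ken_burns_crop(gt, t)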
Statistics:

  • Object count: 40 single / 40 multi
  • Illumination: 40 bright / 40 low-light
  • Target type: 60 person / 20 object
  • Target motion: 67 dynamic / 13 static
  • Small target: 6
  • Reflection-related: 52

PROVE-H: Hard Real-World Benchmark (without GT)

Attribute      Detail
Videos         100
Ground Truth   No
Resolution     1080p
Format         Landscape / Portrait
Masks          SAM3-generated (no manual refinement)

Scene categories:

  • General: 35
  • Dynamic Background: 15
  • Textured Background: 20
  • Complex Reflections: 14
  • Crowd: 7
  • Fast Motion: 9

Challenging scenarios include: flowing water, flames, rain/snow, grasslands, deserts, multiple puddle reflections, dense crowds, and fast-motion scenes.

Dataset Structure

PROVE-Bench/
├── PROVE-M/
│   ├── inputs/          # Input videos with target objects
│   ├── masks/           # Per-frame binary masks (white = target)
│   └── gt/              # Target-free ground-truth videos
└── PROVE-H/
    ├── inputs/          # Input videos with target objects
    └── masks/           # SAM3-generated per-frame masks

Usage

With PROVE Evaluation Code

# Clone the evaluation code
git clone https://github.com/xiaomi-research/prove.git
cd prove

# Configure dataset paths in utils/dataset.py
# Then run evaluation
python run_prove_metrics.py \
    --dataset PROVE-M \
    --result_dir /PATH/TO/YOUR_RESULTS \
    --metrics rc_s rc_t \
    --out_csv results.csv
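
If you have not downloaded the benchmark yet, the standard Hugging Face Hub client works; a minimal sketch using snapshot_download (the repo id follows this card):

from huggingface_hub import snapshot_download

# Fetches the full dataset repo into the local HF cache and returns its path.
local_dir = snapshot_download(repo_id="HigherHu/PROVE-Bench", repo_type="dataset")
print(local_dir)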

Data Format

  • Input videos: Standard video formats (mp4)
  • Masks: Per-frame binary images where white (255) indicates the region to be removed
  • Ground truth (PROVE-M only): Target-free videos aligned frame-by-frame with inputs

Important: Your generated results must share the same filenames as the original inputs (extensions may differ).
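
A quick sanity check of both conventions before running the metrics; a sketch only (OpenCV for the mask read; every path below is a placeholder to replace with your own):

import cv2
from pathlib import Path

# Result filenames must match the input filenames (stems); extensions may differ.
inputs = {p.stem for p in Path("PROVE-M/inputs").iterdir()}
results = {p.stem for p in Path("/PATH/TO/YOUR_RESULTS").iterdir()}
missing = inputs - results
if missing:
    print("results missing for:", sorted(missing))

# Masks are single-channel binary images; 255 marks the region to remove.
mask = cv2.imread("/PATH/TO/A_MASK_FRAME.png", cv2.IMREAD_GRAYSCALE)
remove_region = mask == 255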

Comparison with Existing Benchmarks

Dataset         Real  GT  Shadows  Reflections  Multi-Effect  Crowds  Textured  Fast Motion  #Videos
DAVIS           ✓     ✗   ✓        ✓            ✗             ✗       ✓         ✓            90
Movies          ✗     ✓   ✓        ✓            ✗             ✗       ✗         ✓            5
Kubric          ✗     ✓   ✓        ✗            ✗             ✗       ✗         ✗            5
GenProp         ✓     ✗   ✓        ✓            ✗             ✗       ✗         ✗            15
ROSE-Bench      ✗     ✓   ✓        ✓            ✗             ✗       ✗         ✗            60
PROVE-M (Ours)  ✓     ✓   ✓        ✓            ✓             ✗       ✗         ✓            80
PROVE-H (Ours)  ✓     ✗   ✓        ✓            ✓             ✓       ✓         ✓            100

Note

Due to compliance requirements, the open-source data differs slightly from the data used in the paper. The evaluation results based on this version may exhibit minor numerical differences from the paper, but the overall trends remain consistent.

Citation

@article{li2026prove,
  title={PROVE: A Perceptual RemOVal cohErence Benchmark for Visual Media},
  author={Li, Fuhao and You, Shaofeng and Hu, Jiagao and Liu, Yu and Chen, Yuxuan and Wang, Zepeng and Wang, Fei and Zhou, Daiguo and Luan, Jian},
  journal={arXiv preprint arXiv:2605.14534},
  year={2026}
}

License

This dataset is released under the Apache 2.0 License.

Contact

For questions about the dataset, please open an issue on the GitHub repository.
