---
license: mit
viewer: false
---
<p align="center">
<h1 align="center">SceneDiff: A Benchmark and Method for Multiview Object Change Detection</h1>
<p align="center">
<a href='http://yuqunw.github.io/SceneDiff' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'></a>
<a href='https://arxiv.org/abs/2512.16908'><img src='https://img.shields.io/badge/arXiv-2512.16908-b31b1b.svg' alt='Arxiv'></a>
<a href='https://github.com/yuqunw/scene_diff' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/GitHub-Code-black?style=flat&logo=github&logoColor=white' alt='Code'></a>
<a href='https://github.com/yuqunw/scenediff_annotator' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/GitHub-Data%20Annotator-black?style=flat&logo=github&logoColor=white' alt='Data Annotator'></a>
</p>
</p>
This repository contains the data for the paper [SceneDiff: A Benchmark and Method for Multiview Object Change Detection](http://yuqunw.github.io/SceneDiff). We study the problem of identifying objects that have changed between two captures of the same scene taken at different times, and introduce the first object-level multiview change detection benchmark along with a new training-free method.
### Overview
The SceneDiff Benchmark contains **350 video sequence pairs** and **1,009 annotated objects** across two subsets:
- **Varied subset (SD-V)**: 200 sequence pairs collected in a wide variety of daily indoor and outdoor scenes
- **Kitchen subset (SD-K)**: 150 sequence pairs from the [HD-Epic dataset](https://hd-epic.github.io/) with changes that naturally occur during cooking activities
For each video pair, we record the attributes of every changed object, including its name and deformability, and annotate its full segmentation mask in all frames where it is visible. Each object is labeled with a change status: *Added*, *Removed*, or *Moved*.

### Dataset Download
```bash
wget https://huggingface.co/datasets/yuqun/SceneDiff/resolve/main/scenediff_bechmark.zip
unzip scenediff_bechmark.zip
```
### Dataset Structure
```
scenediff_benchmark/
├── data/                         # 350 sequence pairs
│   ├── sequence_pair_1/
│   │   ├── original_video1.mp4   # Raw video before change
│   │   ├── original_video2.mp4   # Raw video after change
│   │   ├── video1.mp4            # Video with annotation mask (before)
│   │   ├── video2.mp4            # Video with annotation mask (after)
│   │   ├── segments.pkl          # Dense segmentation masks for evaluation
│   │   └── metadata.json         # Sequence metadata
│   ├── sequence_pair_2/
│   │   └── ...
│   └── ...
├── splits/                       # Val/Test splits
│   ├── val_split.json
│   └── test_split.json
└── vis/                          # Visualization tools
    ├── visualizer.py             # Flask-based web viewer
    ├── requirements.txt
    └── templates/
```
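The layout above can be traversed with plain `os` calls. The sketch below builds a small mock copy of the tree for illustration; in practice, point `root` at your extracted `scenediff_benchmark/` directory instead.

```python
import os
import tempfile

# Build a mock copy of the dataset layout (illustration only --
# replace `root` with the path to the extracted scenediff_benchmark/).
root = tempfile.mkdtemp()
for name in ['sequence_pair_1', 'sequence_pair_2']:
    pair_dir = os.path.join(root, 'data', name)
    os.makedirs(pair_dir)
    for fname in ['original_video1.mp4', 'original_video2.mp4',
                  'video1.mp4', 'video2.mp4',
                  'segments.pkl', 'metadata.json']:
        open(os.path.join(pair_dir, fname), 'w').close()

# Enumerate sequence pairs and the six per-pair files
data_dir = os.path.join(root, 'data')
pairs = sorted(os.listdir(data_dir))
for pair in pairs:
    files = sorted(os.listdir(os.path.join(data_dir, pair)))
    print(pair, len(files))
```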
### `segments.pkl` Structure
```python
segments = {
    'scenetype': str,              # Type of scene change
    'video1_objects': {
        'object_id': {
            'frame_id': RLE_Mask   # Run-length encoded mask
        }
    },
    'video2_objects': {
        'object_id': {
            'frame_id': RLE_Mask   # Run-length encoded mask
        }
    },
    'objects': {
        'object_1': {
            'label': str,          # Object label/name
            'in_video1': bool,     # Present in video 1
            'in_video2': bool,     # Present in video 2
            'deformability': str   # 'rigid' or 'deformable'
        }
    }
}
```
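A sketch of reading this structure and recovering each object's change status from its presence flags. The mapping used here (present in both videos → *Moved*, only in video 1 → *Removed*, only in video 2 → *Added*) is an assumption inferred from the schema, not stated by the dataset; the `segments` dict below is a toy stand-in for one loaded from `segments.pkl`.

```python
# Toy `segments` dict following the schema above; in practice load it with
#   segments = pickle.load(open('.../segments.pkl', 'rb'))
segments = {
    'scenetype': 'kitchen',
    'video1_objects': {'object_1': {}, 'object_2': {}},
    'video2_objects': {'object_2': {}},
    'objects': {
        'object_1': {'label': 'mug', 'in_video1': True,
                     'in_video2': False, 'deformability': 'rigid'},
        'object_2': {'label': 'towel', 'in_video1': True,
                     'in_video2': True, 'deformability': 'deformable'},
    },
}

def change_status(obj):
    # Assumed mapping from presence flags to the three change labels
    if obj['in_video1'] and obj['in_video2']:
        return 'Moved'
    return 'Removed' if obj['in_video1'] else 'Added'

statuses = {oid: change_status(o) for oid, o in segments['objects'].items()}
print(statuses)  # {'object_1': 'Removed', 'object_2': 'Moved'}
```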
### Loading Masks
To convert an RLE mask back to a tensor:
```python
import torch
from pycocotools import mask as mask_utils

# rle_mask is one entry from segments.pkl,
# e.g. segments['video1_objects'][object_id][frame_id]
tensor_mask = torch.tensor(mask_utils.decode(rle_mask))
```
### Visualization
Install the dependencies and start the viewer:
```bash
cd vis && pip install -r requirements.txt
python visualizer.py
```
Then open `http://localhost:5002` in a browser to view the annotated videos.
### Evaluation
Please refer to the [code repo](https://github.com/yuqunw/scene_diff?tab=readme-ov-file#evaluation) for evaluation.