---
license: mit
viewer: false
---
<p align="center">
<h1 align="center">SceneDiff: A Benchmark and Method for Multiview Object Change Detection</h1>
<p align="center">
<a href='http://yuqunw.github.io/SceneDiff' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'></a>
<a href='https://arxiv.org/abs/2512.16908'><img src='https://img.shields.io/badge/arXiv-2512.16908-b31b1b.svg' alt='Arxiv'></a>
<a href='https://github.com/yuqunw/scene_diff' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/GitHub-Code-black?style=flat&logo=github&logoColor=white' alt='Code'></a>
<a href='https://github.com/yuqunw/scenediff_annotator' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/GitHub-Data%20Annotator-black?style=flat&logo=github&logoColor=white' alt='Data Annotator'></a>
</p>
</p>

This repository contains the data for the paper [SceneDiff: A Benchmark and Method for Multiview Object Change Detection](http://yuqunw.github.io/SceneDiff). We study the problem of identifying objects that have changed between two captures of the same scene taken at different times, and introduce both the first object-level multiview change detection benchmark and a new training-free method.
### Overview
The SceneDiff Benchmark contains **350 video sequence pairs** and **1,009 annotated objects** across two subsets:
- **Varied subset (SD-V)**: 200 sequence pairs collected in a wide variety of daily indoor and outdoor scenes
- **Kitchen subset (SD-K)**: 150 sequence pairs from the [HD-Epic dataset](https://hd-epic.github.io/) with changes that naturally occur during cooking activities

For each video pair, we record each changed object's attributes, including its name and deformability, and annotate its full segmentation mask in every frame where it is visible. Each object is labeled with a change status: *Added*, *Removed*, or *Moved*. Statistics for each subset are shown below:
![Dataset Statistics](media/dataset_stat.jpg)
### Dataset Download
```bash
wget https://huggingface.co/datasets/yuqun/SceneDiff/resolve/main/scenediff_bechmark.zip
unzip scenediff_bechmark.zip
```
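The archive can also be fetched with the Hugging Face Hub client (a minimal sketch; assumes `huggingface_hub` is installed):
```python
from huggingface_hub import hf_hub_download

# Download the benchmark archive into the local Hugging Face cache;
# the returned path still needs to be unzipped as above
path = hf_hub_download(
    repo_id='yuqun/SceneDiff',
    repo_type='dataset',
    filename='scenediff_bechmark.zip',
)
print(path)
```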
### Dataset Structure
```
scenediff_benchmark/
├── data/                        # 350 sequence pairs
│   ├── sequence_pair_1/
│   │   ├── original_video1.mp4  # Raw video before the change
│   │   ├── original_video2.mp4  # Raw video after the change
│   │   ├── video1.mp4           # Video with annotation masks (before)
│   │   ├── video2.mp4           # Video with annotation masks (after)
│   │   ├── segments.pkl         # Dense segmentation masks for evaluation
│   │   └── metadata.json        # Sequence metadata
│   ├── sequence_pair_2/
│   │   └── ...
│   └── ...
├── splits/                      # Val/Test splits
│   ├── val_split.json
│   └── test_split.json
└── vis/                         # Visualization tools
    ├── visualizer.py            # Flask-based web viewer
    ├── requirements.txt
    └── templates/
```
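With this layout, iterating over all sequence pairs is a simple directory walk. A sketch (assuming the archive was extracted to `scenediff_benchmark/` in the working directory):
```python
from pathlib import Path

root = Path('scenediff_benchmark')
for pair_dir in sorted(p for p in (root / 'data').iterdir() if p.is_dir()):
    # Each pair directory holds two raw videos, two annotated videos,
    # the dense masks, and the sequence metadata
    assert (pair_dir / 'segments.pkl').exists()
    assert (pair_dir / 'metadata.json').exists()
    print(pair_dir.name)
```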
### `segments.pkl` Structure
```python
segments = {
    'scenetype': str,             # Type of scene change
    'video1_objects': {
        'object_id': {
            'frame_id': RLE_Mask  # Run-length encoded mask
        }
    },
    'video2_objects': {
        'object_id': {
            'frame_id': RLE_Mask  # Run-length encoded mask
        }
    },
    'objects': {
        'object_1': {
            'label': str,         # Object label/name
            'in_video1': bool,    # Present in video 1
            'in_video2': bool,    # Present in video 2
            'deformability': str  # 'rigid' or 'deformable'
        }
    }
}
```
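The presence flags determine each object's change status directly: since only changed objects are annotated, an object present in both videos must have been *Moved*, while one present in a single video was *Added* or *Removed*. A minimal sketch (assuming the archive is extracted in the working directory; `sequence_pair_1` stands in for any sequence directory):
```python
import pickle

# Load the dense annotations for one sequence pair
with open('scenediff_benchmark/data/sequence_pair_1/segments.pkl', 'rb') as f:
    segments = pickle.load(f)

def change_status(obj):
    """Derive the change status from the per-video presence flags."""
    if obj['in_video1'] and obj['in_video2']:
        return 'Moved'
    return 'Removed' if obj['in_video1'] else 'Added'

for object_id, obj in segments['objects'].items():
    print(object_id, obj['label'], obj['deformability'], change_status(obj))
```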
### Loading Masks
To decode an RLE mask from the `segments` dictionary loaded above back into a binary tensor:
```python
import torch
from pycocotools import mask as mask_utils

# Pick one object's mask in one frame and decode it to a (H, W) binary tensor
rle_mask = next(iter(next(iter(segments['video1_objects'].values())).values()))
tensor_mask = torch.tensor(mask_utils.decode(rle_mask))
```
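Since `pycocotools` also accepts a list of RLEs, every annotated frame of one object can be decoded in a single call (continuing from the snippet above):
```python
# Decode all annotated frames of the first object in video 1;
# a list of RLEs decodes to a single (H, W, T) array
frames = next(iter(segments['video1_objects'].values()))
all_masks = torch.tensor(mask_utils.decode(list(frames.values())))
```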
### Visualization
To browse the annotated videos locally, install the viewer's dependencies and start it:
```bash
cd vis
pip install -r requirements.txt
python visualizer.py
```
Then open `http://localhost:5002` in a browser.
### Evaluation
Please refer to the [code repository](https://github.com/yuqunw/scene_diff?tab=readme-ov-file#evaluation) for evaluation instructions.