---
license: mit
---
<p align="center">

<h1 align="center">SceneDiff: A Benchmark and Method for Multiview Object Change Detection</h1>
<p align="center">
<a href='http://yuqunw.github.io/SceneDiff' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/Project-Page-blue?style=flat&logo=Google%20chrome&logoColor=blue' alt='Project Page'></a>
<a href='https://arxiv.org/abs/2409.18964'><img src='https://img.shields.io/badge/arXiv-2409.18964-b31b1b.svg' alt='Arxiv'></a>
<a href='https://github.com/yuqunw/scene_diff' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/GitHub-Code-black?style=flat&logo=github&logoColor=white' alt='Code'></a>
<a href='https://github.com/yuqunw/scenediff_annotator' style='padding-left: 0.5rem;'>
<img src='https://img.shields.io/badge/GitHub-Data%20Annotator-black?style=flat&logo=github&logoColor=white' alt='Data Annotator'></a>
</p>
</p>

This repository contains the data for the paper [SceneDiff: A Benchmark and Method for Multiview Object Change Detection](http://yuqunw.github.io/SceneDiff). We study the problem of identifying objects that have changed between two captures of the same scene taken at different times, and introduce the first object-level multiview change detection benchmark along with a new training-free method.

### Overview

The SceneDiff benchmark contains **350 video sequence pairs** and **1,009 annotated objects** across two subsets:

- **Varied subset (SD-V)**: 200 sequence pairs collected in a wide variety of everyday indoor and outdoor scenes
- **Kitchen subset (SD-K)**: 150 sequence pairs from the [HD-Epic dataset](https://hd-epic.github.io/) with changes that naturally occur during cooking activities

For each video pair, we record every changed object's attributes, including its name and deformability, and annotate its full segmentation mask in all frames where it is visible. Each object is labeled with a change status: *Added*, *Removed*, or *Moved*. Statistics for each subset:

![Dataset Statistics](media/dataset_stat.jpg)

### Dataset Download
```bash
wget https://huggingface.co/datasets/yuqun/SceneDiff/resolve/main/scenediff_bechmark.zip
unzip scenediff_bechmark.zip
```

### Dataset Structure

```
scenediff_benchmark/
├── data/                          # 350 sequence pairs
│   ├── sequence_pair_1/
│   │   ├── original_video1.mp4    # Raw video before change
│   │   ├── original_video2.mp4    # Raw video after change
│   │   ├── video1.mp4             # Video with annotation mask (before)
│   │   ├── video2.mp4             # Video with annotation mask (after)
│   │   ├── segments.pkl           # Dense segmentation masks for evaluation
│   │   └── metadata.json          # Sequence metadata
│   ├── sequence_pair_2/
│   │   └── ...
│   └── ...
├── splits/                        # Val/Test splits
│   ├── val_split.json
│   └── test_split.json
└── vis/                           # Visualization tools
    ├── visualizer.py              # Flask-based web viewer
    ├── requirements.txt
    └── templates/
```
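
With the archive extracted as above, the sequence-pair directories can be enumerated with a short script. This is a sketch, not part of the official tooling; the expected file names are taken from the tree above:

```python
from pathlib import Path

# Files every sequence-pair directory is expected to contain (per the tree above)
REQUIRED = {"original_video1.mp4", "original_video2.mp4",
            "video1.mp4", "video2.mp4", "segments.pkl", "metadata.json"}

def list_sequence_pairs(root):
    """Return sequence-pair directories under <root>/data holding all expected files."""
    data_dir = Path(root) / "data"
    pairs = []
    for d in sorted(data_dir.iterdir()):
        if d.is_dir() and REQUIRED <= {p.name for p in d.iterdir()}:
            pairs.append(d)
    return pairs
```

Skipping directories with missing files keeps a partially downloaded copy from silently corrupting an evaluation run.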

### `segments.pkl` Structure
```python
segments = {
    'scenetype': str,             # Type of scene change
    'video1_objects': {
        'object_id': {
            'frame_id': RLE_Mask  # Run-length encoded mask
        }
    },
    'video2_objects': {
        'object_id': {
            'frame_id': RLE_Mask  # Run-length encoded mask
        }
    },
    'objects': {
        'object_1': {
            'label': str,         # Object label/name
            'in_video1': bool,    # Present in video 1
            'in_video2': bool,    # Present in video 2
            'deformability': str  # 'rigid' or 'deformable'
        }
    }
}
```
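
The change status (*Added*, *Removed*, *Moved*) is not stored as an explicit field, but assuming the natural mapping from the two presence flags — present only in video 2 means *Added*, only in video 1 means *Removed*, present in both means *Moved* — it can be derived in a few lines:

```python
def change_status(obj):
    """Derive a change label from the presence flags in segments['objects'].

    Assumed mapping (not an official field): present in both videos -> 'Moved',
    only in video 1 -> 'Removed', only in video 2 -> 'Added'.
    """
    if obj["in_video1"] and obj["in_video2"]:
        return "Moved"
    if obj["in_video1"]:
        return "Removed"
    if obj["in_video2"]:
        return "Added"
    raise ValueError("object present in neither video")
```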

### Loading Masks

To convert an RLE mask from `segments.pkl` back to a tensor:

```python
import torch
from pycocotools import mask as mask_utils

# `rle_mask` is one entry of segments['video1_objects'][object_id][frame_id]
tensor_mask = torch.tensor(mask_utils.decode(rle_mask))
```
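
Decoded masks are plain binary arrays, so standard metrics apply directly. For instance, mask IoU — a common matching criterion for change detection (the official protocol lives in the code repo) — needs only NumPy:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks with the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:  # both masks empty: define IoU as 0
        return 0.0
    return float(np.logical_and(a, b).sum() / union)
```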

### Visualization
Install the dependencies and launch the viewer:
```bash
cd vis && pip install -r requirements.txt
python visualizer.py
```
Then open `http://localhost:5002` in a browser to view the annotated videos.

### Evaluation
Please refer to the [code repo](https://github.com/yuqunw/scene_diff?tab=readme-ov-file#evaluation) for the evaluation protocol.