Kaining committed · Commit bd5a045 · 1 Parent(s): 17c24c8

Update README.md

Files changed (1)
  1. README.md +101 -4
README.md CHANGED
@@ -1,9 +1,106 @@
  ---
- license: cc-by-nc-sa-4.0
  language:
  - en
  pretty_name: MOSEv2
- size_categories:
- - 1K<n<10K
  ---
- # MOSEv2: A More Challenging Dataset for Video Object Segmentation in Complex Scenes
  ---
+ license: cc-by-4.0
+ task_categories:
+ - object-detection
+ tags:
+ - video-object-segmentation
+ - computer-vision
+ - segmentation
+ - video-analysis
+ - benchmark
+ size_categories:
+ - 1K<n<10K
  language:
  - en
  pretty_name: MOSEv2
+ arxiv: 2412.04258
  ---
+
+ # MOSEv2: A More Challenging Dataset for Video Object Segmentation in Complex Scenes
+
+ ## Dataset Summary
+
+ MOSEv2 is a comprehensive video object segmentation dataset designed to advance VOS methods under real-world conditions. It consists of **5,024 videos** with **701,976 high-quality masks** for **10,074 objects** across **200 categories**.
+
+ 🏠 [Homepage](https://mose.video) | 📄 [Paper](https://arxiv.org/abs/xxxx.xxxxx) | 🔗 [GitHub](https://github.com/henghuiding/MOSE-api)
+
+ ## Dataset Description
+
+ Video object segmentation (VOS) aims to segment specified target objects throughout a video. Although state-of-the-art methods have achieved impressive performance (e.g., 90+% J&F) on existing benchmarks such as DAVIS and YouTube-VOS, these datasets primarily contain salient, dominant, and isolated objects, limiting their generalization to real-world scenarios. To advance VOS toward more realistic environments, coMplex video Object SEgmentation (MOSEv1) was introduced to facilitate VOS research in complex scenes. Building on the strengths of MOSEv1 and addressing its limitations, we present MOSEv2, a significantly more challenging dataset designed to further advance VOS methods under real-world conditions.
+
+ MOSEv2 introduces significantly greater scene complexity than existing datasets, including:
+
+ - **More frequent object disappearance and reappearance**
+ - **Severe occlusions and crowding**
+ - **Smaller objects**
+ - **Adverse weather conditions** (rain, snow, fog)
+ - **Low-light scenes** (nighttime, underwater)
+ - **Multi-shot sequences**
+ - **Camouflaged objects**
+ - **Non-physical targets** (shadows, reflections)
+ - **Scenarios requiring external knowledge**
+
+ We benchmark 20 representative VOS methods under 5 different settings and observe consistent performance drops. For example, SAM2 drops from 76.4% on MOSEv1 to only 50.9% on MOSEv2. We further evaluate 9 video object tracking methods and find similar declines, demonstrating that MOSEv2 presents challenges across tasks. These results highlight that despite high accuracy on existing datasets, current VOS methods still struggle under real-world complexities.
+
+ ## Benchmark Results
+
+ We evaluated 20 representative VOS methods and observed consistent performance drops compared to simpler datasets:
+ - **SAM2**: 76.4% (MOSEv1) → 50.9% (MOSEv2)
+ - Similar declines observed across 9 video object tracking methods
+
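The J&F scores above average two per-object quantities: region similarity J (the Jaccard index, i.e. mask IoU) and boundary accuracy F. As an illustration only, here is a minimal sketch of J for a single frame, using the common convention that two empty masks score 1.0; the official evaluation toolkit remains authoritative for the benchmark protocol.

```python
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """J metric: intersection-over-union of two binary masks.

    Returns 1.0 when both masks are empty (common VOS convention).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)

# Toy 4x4 masks: prediction is the ground-truth square shifted one row down.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True        # 4-pixel ground-truth square
pred = np.zeros((4, 4), dtype=bool)
pred[2:4, 1:3] = True      # overlaps gt on 2 pixels, union is 6
print(region_similarity(pred, gt))  # 2 / 6 ≈ 0.333
```

In the full benchmark, J is averaged over all annotated frames and objects and combined with F; this sketch covers only the per-frame region term.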
+
+ ## Dataset Structure
+
+ ```
+ <train/valid.tar.gz>
+ ├── Annotations
+ │   ├── <video_name_1>
+ │   │   ├── 00000.png
+ │   │   ├── 00001.png
+ │   │   └── ...
+ │   ├── <video_name_2>
+ │   │   ├── 00000.png
+ │   │   ├── 00001.png
+ │   │   └── ...
+ │   └── <video_name_...>
+ └── JPEGImages
+     ├── <video_name_1>
+     │   ├── 00000.jpg
+     │   ├── 00001.jpg
+     │   └── ...
+     ├── <video_name_2>
+     │   ├── 00000.jpg
+     │   ├── 00001.jpg
+     │   └── ...
+     └── <video_name_...>
+ ```
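In the layout above, each `Annotations/<video_name>/*.png` pairs with the same-named frame under `JPEGImages`. Assuming the DAVIS/MOSEv1-style convention (not confirmed here; check the official toolkit) that each annotation PNG stores an integer object ID per pixel with 0 as background — e.g. loaded via `np.array(PIL.Image.open(path))` — splitting a frame into per-object binary masks can be sketched as:

```python
import numpy as np

def split_instances(mask: np.ndarray) -> dict[int, np.ndarray]:
    """Split an ID-encoded annotation into per-object binary masks.

    Assumes each pixel holds an integer object ID, with 0 = background.
    """
    return {
        int(obj_id): mask == obj_id
        for obj_id in np.unique(mask)
        if obj_id != 0
    }

# Toy 3x4 annotation frame with two objects (IDs 1 and 2).
ann = np.array([
    [0, 1, 1, 0],
    [0, 1, 2, 2],
    [0, 0, 2, 2],
])
per_object = split_instances(ann)
print(sorted(per_object))   # [1, 2]
print(per_object[2].sum())  # 4 pixels belong to object 2
```

Object IDs are typically stable across a video's frames, so the same ID can be tracked from `00000.png` onward.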
+
+ ## BibTeX
+
+ ```
+ @article{MOSEv2,
+   title={{MOSEv2}: A More Challenging Dataset for Video Object Segmentation in Complex Scenes},
+   author={Ding, Henghui and Ying, Kaining and Liu, Chang and He, Shuting and Jiang, Xudong and Jiang, Yu-Gang and Torr, Philip HS and Bai, Song},
+   journal={arXiv preprint arXiv:2508.05630},
+   year={2025}
+ }
+ ```
+
+ ```
+ @inproceedings{MOSE,
+   title={{MOSE}: A New Dataset for Video Object Segmentation in Complex Scenes},
+   author={Ding, Henghui and Liu, Chang and He, Shuting and Jiang, Xudong and Torr, Philip HS and Bai, Song},
+   booktitle={ICCV},
+   year={2023}
+ }
+ ```
+
+ ## License
+
+ MOSEv2 is licensed under a [CC BY-NC-SA 4.0 License](https://creativecommons.org/licenses/by-nc-sa/4.0/). The data of MOSEv2 is released for non-commercial research purposes only.