---
task_categories:
- image-segmentation
- object-detection
- video-text-to-text
license: cc-by-nc-sa-4.0
---
# MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation
**[🏠[Project page]](https://henghuiding.github.io/MeViS/)**  **[πŸ“„[Paper]](https://huggingface.co/papers/2512.10945)**   **[πŸ“„[arXiv]](https://arxiv.org/abs/2308.08544)**   **[πŸ’Ύ[Evaluation Server v1 (legacy)]](https://codalab.lisn.upsaclay.fr/competitions/15094)**  **[πŸ”₯[Evaluation Server v2]](https://www.codabench.org/competitions/11420/)**
This repository contains the dataset for the **ICCV 2023** and **TPAMI 2025** papers:
> [MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation](https://huggingface.co/papers/2512.10945)<br>
> Henghui Ding, Chang Liu, Shuting He, Kaining Ying, Xudong Jiang, Chen Change Loy, Yu-Gang Jiang<br>
> TPAMI 2025
>
> [MeViS: A Large-scale Benchmark for Video Segmentation with Motion Expressions](https://arxiv.org/abs/2308.08544)<br>
> Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Chen Change Loy<br>
> ICCV 2023
<table border=1 frame=void>
<tr>
<td><img src="images/bird.gif" width="245"></td>
<td><img src="images/Cat.gif" width="245"></td>
<td><img src="images/coin.gif" width="245"></td>
</tr>
</table>
![teaser](images/teaser.png)
<p style="text-align:justify; text-justify:inter-ideograph;width:100%">Figure 1. Examples from <b>M</b>otion <b>e</b>xpressions <b>Vi</b>deo <b>S</b>egmentation (<b>MeViS</b>) showing the dataset’s nature and complexity. The selected target objects are masked in <font color="#FF6403">orange β–‡</font>. The expressions in MeViS primarily focus on motion attributes, making it impossible to identify the target object from a single frame. For example, the first example has three parrots with similar appearances, and the target object is identified as β€œ<i>The bird flying away</i>”. This object can only be recognized by capturing its motion throughout the video. The updated MeViS 2024 further includes motion-reasoning and no-target expressions, adds audio expressions alongside text, and provides mask and bounding-box trajectory annotations.</p>
<table border="0.6">
<caption align="center"><b>TABLE 1. Scale comparison between MeViS and existing language-guided video segmentation datasets.</b></caption>
<tbody>
<tr>
<th align="right" bgcolor="BBBBBB">Dataset</th>
<th align="center" bgcolor="BBBBBB">Pub.&Year</th>
<th align="center" bgcolor="BBBBBB">Videos</th>
<th align="center" bgcolor="BBBBBB">Object</th>
<th align="center" bgcolor="BBBBBB">Expression</th>
<th align="center" bgcolor="BBBBBB">Mask</th>
<th align="center" bgcolor="BBBBBB">Obj/Video</th>
<th align="center" bgcolor="BBBBBB">Obj/Expn</th>
<th align="center" bgcolor="BBBBBB">Target</th>
<th align="center" bgcolor="BBBBBB">Multi-target</th>
<th align="center" bgcolor="BBBBBB">No-target</th>
<th align="center" bgcolor="BBBBBB">Audio</th>
</tr>
<tr>
<td align="right"><a href="https://kgavrilyuk.github.io/publication/actor_action/" target="_blank">A2D&nbsp;Sentence</a></td>
<td align="center">CVPR&nbsp;2018</td>
<td align="center">3,782</td>
<td align="center">4,825</td>
<td align="center">6,656</td>
<td align="center">58k</td>
<td align="center">1.28</td>
<td align="center">1</td>
<td align="center">Actor</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="right" bgcolor="ECECEC"><a href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/video-segmentation/video-object-segmentation-with-language-referring-expressions" target="_blank">DAVIS17-RVOS</a></td>
<td align="center" bgcolor="ECECEC">ACCV&nbsp;2018</td>
<td align="center" bgcolor="ECECEC">90</td>
<td align="center" bgcolor="ECECEC">205</td>
<td align="center" bgcolor="ECECEC">205</td>
<td align="center" bgcolor="ECECEC">13.5k</td>
<td align="center" bgcolor="ECECEC">2.27</td>
<td align="center" bgcolor="ECECEC">1</td>
<td align="center" bgcolor="ECECEC">Object</td>
<td align="center" bgcolor="ECECEC">-</td>
<td align="center" bgcolor="ECECEC">-</td>
<td align="center" bgcolor="ECECEC">-</td>
</tr>
<tr>
<td align="right"><a href="https://youtube-vos.org/dataset/rvos/" target="_blank">ReferYoutubeVOS</a></td>
<td align="center">ECCV&nbsp;2020</td>
<td align="center">3,978</td>
<td align="center">7,451</td>
<td align="center">15,009</td>
<td align="center">131k</td>
<td align="center">1.86</td>
<td align="center">1</td>
<td align="center">Object</td>
<td align="center">-</td>
<td align="center">-</td>
<td align="center">-</td>
</tr>
<tr>
<td align="right" bgcolor="E5E5E5"><b>MeViS 2023</b></td>
<td align="center" bgcolor="E5E5E5"><b>ICCV&nbsp;2023</b></td>
<td align="center" bgcolor="E5E5E5"><b>2,006</b></td>
<td align="center" bgcolor="E5E5E5"><b>8,171</b></td>
<td align="center" bgcolor="E5E5E5"><b>28,570</b></td>
<td align="center" bgcolor="E5E5E5"><b>443k</b></td>
<td align="center" bgcolor="E5E5E5"><b>4.28</b></td>
<td align="center" bgcolor="E5E5E5"><b>1.59</b></td>
<td align="center" bgcolor="E5E5E5"><b>Object(s)</b></td>
<td align="center" bgcolor="E5E5E5">7,539</td>
<td align="center" bgcolor="E5E5E5">-</td>
<td align="center" bgcolor="E5E5E5">-</td>
</tr>
<tr>
<td align="right"><b>MeViS 2024</b></td>
<td align="center"><b>TPAMI</b></td>
<td align="center"><b>2,006</b></td>
<td align="center"><b>8,171</b></td>
<td align="center"><b>33,072</b></td>
<td align="center"><b>443k</b></td>
<td align="center"><b>4.28</b></td>
<td align="center"><b>1.58</b></td>
<td align="center"><b>Object(s)</b></td>
<td align="center">8,028</td>
<td align="center">3,503</td>
<td align="center">33,072</td>
</tr>
</tbody>
</table>
## MeViS v2 Dataset
**Dataset Split**
- 2,006 videos & 33,458 sentences in total;
- **Train set:** 1,662 videos & 27,502 sentences, used for training;
- **Val<sup>u</sup> set:** 50 videos & 907 sentences, ground-truth provided, used for offline self-evaluation (e.g., ablation study) during training;
- **Val set:** 140 videos & 2,523 sentences, ground-truth **not** provided, used for [**CodaBench online evaluation**](https://www.codabench.org/competitions/11420/);
- **Test set:** Will be progressively and selectively released and used for evaluation during the competition periods ([PVUW](https://pvuw.github.io/), [LSVOS](https://lsvos.github.io/));
We suggest reporting results on both the **Val<sup>u</sup>** set and the **Val** set.
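For a quick sanity check of a downloaded split, the statistics above can be recomputed from `meta_expressions.json`. Below is a minimal Python sketch; the key names (`videos`, `expressions`, `exp`, `frames`) are assumed from the Refer-YouTube-VOS-style layout described in the File Structure section and should be verified against your local copy.

```python
import json

# Minimal sketch: enumerate the referring expressions of the Train split.
with open("mevis/train/meta_expressions.json") as f:
    meta = json.load(f)

videos = meta["videos"]  # assumed key, Refer-YouTube-VOS-style layout
num_expressions = sum(len(v["expressions"]) for v in videos.values())
print(f"{len(videos)} videos, {num_expressions} expressions")

# Inspect one video: its frame list and the expressions that refer into it.
video_id, video = next(iter(videos.items()))
print(video_id, f"{len(video['frames'])} frames")
for exp_id, exp in video["expressions"].items():
    print(f"  {exp_id}: {exp['exp']}")
```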
## Online Evaluation
Please submit your results on the **Val** set to:
- πŸ’― v1 server (Closing Soon): [**CodaLab**](https://codalab.lisn.upsaclay.fr/competitions/15094)
- πŸ’― v2 server: [**CodaBench**](https://www.codabench.org/competitions/11420/).
It is strongly suggested to first evaluate your model locally on the **Val<sup>u</sup>** set before submitting your **Val** set results to the online evaluation system.
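For local self-evaluation on **Val<sup>u</sup>**, the region-similarity term J of the standard J&F metric is simply the mask IoU. Below is a minimal per-frame sketch; it is not the official evaluation script (the server additionally computes the boundary measure F and averages over expressions), and the empty-mask convention is an assumption you should align with the official tooling.

```python
import numpy as np

def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """Per-frame region similarity J: IoU between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Assumed convention for no-target expressions: an empty
        # prediction against an empty ground truth is a perfect match.
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)
```

Computing J locally between checkpoints is usually enough to catch regressions before spending a submission on the server.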
## File Structure
The dataset follows a structure similar to [Refer-YouTube-VOS](https://youtube-vos.org/dataset/rvos/). Each split consists of three parts: `JPEGImages`, which holds the frame images; `meta_expressions.json`, which provides the referring expressions and video metadata; and `mask_dict.json`, which contains the ground-truth object masks. Segmentation masks are stored in COCO RLE format, and expressions are organized similarly to Refer-YouTube-VOS.
Please note that while annotations are provided for all frames in the **Train** and **Val<sup>u</sup>** sets, the **Val** set provides only frame images and referring expressions for inference.
```
mevis
β”œβ”€β”€ train                      // Split Train
β”‚   β”œβ”€β”€ JPEGImages
β”‚   β”‚   β”œβ”€β”€ <video #1>
β”‚   β”‚   β”œβ”€β”€ <video #2>
β”‚   β”‚   └── <video #...>
β”‚   β”œβ”€β”€ mask_dict.json
β”‚   └── meta_expressions.json
β”œβ”€β”€ valid_u                    // Split Val^u
β”‚   β”œβ”€β”€ JPEGImages
β”‚   β”‚   └── <video ...>
β”‚   β”œβ”€β”€ mask_dict.json
β”‚   └── meta_expressions.json
└── valid                      // Split Val
    β”œβ”€β”€ JPEGImages
    β”‚   └── <video ...>
    └── meta_expressions.json
```
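The RLE masks can be decoded with `pycocotools`. The sketch below assumes `mask_dict.json` maps an annotation id to a per-frame list of RLE dicts (with `null` entries where the object is absent), consistent with the structure described above; verify this against your local copy.

```python
import json
from pycocotools import mask as mask_utils  # pip install pycocotools

with open("mevis/train/mask_dict.json") as f:
    mask_dict = json.load(f)

anno_id = next(iter(mask_dict))      # pick an arbitrary annotation id
rle_per_frame = mask_dict[anno_id]   # assumed: one RLE dict (or None) per frame
for t, rle in enumerate(rle_per_frame):
    if rle is None:                  # object not visible in this frame
        continue
    binary_mask = mask_utils.decode(rle)  # (H, W) uint8 array of 0/1
    print(t, binary_mask.shape, int(binary_mask.sum()))
```

Expressions in `meta_expressions.json` reference these annotation ids (via an `anno_id`-style field in the assumed layout), which is how the text and the masks are joined.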
## BibTeX
Please consider citing MeViS if it helps your research.
```latex
@inproceedings{MeViS,
  title={{MeViS}: A Large-scale Benchmark for Video Segmentation with Motion Expressions},
  author={Ding, Henghui and Liu, Chang and He, Shuting and Jiang, Xudong and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}
```
```latex
@inproceedings{GRES,
  title={{GRES}: Generalized Referring Expression Segmentation},
  author={Liu, Chang and Ding, Henghui and Jiang, Xudong},
  booktitle={CVPR},
  year={2023}
}
```
```latex
@article{VLT,
  title={{VLT}: Vision-language transformer and query generation for referring segmentation},
  author={Ding, Henghui and Liu, Chang and Wang, Suchen and Jiang, Xudong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}
```
Most of the videos in MeViS are sourced from [MOSE: Complex Video Object Segmentation Dataset](https://henghuiding.github.io/MOSE/).
```latex
@inproceedings{MOSE,
  title={{MOSE}: A New Dataset for Video Object Segmentation in Complex Scenes},
  author={Ding, Henghui and Liu, Chang and He, Shuting and Jiang, Xudong and Torr, Philip HS and Bai, Song},
  booktitle={ICCV},
  year={2023}
}
```