# MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation
**[[Project page]](https://henghuiding.github.io/MeViS/)** **[[arXiv]](https://arxiv.org/abs/2308.08544)** **[[Evaluation Server v1 (legacy)]](https://codalab.lisn.upsaclay.fr/competitions/15094)** **[[Evaluation Server v2]](https://www.codabench.org/competitions/11420/)**
This repository contains the code for the **ICCV 2023** and **TPAMI 2025** papers:
> [MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation](https://ieeexplore.ieee.org/abstract/document/11130435)
> Henghui Ding, Chang Liu, Shuting He, Kaining Ying, Xudong Jiang, Chen Change Loy, Yu-Gang Jiang
> TPAMI 2025
> [MeViS: A Large-scale Benchmark for Video Segmentation with Motion Expressions](https://arxiv.org/abs/2308.08544)
> Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Chen Change Loy
> ICCV 2023
### Abstract
This paper proposes a large-scale multi-modal dataset for referring motion expression video segmentation, focusing on segmenting and tracking target objects in videos based on language description of objects' motions. Existing referring video segmentation datasets often focus on salient objects and use language expressions rich in static attributes, potentially allowing the target object to be identified in a single frame. Such datasets underemphasize the role of motion in both videos and languages. To explore the feasibility of using motion expressions and motion reasoning clues for pixel-level video understanding, we introduce MeViS, a dataset containing 33,072 human-annotated motion expressions in both text and audio, covering 8,171 objects in 2,006 videos of complex scenarios. We benchmark 15 existing methods across 4 tasks supported by MeViS, including 6 referring video object segmentation (RVOS) methods, 3 audio-guided video object segmentation (AVOS) methods, 2 referring multi-object tracking (RMOT) methods, and 4 video captioning methods for the newly introduced referring motion expression generation (RMEG) task. The results demonstrate weaknesses and limitations of existing methods in addressing motion expression-guided video understanding. We further analyze the challenges and propose an approach LMPM++ for RVOS/AVOS/RMOT that achieves new state-of-the-art results. Our dataset provides a platform that facilitates the development of motion expression-guided video understanding algorithms in complex video scenes.

Figure 1. Examples from Motion expressions Video Segmentation (MeViS) showing the dataset's nature and complexity. The selected target objects are masked in orange. The expressions in MeViS primarily focus on motion attributes, making it impossible to identify the target object from a single frame. For example, the first example has three parrots with similar appearances, and the target object is identified as "The bird flying away". This object can only be recognized by capturing its motion throughout the video. The updated MeViS 2024 further provides motion-reasoning and no-target expressions, adds audio expressions alongside text, and provides mask and bounding box trajectory annotations.
TABLE 1. Scale comparison between MeViS and existing language-guided video segmentation datasets.
| Dataset | Pub. & Year | Videos | Object | Expression | Mask | Obj/Video | Obj/Expn | Target | Multi-target | No-target | Audio |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A2D Sentence | CVPR 2018 | 3,782 | 4,825 | 6,656 | 58k | 1.28 | 1 | Actor | - | - | - |
| DAVIS17-RVOS | ACCV 2018 | 90 | 205 | 205 | 13.5k | 2.27 | 1 | Object | - | - | - |
| ReferYoutubeVOS | ECCV 2020 | 3,978 | 7,451 | 15,009 | 131k | 1.86 | 1 | Object | - | - | - |
| MeViS 2023 | ICCV 2023 | 2,006 | 8,171 | 28,570 | 443k | 4.28 | 1.59 | Object(s) | 7,539 | - | - |
| MeViS 2024 | TPAMI | 2,006 | 8,171 | 33,072 | 443k | 4.28 | 1.58 | Object(s) | 8,028 | 3,503 | 33,072 |
## MeViS v2 Dataset
**Dataset Split**
- 2,006 videos & 33,458 sentences in total;
- **Train set:** 1,662 videos & 27,502 sentences, used for training;
- **Valu set:** 50 videos & 907 sentences, ground truth provided, used for offline self-evaluation (e.g., ablation studies) during training;
- **Val set:** 140 videos & 2,523 sentences, ground truth **not** provided, used for [**CodaBench online evaluation**](https://www.codabench.org/competitions/11420/);
- **Test set:** Will be progressively and selectively released and used for evaluation during the competition periods ([PVUW](https://pvuw.github.io/), [LSVOS](https://lsvos.github.io/));
It is suggested to report results on both the **Valu** set and the **Val** set.
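For local bookkeeping, the split sizes listed above can be checked directly against each split's `meta_expressions.json`. The sketch below assumes the Refer-YouTube-VOS-style layout described under File Structure further down (a top-level `videos` dictionary whose entries each carry an `expressions` dictionary); the folder names other than `train` are assumptions for illustration.

```python
import json
from pathlib import Path

def split_stats(split_dir):
    """Count the videos and expressions listed in one split's meta_expressions.json."""
    with open(Path(split_dir) / "meta_expressions.json") as f:
        videos = json.load(f)["videos"]
    n_expressions = sum(len(v["expressions"]) for v in videos.values())
    return len(videos), n_expressions

# Folder names other than "train" are assumptions for illustration.
for split in ["train", "valid_u", "valid"]:
    try:
        n_vid, n_exp = split_stats(f"mevis/{split}")
        print(f"{split}: {n_vid} videos, {n_exp} expressions")
    except FileNotFoundError:
        print(f"{split}: meta_expressions.json not found")
```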
## Online Evaluation
Please submit your results on the **Val** set to one of the following servers:
- v1 server (closing soon): [**CodaLab**](https://codalab.lisn.upsaclay.fr/competitions/15094)
- v2 server: [**CodaBench**](https://www.codabench.org/competitions/11420/)
It is strongly suggested to first evaluate your model locally on the **Valu** set before submitting your **Val** set results to the online evaluation system.
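The exact submission layout is defined on the evaluation server itself; the sketch below only illustrates one common convention for such servers (per-expression folders of per-frame PNG masks zipped into a single archive). Every path in it, including the `output/val` prediction folder, is a hypothetical placeholder rather than the documented format.

```python
import zipfile
from pathlib import Path

def pack_predictions(pred_root, out_zip="val_submission.zip"):
    """Zip predicted masks laid out as <video>/<exp_id>/<frame>.png.

    NOTE: this layout is an assumption for illustration only; always follow
    the submission instructions published on the evaluation server.
    """
    pred_root = Path(pred_root)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for png in sorted(pred_root.rglob("*.png")):
            # Store paths relative to the prediction root so the archive
            # unpacks directly into <video>/<exp_id>/<frame>.png.
            zf.write(png, png.relative_to(pred_root))

# pack_predictions("output/val")  # hypothetical prediction folder
```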
## File Structure
The dataset follows a similar structure to [Refer-YouTube-VOS](https://youtube-vos.org/dataset/rvos/). Each split of the dataset consists of three parts: `JPEGImages`, which holds the frame images; `meta_expressions.json`, which provides the referring expressions and video metadata; and `mask_dict.json`, which contains the ground-truth object masks. Ground-truth segmentation masks are stored in COCO RLE format, and expressions are organized similarly to Refer-YouTube-VOS.
Please note that while annotations for all frames in the **Train** set and the **Valu** set are provided, the **Val** set only provides frame images and referring expressions for inference.
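As a concrete illustration, the sketch below decodes the per-frame masks of one expression. It assumes (for illustration only) that each expression entry in `meta_expressions.json` carries a list of annotation ids under `anno_id`, and that `mask_dict.json` maps each annotation id to a per-frame list of COCO-RLE masks; these field names are not spelled out above and should be verified against the downloaded files.

```python
import json
import numpy as np
from pycocotools import mask as coco_mask  # pip install pycocotools

def load_expression_masks(split_dir, video_id, exp_id):
    """Decode per-frame binary masks for one referring expression.

    Assumes (illustration only) that each expression entry lists its
    annotation ids under "anno_id", and that mask_dict.json maps each
    annotation id to a per-frame list of COCO-RLE masks (None where the
    object is absent in that frame).
    """
    with open(f"{split_dir}/meta_expressions.json") as f:
        video_meta = json.load(f)["videos"][video_id]
    with open(f"{split_dir}/mask_dict.json") as f:
        mask_dict = json.load(f)

    exp_info = video_meta["expressions"][exp_id]
    per_frame_masks = []
    for t in range(len(video_meta["frames"])):
        frame_mask = None
        for anno_id in exp_info["anno_id"]:
            rle = mask_dict[str(anno_id)][t]
            if rle is None:
                continue
            if isinstance(rle["counts"], str):
                # pycocotools expects bytes for compressed RLE counts.
                rle = {"size": rle["size"], "counts": rle["counts"].encode()}
            m = coco_mask.decode(rle)  # H x W uint8 mask
            frame_mask = m if frame_mask is None else np.maximum(frame_mask, m)
        per_frame_masks.append(frame_mask)
    return exp_info["exp"], video_meta["frames"], per_frame_masks
```

Each returned mask aligns with the corresponding frame image under `JPEGImages/<video_id>/`.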
```
mevis
├── train                      // Split Train
│   ├── JPEGImages
│   │   ├──