Improve dataset card: Add metadata, update paper link, remove abstract
#1 by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,10 +1,18 @@
+ ---
+ task_categories:
+ - image-segmentation
+ - object-detection
+ - video-text-to-text
+ license: cc-by-nc-sa-4.0
+ ---
+
# MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation

- **[
+ **[🏠[Project page]](https://henghuiding.github.io/MeViS/)**&nbsp;&nbsp; **[📄[Paper]](https://huggingface.co/papers/2512.10945)**&nbsp;&nbsp;&nbsp; **[📄[arXiv]](https://arxiv.org/abs/2308.08544)**&nbsp;&nbsp;&nbsp; **[💾[Evaluation Server v1 (legacy)]](https://www.codabench.org/competitions/11420/)**&nbsp;&nbsp; **[🔥[Evaluation Server v2]](https://www.codabench.org/competitions/11420/)**

This repository contains code for **ICCV2023** and **TPAMI 2025** paper:

- > [MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation](https://
+ > [MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation](https://huggingface.co/papers/2512.10945)
> Henghui Ding, Chang Liu, Shuting He, Kaining Ying, Xudong Jiang, Chen Change Loy, Yu-Gang Jiang
> TPAMI 2025

@@ -20,10 +28,6 @@ This repository contains code for **ICCV2023** and **TPAMI 2025** paper:
</tr>
</table>

- ### Abstract
-
- This paper proposes a large-scale multi-modal dataset for referring motion expression video segmentation, focusing on segmenting and tracking target objects in videos based on language description of objects’ motions. Existing referring video segmentation datasets often focus on salient objects and use language expressions rich in static attributes, potentially allowing the target object to be identified in a single frame. Such datasets underemphasize the role of motion in both videos and languages. To explore the feasibility of using motion expressions and motion reasoning clues for pixel-level video understanding, we introduce MeViS, a dataset containing 33,072 human-annotated motion expressions in both text and audio, covering 8,171 objects in 2,006 videos of complex scenarios. We benchmark 15 existing methods across 4 tasks supported by MeViS, including 6 referring video object segmentation (RVOS) methods, 3 audio-guided video object segmentation (AVOS) methods, 2 referring multi-object tracking (RMOT) methods, and 4 video captioning methods for the newly introduced referring motion expression generation (RMEG) task. The results demonstrate weaknesses and limitations of existing methods in addressing motion expression-guided video understanding. We further analyze the challenges and propose an approach LMPM++ for RVOS/AVOS/RMOT that achieves new state-of-the-art results. Our dataset provides a platform that facilitates the development of motion expression-guided video understanding algorithms in complex video scenes.
-


<p style="text-align:justify; text-justify:inter-ideograph;width:100%">Figure 1. Examples from <b>M</b>otion <b>e</b>xpressions <b>Vi</b>deo <b>S</b>egmentation (<b>MeViS</b>) showing the dataset’s nature and complexity. The selected target objects are masked in <font color="#FF6403">orange ▇</font>. The expressions in MeViS primarily focus on motion attributes, making it impossible to identify the target object from a single frame. For example, the first example has three parrots with similar appearances, and the target object is identified as “<i>The bird flying away</i>”. This object can only be recognized by capturing its motion throughout the video. The updated MeViS 2024 further provides motion-reasoning and no-target expressions, adds audio expressions alongside text, and provides mask and bounding box trajectory annotations.</p>

@@ -226,6 +230,4 @@ A majority of videos in MeViS are from [MOSE: Complex Video Object Segmentation
booktitle={ICCV},
year={2023}
}
- ```
-
- MeViS is licensed under a CC BY-NC-SA 4.0 License. The data of MeViS is released for non-commercial research purpose only.
+ ```
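Once the YAML front-matter added in the first hunk is merged, the `task_categories` and `license` fields become machine-readable card metadata on the Hub. Below is a minimal sketch of reading them back with the `huggingface_hub` client; the repo id is a placeholder assumption, not something stated in this PR:

```python
from huggingface_hub import DatasetCard

# Placeholder repo id (assumption): substitute the actual Hub id of the
# MeViS dataset repository this card belongs to.
card = DatasetCard.load("your-org/MeViS")

# Fields from the card's YAML front-matter are exposed as structured metadata.
print(card.data.license)          # expected: "cc-by-nc-sa-4.0"
print(card.data.task_categories)  # expected: ["image-segmentation", "object-detection", "video-text-to-text"]

# The markdown body below the closing "---" is available separately.
print(card.text[:120])
```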