### License

This benchmark uses nine datasets, each of which is employed strictly in accordance with its official license and exclusively for academic research purposes. We fully respect the datasets’ copyright policies, license requirements, and ethical standards. For datasets whose licenses explicitly permit redistribution, we release the original video data (e.g., [MIntRec](https://github.com/thuiar/MIntRec), [MIntRec2.0](https://github.com/thuiar/MIntRec2.0), [MELD](https://github.com/declare-lab/MELD), [UR-FUNNY-v2](https://github.com/ROC-HCI/UR-FUNNY), [MUStARD](https://github.com/soujanyaporia/MUStARD), [MELD-DA](https://github.com/sahatulika15/EMOTyDA), [CH-SIMS v2.0](https://github.com/thuiar/ch-sims-v2), and [Anno-MI](https://github.com/uccollab/AnnoMI)). For datasets that restrict video redistribution, users should obtain the videos directly from their official repositories (e.g., [MOSI](https://github.com/matsuolab/CMU-MultimodalSDK) and [IEMOCAP and IEMOCAP-DA](https://sail.usc.edu/iemocap)). In compliance with all relevant licenses, we also provide the original textual data unchanged, together with the specific dataset splits used in our experiments. This ensures reproducibility and academic transparency while strictly adhering to copyright obligations and protecting the privacy of individuals featured in the videos.

## Leaderboard

### Rank of Zero-shot Inference

| Rank | Models           | ACC   | Type |
| :--: | :--------------: | :---: | :--: |
| 🥇   | GPT-4o           | 52.60 | MLLM |
| 🥈   | Qwen2-VL-72B     | 52.55 | MLLM |
| 🥉   | LLaVA-OV-72B     | 52.44 | MLLM |
| 4    | LLaVA-Video-72B  | 51.64 | MLLM |
| 5    | InternLM2.5-7B   | 50.28 | LLM  |
| 6    | Qwen2-7B         | 48.45 | LLM  |
| 7    | Qwen2-VL-7B      | 47.12 | MLLM |
| 8    | Llama3-8B        | 44.06 | LLM  |
| 9    | LLaVA-Video-7B   | 43.32 | MLLM |
| 10   | VideoLLaMA2-7B   | 42.82 | MLLM |
| 11   | LLaVA-OV-7B      | 40.65 | MLLM |
| 12   | Qwen2-1.5B       | 40.61 | LLM  |
| 13   | MiniCPM-V-2.6-8B | 37.03 | MLLM |
| 14   | Qwen2-0.5B       | 22.14 | LLM  |
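
The ACC column is (we assume) a macro-average of per-dataset accuracy over the nine MMLA datasets. A minimal sketch of such an aggregation, with entirely hypothetical per-dataset scores:

```python
# Hypothetical per-dataset zero-shot accuracies (%) for one model;
# real numbers come from evaluating on the nine MMLA datasets.
per_dataset_acc = {
    "MIntRec": 55.1, "MIntRec2.0": 48.3, "MELD": 57.9,
    "UR-FUNNY-v2": 50.2, "MUStARD": 49.0, "MELD-DA": 51.7,
    "CH-SIMS v2.0": 54.4, "Anno-MI": 47.6, "MOSI": 59.2,
}

def overall_acc(scores: dict) -> float:
    """Macro-average accuracy across datasets, rounded to two decimals."""
    return round(sum(scores.values()) / len(scores), 2)

print(overall_acc(per_dataset_acc))
```

Each dataset contributes equally regardless of its size; a micro-average weighted by the number of test utterances would be the alternative design choice.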

### Rank of Supervised Fine-tuning (SFT) and Instruction Tuning (IT)

| Rank | Models                 | ACC   | Type |
| :--: | :--------------------: | :---: | :--: |
| 🥇   | Qwen2-VL-72B (SFT)     | 69.18 | MLLM |
| 🥈   | MiniCPM-V-2.6-8B (SFT) | 68.88 | MLLM |
| 🥉   | LLaVA-Video-72B (IT)   | 68.87 | MLLM |
| 4    | LLaVA-OV-72B (SFT)     | 68.67 | MLLM |
| 5    | Qwen2-VL-72B (IT)      | 68.64 | MLLM |
| 6    | LLaVA-Video-72B (SFT)  | 68.44 | MLLM |
| 7    | VideoLLaMA2-7B (SFT)   | 68.30 | MLLM |
| 8    | Qwen2-VL-7B (SFT)      | 67.60 | MLLM |
| 9    | LLaVA-OV-7B (SFT)      | 67.54 | MLLM |
| 10   | LLaVA-Video-7B (SFT)   | 67.47 | MLLM |
| 11   | Qwen2-VL-7B (IT)       | 67.34 | MLLM |
| 12   | MiniCPM-V-2.6-8B (IT)  | 67.25 | MLLM |
| 13   | Llama3-8B (SFT)        | 66.18 | LLM  |
| 14   | Qwen2-7B (SFT)         | 66.15 | LLM  |
| 15   | InternLM2.5-7B (SFT)   | 65.72 | LLM  |
| 16   | Qwen2-7B (IT)          | 64.58 | LLM  |
| 17   | InternLM2.5-7B (IT)    | 64.41 | LLM  |
| 18   | Llama3-8B (IT)         | 64.16 | LLM  |
| 19   | Qwen2-1.5B (SFT)       | 64.00 | LLM  |
| 20   | Qwen2-0.5B (SFT)       | 62.80 | LLM  |

## Acknowledgements

For more details, please refer to our [Github repo](https://github.com/thuiar/MMLA). If our work is helpful to your research, please consider citing the following paper:

```
@article{zhang2025mmla,
  author={Zhang, Hanlei and Li, Zhuohang and Zhu, Yeshuang and Xu, Hua and Wang, Peiwu and Zhu, Haige and Zhou, Jie and Zhang, Jinchao},
  title={Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark},
  year={2025},
  journal={arXiv preprint arXiv:2504.16427},
}
```