HanleiZhang committed on
Commit a9ee241 · verified · Parent: 1055d77

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -9,7 +9,7 @@ size_categories:
 ---
 # Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark
 
-## Introduction
+## 1. Introduction
 
 MMLA is the first comprehensive multimodal language analysis benchmark for evaluating foundation models. It has the following features:
 
@@ -21,7 +21,7 @@ MMLA is the first comprehensive multimodal language analysis benchmark for evalu
 
 We also build baselines with three evaluation methods (i.e., zero-shot inference, supervised fine-tuning, and instruction tuning) on 8 mainstream foundation models (i.e., 5 MLLMs (Qwen2-VL, VideoLLaMA2, LLaVA-Video, LLaVA-OV, MiniCPM-V-2.6), 3 LLMs (InternLM2.5, Qwen2, LLaMA3). More details can refer to our [paper](https://arxiv.org/abs/2504.16427).
 
-## Datasets
+## 2. Datasets
 
 ### Statistics
 
@@ -47,7 +47,7 @@ Dataset statistics for each dimension in the MMLA benchmark. #C, #U, #Train, #Va
 This benchmark uses nine datasets, each of which is employed strictly in accordance with its official license and exclusively for academic research purposes. We fully respect the datasets’ copyright policies, license requirements, and ethical standards. For those datasets whose licenses explicitly permit redistribution, we release the original video data (e.g., [MIntRec](https://github.com/thuiar/MIntRec), [MIntRec2.0](https://github.com/thuiar/MIntRec2.0), [MELD](https://github.com/declare-lab/MELD), [UR-FUNNY-v2](https://github.com/ROC-HCI/UR-FUNNY), [MUStARD](https://github.com/soujanyaporia/MUStARD), [MELD-DA](https://github.com/sahatulika15/EMOTyDA), [CH-SIMS v2.0](https://github.com/thuiar/ch-sims-v2), and [Anno-MI](https://github.com/uccollab/AnnoMI). For datasets that restrict video redistribution, users should obtain the videos directly from their official repositories (e.g., [MOSI](https://github.com/matsuolab/CMU-MultimodalSDK), [IEMOCAP and IEMOCAP-DA](https://sail.usc.edu/iemocap). In compliance with all relevant licenses, we also provide the original textual data unchanged, together with the specific dataset splits used in our experiments. This approach ensures reproducibility and academic transparency while strictly adhering to copyright obligations and protecting the privacy of individuals featured in the videos.
 
 
-## LeaderBoard
+## 3. LeaderBoard
 
 ### Rank of Zero-shot Inference
 
@@ -93,7 +93,7 @@ This benchmark uses nine datasets, each of which is employed strictly in accorda
 | 19 | Qwen2-1.5B (SFT) | 64.00 | LLM |
 | 20 | Qwen2-0.5B (SFT) | 62.80 | LLM |
 
-## Acknowledgements
+## 4. Acknowledgements
 
 For more details, please refer to our [Github repo](https://github.com/thuiar/MMLA). If our work is helpful to your research, please consider citing the following paper:
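The README's baselines paragraph describes zero-shot inference, where a model must pick one label per utterance and is ranked by accuracy (as in the leaderboard tables). A minimal sketch of that protocol, assuming a simple prompt format and exact-match scoring; `build_prompt` and `accuracy` are illustrative names, not the MMLA authors' actual code:

```python
# Hypothetical sketch of the zero-shot evaluation protocol: the model receives
# an utterance and the candidate labels for one semantic dimension, answers
# with a single label, and is scored by exact-match accuracy.

def build_prompt(utterance: str, labels: list[str]) -> str:
    """Format a zero-shot classification prompt for one semantic dimension."""
    return (
        f"Utterance: {utterance}\n"
        f"Choose exactly one label from: {', '.join(labels)}.\n"
        "Answer with the label only."
    )

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Case-insensitive exact-match accuracy, reported as a percentage."""
    correct = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return 100.0 * correct / len(references)
```

For example, `build_prompt("Are you kidding me?", ["complain", "praise", "joke"])` yields a prompt listing the three candidate labels, and `accuracy` over all utterances in a dimension gives the per-dimension score averaged into the leaderboard ranking.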