HanleiZhang committed on
Commit 240c6af · verified · 1 Parent(s): be76504

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED

@@ -23,7 +23,7 @@ We also build baselines with three evaluation methods (i.e., zero-shot inference
 
 ## 2. Datasets
 
-### Statistics
+#### 2.1 Statistics
 
 Dataset statistics for each dimension in the MMLA benchmark. #C, #U, #Train, #Val, and #Test represent the number of label classes, utterances, training, validation, and testing samples, respectively. avg. and max. refer to the average and maximum lengths.
 | **Dimensions** | **Datasets** | **#C** | **#U** | **#Train** | **#Val** | **#Test** | **Video Hours** | **Source** | **#Video Length (avg. / max.)** | **#Text Length (avg. / max.)** | **Language** |
@@ -41,14 +41,14 @@ Dataset statistics for each dimension in the MMLA benchmark. #C, #U, #Train, #Va
 | **Communication Behavior**| Anno-MI (client) | 3 | 4,713 | 3,123 | 461 | 1,128 | 10.8 | YouTube & Vimeo | 8.2 / 600.0 | 16.3 / 266.0 | **English** |
 | | Anno-MI (therapist) | 4 | 4,773 | 3,161 | 472 | 1,140 | 12.1 | | 9.1 / 1316.1 | 17.9 / 205.0 | |
 
-### License
+#### 2.2 License
 
 This benchmark uses nine datasets, each of which is employed strictly in accordance with its official license and exclusively for academic research purposes. We fully respect the datasets’ copyright policies, license requirements, and ethical standards. For those datasets whose licenses explicitly permit redistribution, we release the original video data (e.g., [MIntRec](https://github.com/thuiar/MIntRec), [MIntRec2.0](https://github.com/thuiar/MIntRec2.0), [MELD](https://github.com/declare-lab/MELD), [UR-FUNNY-v2](https://github.com/ROC-HCI/UR-FUNNY), [MUStARD](https://github.com/soujanyaporia/MUStARD), [MELD-DA](https://github.com/sahatulika15/EMOTyDA), [CH-SIMS v2.0](https://github.com/thuiar/ch-sims-v2), and [Anno-MI](https://github.com/uccollab/AnnoMI)). For datasets that restrict video redistribution, users should obtain the videos directly from their official repositories (e.g., [MOSI](https://github.com/matsuolab/CMU-MultimodalSDK), [IEMOCAP and IEMOCAP-DA](https://sail.usc.edu/iemocap)). In compliance with all relevant licenses, we also provide the original textual data unchanged, together with the specific dataset splits used in our experiments. This approach ensures reproducibility and academic transparency while strictly adhering to copyright obligations and protecting the privacy of individuals featured in the videos.
 
 
 ## 3. LeaderBoard
 
-Rank of Zero-shot Inference:
+#### 3.1 Rank of Zero-shot Inference:
 
 | RANK | Models | ACC | TYPE |
 | :--: | :--------------: | :---: | :--: |
@@ -67,7 +67,7 @@ Rank of Zero-shot Inference:
 | 13 | MiniCPM-V-2.6-8B | 37.03 | MLLM |
 | 14 | Qwen2-0.5B | 22.14 | LLM |
 
-Rank of Supervised Fine-tuning (SFT) and Instruction Tuning (IT)
+#### 3.2 Rank of Supervised Fine-tuning (SFT) and Instruction Tuning (IT)
 
 | Rank | Models | ACC | Type |
 | :--: | :--------------------: | :---: | :--: |