Commit 0c13cd8 (verified) · 0 parent(s)

Duplicate from NUSTM/ECF

Co-authored-by: Fanfan Wang <ffwang@users.noreply.huggingface.co>

Files changed:
- .gitattributes +58 -0
- README.md +79 -0
- SemEval-2024/README.md +20 -0
- SemEval-2024/Subtask_1_test.json +0 -0
- SemEval-2024/Subtask_1_train.json +0 -0
- SemEval-2024/Subtask_2_test.json +0 -0
- SemEval-2024/Subtask_2_train.json +0 -0
- span/README.txt +0 -0
- span/dev.json +0 -0
- span/test.json +0 -0
- span/train.json +0 -0
- utterance/README.md +0 -0
- utterance/dev.json +0 -0
- utterance/test.json +0 -0
- utterance/train.json +0 -0
.gitattributes
ADDED (@@ -0,0 +1,58 @@)

```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
```
README.md
ADDED (@@ -0,0 +1,79 @@)

---
license: gpl-3.0
language:
- en
tags:
- emotion-cause-analysis
---

# Emotion-Cause-in-Friends (ECF)

For the task of Multimodal Emotion-Cause Pair Extraction in Conversations, we constructed a multimodal conversational emotion-cause dataset, ECF (1.0). For SemEval-2024 Task 3, we additionally annotated an extended test set as the evaluation data. *Note*: ECF (1.0) and the extended test set for the SemEval evaluation together constitute ECF 2.0.

For more details, please refer to our GitHub repositories:

- [Multimodal Emotion-Cause Pair Extraction in Conversations](https://github.com/NUSTM/MECPE/tree/main/data)
- [SemEval-2024 Task 3](https://github.com/NUSTM/SemEval-2024_ECAC)

## Dataset Statistics

| Item                            | Train | Dev   | Test  | Total  | Evaluation Data for SemEval-2024 Task 3 |
| ------------------------------- | ----- | ----- | ----- | ------ | --------------------------------------- |
| Conversations                   | 1,001 | 112   | 261   | 1,374  | 341                                     |
| Utterances                      | 9,966 | 1,087 | 2,566 | 13,619 | 3,101                                   |
| Emotion (utterances)            | 5,577 | 668   | 1,445 | 7,690  | 1,821                                   |
| Emotion-cause (utterance) pairs | 7,055 | 866   | 1,873 | 9,794  | 2,462                                   |

## Supported Tasks

- Multimodal Emotion Recognition in Conversation (ERC)
- Causal/Cause Span Extraction (CSE)
- Emotion Cause Extraction (ECE) / Causal Emotion Entailment (CEE)
- Multimodal Emotion-Cause Pair Extraction in Conversation (MECPE)
- ...

## About Multimodal Data

⚠️ Due to potential copyright issues with the TV show _Friends_, we cannot provide pre-segmented video clips.

If you need to use the multimodal data, you may consider the following options:

1. Use the acoustic and visual features we provide:
   - [`audio_embedding_6373.npy`](https://drive.google.com/file/d/1EhU2jFSr_Vi67Wdu1ARJozrTJtgiQrQI/view?usp=share_link): the embedding table composed of the 6,373-dimensional acoustic features of each utterance, extracted with openSMILE
   - [`video_embedding_4096.npy`](https://drive.google.com/file/d/1NGSsiQYDTqgen_g9qndSuha29JA60x14/view?usp=share_link): the embedding table composed of the 4,096-dimensional visual features of each utterance, extracted with a 3D-CNN
   - Please note that these features cover only the original ECF (1.0) dataset; the SemEval evaluation data is not included. If needed, you can contact us, and we will do our best to release new features.

2. Download the raw video clips from [MELD](https://github.com/declare-lab/MELD). Since ECF (1.0) was constructed on top of the MELD dataset, most utterances in ECF (1.0) correspond to those in MELD; the correspondence is given in the last column of [all_data_pair_ECFvsMELD.txt](https://github.com/NUSTM/MECPE/blob/main/data/all_data_pair_ECFvsMELD.txt). However, **we made certain modifications to MELD's raw data while constructing ECF, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances**. As a result, some timestamps in ECF (1.0) have been corrected and may differ from those in MELD, and some new utterances cannot be found in MELD at all. Given this, we recommend option (3) if feasible.

3. Download the raw videos of _Friends_ from the website, and use the FFmpeg toolkit to extract audio-visual clips of each utterance based on the timestamps we provide.
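Options 1 and 3 above can be sketched in Python as follows. This is a minimal, hypothetical sketch rather than official tooling: the file paths and timestamps are placeholders, the `.npy` tables are assumed to be indexed by global utterance order (check the MECPE repository for the exact mapping), and `ffmpeg` must be installed separately.

```python
import subprocess

import numpy as np


def load_utterance_features(audio_path, video_path, utt_idx):
    """Option 1: look up the pre-extracted features for one utterance.

    Assumes the embedding tables are indexed by global utterance order;
    verify the exact index mapping against the MECPE repository.
    """
    audio = np.load(audio_path)  # expected shape: (num_utterances, 6373)
    video = np.load(video_path)  # expected shape: (num_utterances, 4096)
    return audio[utt_idx], video[utt_idx]


def cut_clip(src_video, start, end, out_path, run=True):
    """Option 3: extract one utterance's clip from a raw episode video.

    `start`/`end` are "HH:MM:SS.mmm" timestamps from the annotations.
    Stream copy (-c copy) is fast but cuts on nearby keyframes;
    re-encode instead if you need frame-accurate boundaries.
    """
    cmd = [
        "ffmpeg", "-y",
        "-i", src_video,
        "-ss", start, "-to", end,
        "-c", "copy",
        out_path,
    ]
    if run:
        subprocess.run(cmd, check=True)
    return cmd


# Hypothetical usage; real filenames depend on your downloads:
# a_feat, v_feat = load_utterance_features(
#     "audio_embedding_6373.npy", "video_embedding_4096.npy", 0)
# cut_clip("friends_s01e01.mp4", "00:03:12.500", "00:03:15.200",
#          "dia1utt1.mp4")
```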

## Citation

If you find ECF useful for your research, please cite our papers with the following BibTeX entries:

```
@article{wang2023multimodal,
  author    = {Wang, Fanfan and Ding, Zixiang and Xia, Rui and Li, Zhaoyu and Yu, Jianfei},
  journal   = {IEEE Transactions on Affective Computing},
  title     = {Multimodal Emotion-Cause Pair Extraction in Conversations},
  year      = {2023},
  volume    = {14},
  number    = {3},
  pages     = {1832--1844},
  doi       = {10.1109/TAFFC.2022.3226559}
}

@inproceedings{wang2024SemEval,
  author    = {Wang, Fanfan and Ma, Heqing and Xia, Rui and Yu, Jianfei and Cambria, Erik},
  title     = {SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations},
  booktitle = {Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)},
  month     = {June},
  year      = {2024},
  address   = {Mexico City, Mexico},
  publisher = {Association for Computational Linguistics},
  pages     = {2022--2033},
  url       = {https://aclanthology.org/2024.semeval2024-1.273}
}
```
SemEval-2024/README.md
ADDED (@@ -0,0 +1,20 @@)

# Dataset for SemEval-2024 Task 3

The ECF 2.0 dataset for [**SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations**](https://aclanthology.org/2024.semeval-1.277) is released here.

In our preliminary work [MECPE](https://github.com/NUSTM/MECPE), we constructed the **ECF (1.0)** dataset. For this SemEval competition, **the entire ECF (1.0) dataset serves as the training data**, and we **additionally annotated a new test set** as the evaluation data; together they constitute the **ECF 2.0** dataset.

## File Description

- `Subtask_1_train.json` and `Subtask_2_train.json` contain the training data for the two subtasks of SemEval-2024: all instances are stacked into a list, and each instance is stored as a dictionary. `Subtask_1_test.json` and `Subtask_2_test.json` contain the evaluation data in the same format.
- For Subtask 2, refer to the JSON files to obtain the **timestamps**, then download the raw videos from the web and process the multimodal data yourself. ⚠️ **Due to potential copyright issues, we do not provide pre-segmented video clips.**
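A split can be loaded with only the standard library, assuming nothing beyond the list-of-dictionaries structure described above (the field names vary by subtask, so inspect one instance's keys to see the schema):

```python
import json


def load_split(path):
    """Load one of the Subtask_*.json files: a list of instances,
    each stored as a dictionary."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    # Sanity-check the documented structure before returning it.
    if not (isinstance(data, list) and all(isinstance(x, dict) for x in data)):
        raise ValueError(f"unexpected structure in {path}")
    return data


# e.g. train = load_split("Subtask_1_train.json")
#      print(len(train), sorted(train[0].keys()))
```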

📢 Note: to keep the competition fair, the evaluation data released during the evaluation phase of SemEval-2024 included some noise data that was not intended for evaluation. In the newly released version, the noise data has been removed and the labels have been made publicly available.

## Dataset Statistics

|                    | Training Data | Evaluation Data |
| ------------------ | ------------- | --------------- |
| # of conversations | 1,374         | 341             |
| # of utterances    | 13,619        | 3,101           |
SemEval-2024/Subtask_1_test.json
ADDED (diff too large to render)

SemEval-2024/Subtask_1_train.json
ADDED (diff too large to render)

SemEval-2024/Subtask_2_test.json
ADDED (diff too large to render)

SemEval-2024/Subtask_2_train.json
ADDED (diff too large to render)

span/README.txt
ADDED (empty file)

span/dev.json
ADDED (diff too large to render)

span/test.json
ADDED (diff too large to render)

span/train.json
ADDED (diff too large to render)

utterance/README.md
ADDED (empty file)

utterance/dev.json
ADDED (diff too large to render)

utterance/test.json
ADDED (diff too large to render)

utterance/train.json
ADDED (diff too large to render)