orrzohar committed · Commit ac9fb2d · 0 Parent(s)

Duplicate from orrzohar/EMID-Emotion-Matching

Co-authored-by: Orr Zohar <orrzohar@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,147 @@
+ ---
+ dataset_name: EMID-Emotion-Matching
+ annotations_creators:
+ - expert-generated
+ language:
+ - en
+ license: cc-by-nc-sa-4.0
+ pretty_name: EMID Music ↔ Image Emotion Matching Pairs
+ tags:
+ - audio
+ - music
+ - image
+ - multimodal
+ - emotion
+ - contrastive-learning
+ task_categories:
+ - audio-classification
+ - image-classification
+ - visual-question-answering
+ ---
+
+ # EMID-Emotion-Matching
+
+ `orrzohar/EMID-Emotion-Matching` is a derived dataset built on top of
+ the **Emotionally paired Music and Image Dataset (EMID)** from ECNU (`ecnu-aigc/EMID`).
+ It is designed for *music ↔ image emotion matching* with Qwen-Omni-style models.
+
+ Each example contains:
+
+ - `audio`: mono waveform stored as `datasets.Audio` (the HF Hub preview can play it)
+ - `sampling_rate`: sampling rate used when decoding (typically 16 kHz)
+ - `image`: a single image (`datasets.Image`)
+ - `same`: `bool`, whether the audio and image are labeled with the **same** emotion
+ - `emotion`: normalized image emotion tag (e.g. `amusement`, `excitement`) for positive pairs; empty string for negatives
+ - `question`: natural-language question used to prompt the model (several templates are mixed)
+ - `answer`: canonical supervision text (`yes - {emotion}` for positives, `no` for negatives)
+
+ | column | type | description |
+ | -------------- | ------------------------------- | ----------- |
+ | `audio` | `datasets.Audio` (16 kHz mono) | decoded waveform; the HF UI can play it |
+ | `sampling_rate`| `int32` | explicit sample rate mirrored beside the `audio` column |
+ | `image` | `datasets.Image` | PIL.Image-compatible object |
+ | `same` | `bool` | `True` if the pair is emotion-aligned |
+ | `emotion` | `string` | normalized emotion label for positives, `""` otherwise |
+ | `question` | `string` | user prompt template |
+ | `answer` | `string` | canonical supervision text (`yes - {emotion}` / `no`) |
+
+ Each original EMID row has one music clip and up to **three** tagged images
+ (`Image1`, `Image2`, `Image3`). For each `(audio, image)` pair we create:
+
+ - **1 positive example**: the audio and its own tagged image (`same = True`, `emotion = image_tag`)
+ - **`NEGATIVES_PER_POSITIVE` = 1 negative example**: the same audio paired with an image drawn
+   from a *different* emotion tag (`same = False`, `emotion = ""`)
+
+ With `MAX_SOURCE_ROWS = 4000`, this yields ~24,000 examples (positives + negatives),
+ which we then split into:
+
+ - `train`: 19,200 examples
+ - `test`: 4,800 examples
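The pair construction above can be sketched in a few lines. `build_pairs`, the row layout, and the pool shape here are illustrative assumptions, not the actual `prepare_emid_pairs.py` code:

```python
import random

def build_pairs(rows, image_pool, negatives_per_positive=1, seed=0):
    """One positive pair per tagged image, plus negatives whose image
    is sampled from a different emotion tag."""
    rng = random.Random(seed)
    pairs = []
    for row in rows:
        for image, tag in row["images"]:          # up to three (image, tag) entries
            pairs.append({"audio": row["audio"], "image": image,
                          "same": True, "emotion": tag})
            other_tags = [t for t in image_pool if t != tag]
            if not other_tags:                    # degenerate pool: no negative possible
                continue
            for _ in range(negatives_per_positive):
                neg_tag = rng.choice(other_tags)
                pairs.append({"audio": row["audio"],
                              "image": rng.choice(image_pool[neg_tag]),
                              "same": False, "emotion": ""})
    return pairs
```

Given a pair, the canonical `answer` text follows directly: `"yes - " + p["emotion"]` when `p["same"]` is true, `"no"` otherwise.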
+ ## Source Data (EMID)
+
+ The base EMID dataset is described in:
+
+ - **Emotionally paired Music and Image Dataset (EMID)**
+   *Y. Guo, J. Li, et al.*
+   arXiv:2308.07622
+   <https://arxiv.org/abs/2308.07622>
+
+ EMID contains 10,738 unique music clips, each paired with three images in the same
+ emotional category, plus rich annotations:
+
+ - `Audio_Filename`: unique filename of the music clip
+ - `genre`: a letter A–M, one of 13 emotional categories
+ - `feeling`: distribution of free-form feelings reported by listeners (% per feeling)
+ - `emotion`: ratings on 11 emotional dimensions (1–9)
+ - `Image{1,2,3}_filename`: matched image filenames
+ - `Image{1,2,3}_tag`: image emotion category (e.g. `amusement`, `excitement`)
+ - `Image{1,2,3}_text`: GIT-generated captions
+ - `is_original_clip`: whether this is an original or expanded clip
+
+ For more details, see the EMID README and the paper above.
+
+ ## How This Derived Dataset Was Built
+
+ The script `prepare_emid_pairs.py` performs the following steps offline:
+
+ 1. Load `ecnu-aigc/EMID` (train split) and decode:
+    - `Audio_Filename` with `datasets.Audio(decode=True)`
+    - `Image{1,2,3}_filename` with `datasets.Image(decode=True)`
+ 2. Optionally cap the number of source rows with `MAX_SOURCE_ROWS` (default 4000).
+ 3. Build an **image pool** keyed by normalized emotion tags.
+ 4. For each EMID row and each available image (up to 3 per row):
+    - Create a positive pair `(audio, image, same=True, emotion=image_tag)`.
+    - Sample `NEGATIVES_PER_POSITIVE` images from *different* emotion tags to form negatives.
+ 5. Normalize the emotion strings (lowercase; replace spaces and punctuation with `_`).
+ 6. Draw a random question from a small set of Qwen-style templates and attach it as `question`.
+ 7. Store the mono waveform as `datasets.Audio` and the image as `datasets.Image` so
+    that downstream scripts can call `datasets.load_dataset` without extra decoding logic.
+ 8. Split into train/test with `TRAIN_FRACTION = 0.8`.
+
+ This yields a simple, flat structure that is convenient for SFT / contrastive training
+ with Qwen2.5-Omni (or other multimodal LMs), without re-doing negative sampling or
+ audio/image decoding inside notebooks.
+
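Step 5's normalization amounts to something like the following; `normalize_emotion` is a hypothetical helper, and the exact rule in the script may differ:

```python
import re

def normalize_emotion(tag: str) -> str:
    # Lowercase, then collapse any run of spaces/punctuation into one "_"
    # and trim leading/trailing underscores.
    return re.sub(r"[^a-z0-9]+", "_", tag.lower()).strip("_")
```

This maps, for example, `"Amusement"` to `amusement` and a tag like `"Awe / Wonder"` to `awe_wonder`, so pool keys stay consistent across differently formatted source tags.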
+ ## Suggested Usage
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("orrzohar/EMID-Emotion-Matching")
+ train_ds = ds["train"]
+ test_ds = ds["test"]
+
+ ex = train_ds[0]
+ audio = ex["audio"]        # dict with "array" + "sampling_rate"
+ sr = ex["sampling_rate"]   # int
+ image = ex["image"]        # PIL.Image.Image
+ same = ex["same"]          # bool
+ emotion = ex["emotion"]    # str
+ question = ex["question"]  # str
+ answer = ex["answer"]      # str
+ ```
+
+ In the Qwen-Omni demos, we typically:
+
+ - Use `question` as the user prompt,
+ - Provide `audio` and `image` as multimodal inputs, and
+ - Supervise the model with the provided `answer` (or regenerate your own phrasing from `same`/`emotion`).
+
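Packing one row into a chat turn can be sketched as below. `build_conversation` is a hypothetical helper, and the content-part keys follow the commonly used Qwen2.5-Omni conversation layout; check your processor's documentation for the exact schema it expects:

```python
def build_conversation(example):
    # Assemble a Qwen-Omni-style conversation from one dataset row:
    # user turn carries audio + image + question, assistant turn the answer.
    return [
        {"role": "user", "content": [
            {"type": "audio", "audio": example["audio"]},
            {"type": "image", "image": example["image"]},
            {"type": "text", "text": example["question"]},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text": example["answer"]},
        ]},
    ]
```

For SFT, the assistant turn supplies the supervision target; for inference, drop it and let the model generate.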
+ ## License
+
+ This derived dataset **inherits the license** from EMID:
+
+ - **CC BY-NC-SA 4.0** (Attribution-NonCommercial-ShareAlike 4.0 International)
+
+ You **must**:
+
+ - Use the data only for **non-commercial** purposes.
+ - Provide appropriate **attribution** to the EMID authors and this derived dataset.
+ - Distribute derivative works under the **same license**.
+
+ Please refer to the full license text for details:
+ <https://creativecommons.org/licenses/by-nc-sa/4.0/>
+
+ If you use this dataset in academic work, please cite the EMID paper and, if appropriate,
+ this derived dataset as well.
data/test-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3214a77080aab8c905cc231fdb836bdd3858bbc30c16cd25c5ea8cec50e5ec91
+ size 481318721
data/test-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f91f0583c76a7479dd2298298e9c1d244ff65729b92aaf80a8513452353397e9
+ size 480439052
data/test-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ab2dfc0fd36b5a51016581b05034afb47c0771196d95f777ccf6001660e37b0
+ size 479094630
data/test-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b9842b845764db9d52edc9b37cb5b97fdfe9b774baecff9a7aab5585c7b1924
+ size 481201757
data/test-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de217d152ec96bba145e9d57b1643075335a9e46dc408511b28c8b4af257dfaf
+ size 483062610
data/test-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e723b0fe6648f17dd8da0aa5e574bfb6622150061a8e36b1090e56239a587b4
+ size 481053977
data/train-00000-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:034281dd98b9e2fa06fc18c77ec4fee0540b8fdf933842c5af3c25b9e40a05cd
+ size 481962350
data/train-00001-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d2fc6290f2f7d6384447a941e6750afe64ef1b09e9381c0e4e006c85c4433be
+ size 476584295
data/train-00002-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60f64883b4fca0278d5787a6b2bc7109b7740c286b66f6511b61dd8f80025fd7
+ size 482126028
data/train-00003-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94038d5468f17e34906d205d70d845bb09e2d46996dea270d4728096387cc1f7
+ size 480842201
data/train-00004-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2317443fc917f1bf790e57969ad4a793a72c60c5730b5304c93d33c826494139
+ size 480841901
data/train-00005-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4941199f77de62d086aa965be96621d0012c533c1108c15754b40ccfc5c79be2
+ size 480555786
data/train-00006-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d87f7ebd43877bc6c112a0c787fb8183602eec2c289abe37fb9df9cd3a069bcf
+ size 484025735
data/train-00007-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa645a38c3547cdb0962c4b4bbf095b37ec42c3827e131a59a4c36e9988674be
+ size 479932160
data/train-00008-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc51d26f900c39be688a26bba3554ce31d3036619c21db66317c13605b45c1f7
+ size 480652781
data/train-00009-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0301264322a4fd8b608ff0398dacef4630a49df082b42c2c35366ed555da4c2a
+ size 482040522
data/train-00010-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7c0ebefe2e8a9631d5b0eb539c91253d3dc2649724f4a85d8e44c841a5767fc
+ size 480533063
data/train-00011-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d85044762873eb7938b57dbeee378ba50807323023101d9382680112a33efa2
+ size 481226650
data/train-00012-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31659ca50d34920a4692583dee47b7153c9059fa68d7c438790e61ff43e981ec
+ size 483026063
data/train-00013-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96600823f84023f0de9ef4bb0837451713ed7d61eb427277c0b1999295b2cc8f
+ size 482428998
data/train-00014-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f8b5d9122d29460a4cc0008bfec5723465f5eb17ef3659b56d2b37f5c8c27f5
+ size 481359008
data/train-00015-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d544b92effe8e7218b130904326ff184cb2bbcb0e563dfae98d8e593dd1e241f
+ size 476961091
data/train-00016-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ad015512f5be3e19cf58e814be91de6df59fa520610918206423bec50630c02
+ size 476998992
data/train-00017-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa4609a21ee486a66cf61e9825644ab5e3bbe66ff5185487d85b7787e02f07c2
+ size 480633782
data/train-00018-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76b293c70257622ffb3f3c0d6c06ebe8785a910221624818b8064bd34a0e0233
+ size 480742078
data/train-00019-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a999b061bd955b07310f9d43e2a8ec482adee1d3e6f9f45573439e6f1e33daaf
+ size 482667027
data/train-00020-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62bcd1b3421368312fb036c517f812690b87e9ae9b391ead11b57fc289878e0a
+ size 481501140
data/train-00021-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13bbfb1b5375538dbf99024e1bfb9bb3523504442b2367854d43161ca389672f
+ size 481523537
data/train-00022-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f97c47147d5c20919338c3cecab863008f0596fe4d33d24f6dc67ae0c87d068
+ size 479278310
data/train-00023-of-00024.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:495672d6eb03a00ffef266fcdd068ee1e761cf3665312c69fc7d4751f83efe20
+ size 481687516