---
dataset_name: EMID-Emotion-Matching
annotations_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
pretty_name: EMID Music ↔ Image Emotion Matching Pairs
tags:
- audio
- music
- image
- multimodal
- emotion
- contrastive-learning
task_categories:
- audio-classification
- image-classification
- visual-question-answering
---

# EMID-Emotion-Matching

`orrzohar/EMID-Emotion-Matching` is a derived dataset built on top of
the **Emotionally paired Music and Image Dataset (EMID)** from ECNU (`ecnu-aigc/EMID`).
It is designed for *music ↔ image emotion matching* with Qwen-Omni-style models.

Each example contains:

- `audio`: mono waveform stored as `datasets.Audio` (the HF Hub preview can play it)
- `sampling_rate`: sampling rate used when decoding (typically 16 kHz)
- `image`: a single image (`datasets.Image`)
- `same`: `bool`, whether the audio and image are labeled with the **same** emotion
- `emotion`: normalized image emotion tag (e.g. `amusement`, `excitement`) for positive pairs; empty string for negatives
- `question`: natural-language question used to prompt the model (several templates are mixed)
- `answer`: canonical supervision text (`yes - {emotion}` for positives, `no` for negatives)

| column          | type                           | description                                             |
| --------------- | ------------------------------ | ------------------------------------------------------- |
| `audio`         | `datasets.Audio` (16 kHz mono) | decoded waveform; the HF UI can play it                 |
| `sampling_rate` | `int32`                        | explicit sample rate mirrored beside the `Audio` column |
| `image`         | `datasets.Image`               | `PIL.Image`-compatible object                           |
| `same`          | `bool`                         | `True` if the pair is emotion-aligned                   |
| `emotion`       | `string`                       | normalized emotion label for positives, `""` otherwise  |
| `question`      | `string`                       | user prompt template                                    |
| `answer`        | `string`                       | canonical supervision text (`yes - {emotion}` / `no`)   |

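
The `answer` column is a deterministic function of `same` and `emotion`, which makes it easy to re-derive or validate after filtering rows. A minimal sketch of that mapping (the helper name is ours, not part of the dataset):

```python
def canonical_answer(same: bool, emotion: str) -> str:
    """Reproduce the dataset's canonical supervision text."""
    # Positives carry the normalized emotion tag; negatives are always "no".
    return f"yes - {emotion}" if same else "no"
```
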
|
| The original EMID row has one music clip and up to **three** tagged images |
| (`Image1`, `Image2`, `Image3`). For each `(audio, image)` pair we create: |
|
|
| - **1 positive example**: the audio and its own tagged image (`same = True`, `emotion = image_tag`) |
| - **NEGATIVES_PER_POSITIVE = 1 negative example**: the same audio paired with an image drawn |
| from a *different* emotion tag (`same = False`, `emotion = ""`) |
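
The negative-sampling step can be sketched as follows (the pool layout and helper name are assumptions, not the exact script internals):

```python
import random

def sample_negative(pool, positive_tag, rng=random):
    # pool: dict mapping a normalized emotion tag -> list of candidate images.
    # Draw one image from any non-empty tag other than the positive's.
    candidates = [t for t, imgs in pool.items() if t != positive_tag and imgs]
    tag = rng.choice(candidates)
    return rng.choice(pool[tag])
```
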

With `MAX_SOURCE_ROWS = 4000`, this yields ~24,000 examples (positives plus negatives),
which we then split into:

- `train`: 19,200 examples
- `test`: 4,800 examples

## Source Data (EMID)

The base EMID dataset is described in:

- **Emotionally paired Music and Image Dataset (EMID)**
  *Y. Guo, J. Li, et al.*, arXiv:2308.07622
  <https://arxiv.org/abs/2308.07622>

EMID contains 10,738 unique music clips, each paired with three images in the same
emotional category, plus rich annotations:

- `Audio_Filename`: unique filename of the music clip
- `genre`: a letter A–M, one of 13 emotional categories
- `feeling`: distribution of free-form feelings reported by listeners (% per feeling)
- `emotion`: ratings on 11 emotional dimensions (1–9)
- `Image{1,2,3}_filename`: matched image filenames
- `Image{1,2,3}_tag`: image emotion category (e.g. `amusement`, `excitement`)
- `Image{1,2,3}_text`: GIT-generated captions
- `is_original_clip`: whether this is an original or an expanded clip

For more details, see the EMID README and the paper above.

## How This Derived Dataset Was Built

The script `prepare_emid_pairs.py` performs the following steps offline:

1. Load `ecnu-aigc/EMID` (train split) and decode:
   - `Audio_Filename` with `datasets.Audio(decode=True)`
   - `Image{1,2,3}_filename` with `datasets.Image(decode=True)`
2. Optionally cap the number of source rows with `MAX_SOURCE_ROWS` (default 4000).
3. Build an **image pool** keyed by normalized emotion tags.
4. For each EMID row and each available image (up to 3 per row):
   - Create a positive pair `(audio, image, same=True, emotion=image_tag)`.
   - Sample `NEGATIVES_PER_POSITIVE` images from *different* emotion tags to form negatives.
5. Normalize the emotion strings (lowercase; replace spaces and punctuation with `_`).
6. Draw a random question from a small set of Qwen-style templates and attach it as `question`.
7. Store the mono waveform as `datasets.Audio` and the image as `datasets.Image` so
   that downstream scripts can call `datasets.load_dataset` without extra decoding logic.
8. Split into train/test with `TRAIN_FRACTION = 0.8`.

This yields a simple, flat structure that is convenient for SFT or contrastive training
with Qwen2.5-Omni (or other multimodal LMs), without redoing negative sampling or
audio/image decoding inside notebooks.

## Suggested Usage

```python
from datasets import load_dataset

ds = load_dataset("orrzohar/EMID-Emotion-Matching")
train_ds = ds["train"]
test_ds = ds["test"]

ex = train_ds[0]
audio = ex["audio"]          # dict with "array" and "sampling_rate"
sr = ex["sampling_rate"]     # int
image = ex["image"]          # PIL.Image.Image
same = ex["same"]            # bool
emotion = ex["emotion"]      # str
question = ex["question"]    # str
answer = ex["answer"]        # str
```

In the Qwen-Omni demos, we typically:

- use `question` as the user prompt,
- provide `audio` and `image` as multimodal inputs, and
- supervise the model with the provided `answer` (or regenerate your own phrasing from `same`/`emotion`).

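Putting those three pieces together, one way to assemble a chat-style training sample is sketched below. The nested content keys follow the conversation-list convention used in Qwen2.5-Omni examples; the exact keys your processor expects may differ, so treat this as a template rather than the canonical format:

```python
def build_conversation(ex):
    # User turn carries the multimodal inputs plus the question text;
    # the assistant turn carries the canonical supervision text.
    return [
        {"role": "user", "content": [
            {"type": "audio", "audio": ex["audio"]["array"],
             "sampling_rate": ex["sampling_rate"]},
            {"type": "image", "image": ex["image"]},
            {"type": "text", "text": ex["question"]},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text": ex["answer"]},
        ]},
    ]
```
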
## License

This derived dataset **inherits its license** from EMID:

- **CC BY-NC-SA 4.0** (Attribution-NonCommercial-ShareAlike 4.0 International)

Under this license you **must**:

- use the data only for **non-commercial** purposes,
- provide appropriate **attribution** to the EMID authors and this derived dataset, and
- distribute derivative works under the **same license**.

Please refer to the full license text for details:
<https://creativecommons.org/licenses/by-nc-sa/4.0/>

If you use this dataset in academic work, please cite the EMID paper and, if appropriate,
this derived dataset as well.