---
license: cc-by-nc-4.0
task_categories:
- text-generation
- image-to-text
language:
- en
multilinguality:
- monolingual
pretty_name: IMAD
size_categories:
- 1K<n<10K
tags:
- multi-modal
- dialogue
---

## Dataset Description

- **Repository:** [Link to repo](https://github.com/VityaVitalich/IMAD)
- **Paper:** [IMage Augmented multi-modal Dialogue: IMAD](https://arxiv.org/abs/2305.10512v1)
- **Point of Contact:** [Contacts Section](https://github.com/VityaVitalich/IMAD#contacts)

### Dataset Summary

This dataset accompanies the paper [IMage Augmented multi-modal Dialogue: IMAD](https://arxiv.org/abs/2305.10512v1).
Its main feature is the novelty of the task: it was built specifically for image interpretation in a dialogue context.
Selected dialogue utterances have been replaced with images, so a generative model can be trained to restore the original utterance.
The dialogues are sourced from several dialogue datasets (DailyDialog, Commonsense, PersonaChat, MuTual, Empathetic Dialogues, DREAM) and filtered with the technique described in the paper.
A significant portion of the data was labeled by assessors, with a high inter-rater reliability score. Together, these methods yield a well-filtered dataset and, consequently, a high BLEU score.
We hope this dataset will benefit the development of multi-modal deep learning.

### Data Fields

The dataset contains five fields:

- `image_id`: `string`, the ID of the image in the full Unsplash dataset
- `source_data`: `string`, the name of the source dialogue dataset
- `utter`: `string`, the utterance that was replaced with an image in this dialogue
- `context`: `list` of `string`, the sequence of utterances in the dialogue before the replaced utterance
- `image_like`: `int`, indicating whether the sample was collected via assessor labeling or via the filtering technique
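As a quick illustration, a single record can be pictured as a plain Python dict with the five fields above. All values here are hypothetical stand-ins, not drawn from the actual dataset:

```python
# A hypothetical IMAD record mirroring the schema described above.
record = {
    "image_id": "abc123",                # Unsplash photo ID (hypothetical value)
    "source_data": "DailyDialog",        # name of the source dialogue dataset
    "utter": "Look at this sunset over the bay!",  # utterance replaced by the image
    "context": [                         # dialogue turns before the replaced utterance
        "How was your trip to the coast?",
        "Amazing, I took so many photos.",
    ],
    "image_like": 1,                     # flag: assessor-labeled vs. filtered sample
}

# The restoration task: given the preceding context and the image,
# a generative model should produce the replaced utterance.
model_input = {"context": record["context"], "image_id": record["image_id"]}
target = record["utter"]
```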
### Licensing Information

The textual part of IMAD is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). The full dataset with images can be requested by contacting the authors directly, or obtained by matching `image_id` with the full Unsplash dataset.

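The matching step can be sketched as follows. This is a minimal sketch: the column names (`photo_id`, `photo_image_url`) assume the TSV layout of the Unsplash data release, and the sample rows are hypothetical stand-ins for the real file:

```python
import csv
import io

# Hypothetical excerpt standing in for the Unsplash photos TSV file.
unsplash_tsv = (
    "photo_id\tphoto_image_url\n"
    "abc123\thttps://images.unsplash.com/photo-abc123\n"
    "def456\thttps://images.unsplash.com/photo-def456\n"
)

# Build a lookup table from Unsplash photo ID to image URL.
id_to_url = {
    row["photo_id"]: row["photo_image_url"]
    for row in csv.DictReader(io.StringIO(unsplash_tsv), delimiter="\t")
}

# Resolve one IMAD record's image_id (hypothetical value) to its image URL.
image_url = id_to_url.get("abc123")
```

In practice you would read the real Unsplash TSV file from disk instead of the in-memory sample, then iterate over IMAD records and resolve each `image_id` the same way.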
### Contacts

Feel free to reach out to us at [vvmoskvoretskiy@yandex.ru](mailto:vvmoskvoretskiy@yandex.ru) for inquiries, collaboration suggestions, or data requests related to our work.

### Citation Information

To cite this work, please use the following BibTeX reference:

```bibtex
@misc{viktor2023imad,
      title={IMAD: IMage-Augmented multi-modal Dialogue},
      author={Moskvoretskii Viktor and Frolov Anton and Kuznetsov Denis},
      year={2023},
      eprint={2305.10512},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Or via MLA citation:

```
Moskvoretskii, Viktor, et al. "IMAD: IMage-Augmented multi-modal Dialogue." (2023).
```