---
license: cc-by-4.0
language:
- de
- en
- es
- cs
- fr
- hu
- it
- nl
- pt
- ru
- sq
- sv
tags:
- speech prompts
- text prompts
- instruction following
- benchmark
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: text_prompt
configs:
- split: test
  path: data/test-*
---

# Do What I Say (DOWIS): A Spoken Prompt Dataset for Instruction-Following

<span style="background-color:#fee2e2; color:#b91c1c; padding:2px 6px; border-radius:4px; font-size:0.85em; font-weight:600;">NEW</span> DOWIS now also contains spoken and written prompts in Albanian (sq), as well as prompts for the tasks LIPREAD and SLU!

> **TL;DR:** DOWIS is a multilingual dataset of human-recorded spoken and written instruction prompts, designed to enable realistic evaluation of Speech Large Language Models across 11 tasks and 12 languages.

---

## Dataset Summary

Most Speech LLM benchmarks use text-based prompts, which do not reflect how users actually interact with these models in the real world. DOWIS fills this gap by providing human-recorded spoken prompts, paired with their written equivalents, across a wide range of tasks, languages, and prompt styles. Each prompt can be paired directly with any existing speech benchmark to evaluate how well Speech LLMs follow spoken instructions.

The dataset contains **1,320 rows**, with up to 4 audio recordings per row (2 female and 2 male speakers where available), covering:

- **12 languages**: cs, de, en, es, fr, hu, it, nl, pt, ru, sq, sv
- **11 tasks**: ACHAP, ASR, MT, S2ST, SQA, SSUM, ST, TSUM, TTS, LIPREAD, SLU
- **5 prompt styles**: basic, formal, informal, detailed, short
- **10 prompt variants** per task-language pair

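These counts are internally consistent: 12 languages × 11 tasks × 10 variants per pair gives exactly 1,320 rows. (Full coverage of every task-language pair is an inference from the arithmetic, not an explicit statement of this card.) A quick sanity check:

```python
# Sanity check: full coverage (every task-language pair, 10 variants each)
# would exactly explain the advertised row count. Full coverage is an
# assumption inferred from the numbers, not stated by the dataset card.
languages = ["cs", "de", "en", "es", "fr", "hu", "it", "nl", "pt", "ru", "sq", "sv"]
tasks = ["ACHAP", "ASR", "MT", "S2ST", "SQA", "SSUM", "ST", "TSUM", "TTS", "LIPREAD", "SLU"]
variants_per_pair = 10

expected_rows = len(languages) * len(tasks) * variants_per_pair
print(expected_rows)  # 1320
```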
Details can be found in the corresponding paper on [arXiv](https://arxiv.org/abs/2603.09881).

Code for benchmarking Speech LLMs on different task benchmarks coupled with DOWIS prompts can be found on [GitHub](https://github.com/MaikeZuefle/DOWIS/tree/main).
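The linked repository contains the actual evaluation code; purely as an illustrative sketch, selecting a prompt variant for a given task, language, and style might look like the snippet below. The `rows` list and its prompt texts are made-up stand-ins that follow the dataset schema, not real DOWIS entries, and `pick_prompt` is a hypothetical helper:

```python
import random

# Illustrative stand-ins for DOWIS rows (real rows also carry the four
# audio_prompt_* columns, omitted here for brevity).
rows = [
    {"text_prompt": "Please transcribe the following audio.",
     "language": "en", "task": "asr", "prompt_type": "basic"},
    {"text_prompt": "Kindly provide a transcription of the recording.",
     "language": "en", "task": "asr", "prompt_type": "formal"},
    {"text_prompt": "Translate the following sentence into German.",
     "language": "en", "task": "mt", "prompt_type": "basic"},
]

def pick_prompt(rows, task, language, prompt_type, seed=0):
    """Pick one prompt variant for a (task, language, style) combination."""
    candidates = [r for r in rows
                  if r["task"] == task
                  and r["language"] == language
                  and r["prompt_type"] == prompt_type]
    # Seeded choice keeps an evaluation run reproducible across reruns.
    return random.Random(seed).choice(candidates)

prompt = pick_prompt(rows, task="asr", language="en", prompt_type="basic")
print(prompt["text_prompt"])  # Please transcribe the following audio.
```

The selected `text_prompt` (or one of the matching audio recordings) can then be prepended to a benchmark item to form the model input.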

---

## Tasks

| Task Code | Description |
|-----------|-------------|
| ACHAP | Audio Chaptering |
| ASR | Automatic Speech Recognition |
| MT | Machine Translation |
| S2ST | Speech-to-Speech Translation |
| SQA | Spoken Question Answering |
| SSUM | Speech Summarization |
| ST | Speech Translation |
| TSUM | Text Summarization |
| TTS | Text-to-Speech |
| LIPREAD | Lip-Reading |
| SLU | Spoken Language Understanding |

## Prompt Styles

| Style | Description |
|-------|-------------|
| `basic` | Natural, everyday phrasing a researcher would use |
| `formal` | Professional, polished language |
| `informal` | Conversational and casual |
| `detailed` | Explicit and precise instructions on how to perform the task |
| `short` | As concise as possible while remaining unambiguous |

---

## Dataset Fields

| Field | Type | Description |
|-------|------|-------------|
| `text_prompt` | `string` | Written version of the instruction prompt |
| `audio_prompt_female_1` | `Audio` | Human-recorded female speaker (speaker 1), `null` if unavailable |
| `audio_prompt_female_2` | `Audio` | Human-recorded female speaker (speaker 2), `null` if unavailable |
| `audio_prompt_male_1` | `Audio` | Human-recorded male speaker (speaker 1), `null` if unavailable |
| `audio_prompt_male_2` | `Audio` | Human-recorded male speaker (speaker 2), `null` if unavailable |
| `language` | `string` | ISO 639-1 language code (e.g. `en`, `de`) |
| `task` | `string` | Task code the prompt is designed for (e.g. `asr`, `mt`) |
| `prompt_type` | `string` | Prompt style: `basic`, `formal`, `informal`, `detailed`, or `short` |

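Because any of the four speaker recordings can be `null`, downstream code needs a fallback when picking a recording. A minimal sketch (the `row` dictionary below is a hypothetical stand-in; in the real dataset the audio columns hold decoded audio values rather than filename strings):

```python
# Column names taken from the field table above.
AUDIO_COLUMNS = [
    "audio_prompt_female_1",
    "audio_prompt_female_2",
    "audio_prompt_male_1",
    "audio_prompt_male_2",
]

def first_available_audio(row):
    """Return the first non-null speaker recording, or None if all are missing."""
    for col in AUDIO_COLUMNS:
        if row.get(col) is not None:
            return row[col]
    return None

# Hypothetical row where only the second male speaker is available.
row = {"audio_prompt_female_1": None, "audio_prompt_female_2": None,
       "audio_prompt_male_1": None, "audio_prompt_male_2": "male_2.wav"}
print(first_available_audio(row))  # male_2.wav
```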
---

## Citation

If you use this work, please cite:

```bibtex
@misc{züfle2026isayspokenprompt,
      title={Do What I Say: A Spoken Prompt Dataset for Instruction-Following},
      author={Maike Züfle and Sara Papi and Fabian Retkowski and Szymon Mazurek and Marek Kasztelnik and Alexander Waibel and Luisa Bentivogli and Jan Niehues},
      year={2026},
      eprint={2603.09881},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2603.09881}
}
```

---

Dataset Contact: maike.zuefle@kit.edu