---
license: cc-by-4.0
language:
- de
- en
- es
- cs
- fr
- hu
- it
- nl
- pt
- ru
- sq
- sv
tags:
- speech prompts
- text prompts
- instruction following
- benchmark
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: text_prompt
    dtype: string
  - name: audio_prompt_female_1
    dtype: audio
  - name: audio_prompt_female_2
    dtype: audio
  - name: audio_prompt_male_1
    dtype: audio
  - name: audio_prompt_male_2
    dtype: audio
  - name: language
    dtype: string
  - name: task
    dtype: string
  - name: prompt_type
    dtype: string
  splits:
  - name: test
    num_bytes: 2704378267.6
    num_examples: 1320
  download_size: 1772318018
  dataset_size: 2704378267.6
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# Do What I Say (DOWIS): A Spoken Prompt Dataset for Instruction-Following
<span style="background-color:#fee2e2; color:#b91c1c; padding:2px 6px; border-radius:4px; font-size:0.85em; font-weight:600;">NEW</span> DOWIS now also includes spoken and written prompts in Albanian (sq), as well as prompts for the tasks LIPREAD and SLU!

> **TL;DR** — DOWIS is a multilingual dataset of human-recorded spoken and written instruction prompts, designed to enable realistic evaluation of Speech Large Language Models across 11 tasks and 12 languages.
---
## Dataset Summary
Most Speech LLM benchmarks use text-based prompts, a setup that does not reflect how users actually interact with these models in the real world. DOWIS fills this gap by providing human-recorded spoken prompts, paired with their written equivalents, across a wide range of tasks, languages, and prompt styles. Each prompt can be paired directly with any existing speech benchmark to evaluate how well Speech LLMs follow spoken instructions.

The dataset contains **1,320 rows**, with up to 4 audio recordings per row (2 female and 2 male speakers, where available), covering:

- **12 languages**: cs, de, en, es, fr, hu, it, nl, pt, ru, sq, sv
- **11 tasks**: ACHAP, ASR, MT, S2ST, SQA, SSUM, ST, TSUM, TTS, LIPREAD, SLU
- **5 prompt styles**: basic, formal, informal, detailed, short
- **10 prompt variants** per task-language pair

Details can be found in the corresponding paper on [arXiv](https://arxiv.org/abs/2603.09881). Code for benchmarking Speech LLMs on different task benchmarks coupled with DOWIS is available on [GitHub](https://github.com/MaikeZuefle/DOWIS/tree/main).
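As a quick sanity check, the row count follows directly from the structure above: 10 prompt variants for each of the 12 × 11 language-task pairs.

```python
# Sanity check: the row count follows from the dataset's cross-product structure.
# Task codes are written as they appear in the `task` field (lowercase).
languages = ["cs", "de", "en", "es", "fr", "hu", "it", "nl", "pt", "ru", "sq", "sv"]
tasks = ["achap", "asr", "mt", "s2st", "sqa", "ssum", "st", "tsum", "tts", "lipread", "slu"]
variants_per_pair = 10  # 10 prompt variants per task-language pair

num_rows = len(languages) * len(tasks) * variants_per_pair
print(num_rows)  # 1320
```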
---
## Tasks
| Task Code | Description |
|-----------|-------------|
| ACHAP | Audio Chaptering |
| ASR | Automatic Speech Recognition |
| MT | Machine Translation |
| S2ST | Speech-to-Speech Translation |
| SQA | Spoken Question Answering |
| SSUM | Speech Summarization |
| ST | Speech Translation |
| TSUM | Text Summarization |
| TTS | Text-to-Speech |
| LIPREAD | Lip-Reading |
| SLU | Spoken Language Understanding |
## Prompt Styles
| Style | Description |
|-------|-------------|
| `basic` | Natural, everyday phrasing a researcher would use |
| `formal` | Professional, polished language |
| `informal` | Conversational and casual |
| `detailed` | Explicit and precise instructions on how to perform the task |
| `short` | As concise as possible while remaining unambiguous |
---
## Dataset Fields
| Field | Type | Description |
|-------|------|-------------|
| `text_prompt` | `string` | Written version of the instruction prompt |
| `audio_prompt_female_1` | `Audio` | Human-recorded female speaker (speaker 1), `null` if unavailable |
| `audio_prompt_female_2` | `Audio` | Human-recorded female speaker (speaker 2), `null` if unavailable |
| `audio_prompt_male_1` | `Audio` | Human-recorded male speaker (speaker 1), `null` if unavailable |
| `audio_prompt_male_2` | `Audio` | Human-recorded male speaker (speaker 2), `null` if unavailable |
| `language` | `string` | ISO 639-1 language code (e.g. `en`, `de`) |
| `task` | `string` | Task code the prompt is designed for (e.g. `asr`, `mt`) |
| `prompt_type` | `string` | Prompt style: `basic`, `formal`, `informal`, `detailed`, or `short` |
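To illustrate how these fields fit together, here is a minimal sketch of filtering prompts by task, language, and style before pairing them with a benchmark. The rows, the prompt texts, and the `select_prompts` helper are invented for illustration and are not taken from the dataset itself.

```python
# Minimal sketch of selecting prompts by task, language, and style.
# The rows below are invented stand-ins following the card's schema; in
# practice they would come from datasets.load_dataset on the `test` split.
rows = [
    {"text_prompt": "Transcribe the following audio.",
     "language": "en", "task": "asr", "prompt_type": "basic"},
    {"text_prompt": "Please provide a verbatim transcript of the recording.",
     "language": "en", "task": "asr", "prompt_type": "formal"},
    {"text_prompt": "Translate the following sentence.",
     "language": "en", "task": "mt", "prompt_type": "basic"},
]

def select_prompts(rows, task, language, prompt_type=None):
    """Return all rows matching the given task and language (and, optionally, style)."""
    return [
        r for r in rows
        if r["task"] == task
        and r["language"] == language
        and (prompt_type is None or r["prompt_type"] == prompt_type)
    ]

asr_en = select_prompts(rows, task="asr", language="en")
print([r["prompt_type"] for r in asr_en])  # ['basic', 'formal']
```

With the real dataset object, the same predicate can be applied via `Dataset.filter` from the `datasets` library.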
---
## Citation
If you use this work, please cite:
```bibtex
@misc{züfle2026isayspokenprompt,
  title={Do What I Say: A Spoken Prompt Dataset for Instruction-Following},
  author={Maike Züfle and Sara Papi and Fabian Retkowski and Szymon Mazurek and Marek Kasztelnik and Alexander Waibel and Luisa Bentivogli and Jan Niehues},
  year={2026},
  eprint={2603.09881},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2603.09881}
}
```
---
Dataset Contact: maike.zuefle@kit.edu