---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- en
tags:
- shouts
- emotional_speech
- distance_speech
- smartphone_recordings
- nonsense_phrases
- non-native_accents
- regional_accents
pretty_name: B(asic) E(motion) R(andom phrase) S(hou)t(s)
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 48000
  - name: user_id
    dtype: string
  - name: age
    dtype: string
  - name: current_language
    dtype: string
  - name: first_language
    dtype: string
  - name: gender
    dtype: string
  - name: phone_model
    dtype: string
  - name: audio_id
    dtype: string
  - name: affect
    dtype: string
  - name: last_modified
    dtype: string
  - name: phone_position
    dtype: string
  - name: script
    dtype: string
  - name: shout_level
    dtype: string
  splits:
  - name: train
    num_bytes: 956572664.583
    num_examples: 3503
  - name: test
    num_bytes: 140282965.0
    num_examples: 532
  - name: validation
    num_bytes: 143434236.0
    num_examples: 488
  download_size: 1055596177
  dataset_size: 1240289865.583
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---


# BERSt Dataset

We release the BERSt Dataset for various speech tasks, including Automatic Speech Recognition (ASR) and Speech Emotion Recognition (SER).
 
[Read the paper here](https://arxiv.org/abs/2505.00059)

## Overview

* 4,526 single-phrase recordings (~3.75 h)
* 98 professional actors
* 19 phone positions
* 7 emotion classes
* 3 vocal intensity levels
* varied regional and non-native English accents
* nonsense phrases covering all English phonemes
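
The card's configuration above lists the available features: 48 kHz audio plus string metadata such as `affect`, `phone_position`, `script`, and `shout_level`. A minimal loading sketch with the Hugging Face `datasets` library is shown below; the repository ID is a placeholder and should be replaced with this dataset's actual Hub path.

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual Hub path of this dataset.
ds = load_dataset("your-org/BERSt")

print(ds)  # DatasetDict with train / test / validation splits

example = ds["train"][0]
print(example["script"], example["affect"], example["shout_level"])
print(example["audio"]["sampling_rate"])  # 48000 per the dataset config
```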

## Data collection

The BERSt dataset was collected in home environments using a variety of smartphone microphones (the phone model is available as metadata).
The recordings come from professional actors around the globe and cover varied regional English accents (UK, Canada, multi-state USA, Australia), as well as a subset of non-native English speakers whose first languages include French, Russian, Hindi, and others.
The data includes 13 nonsense phrases for use cases that require robustness to linguistic context and high surprisal.
Participants were prompted to speak, raise their voice, and shout each phrase while moving their phone to various distances and locations in their home, as well as with various obstructions to the microphone, e.g. inside a backpack.

Baseline results of various state-of-the-art methods for ASR and SER show that this dataset remains a challenging task, and we encourage researchers to use this data to fine-tune and benchmark their models in these difficult conditions representing possible real-world situations.

Affect annotations are those provided to the actors as prompts; they have not been validated through perception studies.
The speech annotations, however, have been checked and adjusted to reflect any mistakes the actors made in the spoken phrases.

## Data splits and organisation

For each phone position and phrase, the actors provided a single recording covering the three vocal intensity levels; these raw audio files are available.

Metadata in CSV format corresponds to the files split per utterance, with noise and silence before and after the speech removed; these clips are found inside `clean_clips` for each data split.

We provide train, test, and validation splits.

There is no speaker cross-over between splits; the test and validation sets each contain 10 speakers not seen in the training set.
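
Because the affect, distance, and vocal-intensity information is exposed as string metadata on each example, condition-specific subsets can be selected directly. The sketch below is illustrative and assumes `ds` from the loading example above; the exact label strings for `shout_level` and `phone_position` should be checked against the metadata CSVs.

```python
from collections import Counter

# Keep only validation recordings at one vocal intensity level (label string assumed).
shouted_val = ds["validation"].filter(lambda ex: ex["shout_level"] == "shout")

# Distribution of phone positions in the test split; labels come from the metadata CSVs.
positions = Counter(ds["test"]["phone_position"])
print(positions.most_common(5))
```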

## Baseline Results

Automatic speech recognition: word error rate (WER), character error rate (CER), and phone error rate (PER).

| Model                             | WER ↓   | CER ↓   | PER ↓   |
|----------------------------------|---------|---------|---------|
| Whisper - medium.en              | **17.27%** | 7.81%   | 7.80%   |
| Whisper - turbo                  | 17.93%  | **7.28%** | **7.30%** |
| NeMo Quartznet                   | 39.49%  | 15.24%  | 15.77%  |
| NeMo Fastconformer Transducer   | 24.96%  | 10.72%  | 10.13%  |
| Wav2Vec2-Base-960h               | 49.65%  | 18.94%  | 19.90%  |

![emotion_wer_plot](./figures/png2pdf-1-1-1.png)
![distance_wer_plot](./figures/png2pdf-3-1-1.png)
![shout_wer_plot](./figures/png2pdf-4-1-1.png)
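
The error rates above can be reproduced for a new model with any standard WER/CER implementation. The sketch below uses the `jiwer` package purely as an illustration (it is not necessarily the evaluation code used in the paper); PER additionally requires converting references and hypotheses to phone sequences with a phonemizer before scoring.

```python
import jiwer

# Ground-truth scripts from the metadata and the corresponding model transcripts.
references = ["the quick brown fox"]
hypotheses = ["the quick brown fox jumps"]

wer = jiwer.wer(references, hypotheses)
cer = jiwer.cer(references, hypotheses)
print(f"WER: {wer:.2%}  CER: {cer:.2%}")
```

Text normalisation (casing, punctuation, number formatting) strongly affects these scores, so the same normalisation should be applied to references and hypotheses before comparison.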

Speech emotion recognition: unweighted accuracy (UA) and weighted accuracy (WA).

| Model                  | UA ↑   | WA ↑   |
|------------------------|--------|--------|
| SpeechBrain Wav2Vec2   | 20.7%  | 20.8%  |
| DAWN-hidden-SVM        | **32.1%** | **32.2%** |
| Wav2Small-VAD-SVM*     | 23.3%  | 22.3%  |

*Teacher model

SVM indicates an SVM trained on the hidden layers or VAD output; see the paper for details.
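
As a rough illustration of the "-SVM" baselines, the sketch below fits a linear SVM on pre-extracted utterance-level embeddings and reports UA/WA. The embedding extraction itself (hidden layers or VAD outputs of the upstream model) is omitted, and the file names and array shapes are assumptions, not part of the released data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import balanced_accuracy_score, accuracy_score

# Hypothetical precomputed features: (n_utterances, embedding_dim) arrays and affect labels.
X_train, y_train = np.load("train_emb.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_emb.npy"), np.load("test_labels.npy")

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("UA:", balanced_accuracy_score(y_test, pred))  # unweighted: mean per-class recall
print("WA:", accuracy_score(y_test, pred))           # weighted: overall accuracy
```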

## Metadata Details

* Actor count
  * 98
* Gender counts
  * Woman: 61
  * Man: 34
  * Non-Binary: 1
  * Prefer not to disclose: 2
* Current daily language counts
  * English: 95
  * Norwegian: 1
  * Russian: 1
  * French: 1
* First language counts
  * English: 75
  * Non-English: 23
    * Spanish: 6
    * French: 3
    * Portuguese: 3
    * Chinese: 2
    * Norwegian: 1
    * Mandarin: 1
    * Tagalog: 1
    * Italian: 1
    * Hungarian: 1
    * Russian: 1
    * Hindi: 1
    * Swahili: 1
    * Croatian: 1

Pre-split data counts:
* Emotion counts
  * fear: 236
  * neutral: 234
  * disgust: 232
  * joy: 224
  * anger: 223
  * surprise: 210
  * sadness: 201
* Distance counts:
  * Near body: 627
  * 1-2m away: 324
  * Other side of room: 316
  * Outside of room: 293
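
These per-category counts can be recomputed from the loaded dataset's metadata fields; a short sketch, assuming `ds` from the loading example above:

```python
from collections import Counter

# Affect distribution across all splits (string labels from the `affect` field).
affect_counts = Counter()
for split in ("train", "validation", "test"):
    affect_counts.update(ds[split]["affect"])
print(affect_counts.most_common())
```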


Cite as:  

```bibtex
@article{tuttösí2025berstingscreamsbenchmarkdistanced, 
      title={BERSting at the Screams: A Benchmark for Distanced, Emotional and Shouted Speech Recognition}, 
      author={Paige Tuttösí and Mantaj Dhillon and Luna Sang and Shane Eastwood and Poorvi Bhatia and Quang Minh Dinh and Avni Kapoor and Yewon Jin and Angelica Lim},
      journal = {Computer Speech \& Language},
      volume = {95},
      pages = {101815},
      year = {2026},
      issn = {0885-2308},
      doi = {https://doi.org/10.1016/j.csl.2025.101815},
      url = {https://www.sciencedirect.com/science/article/pii/S0885230825000403},
}
```