---
license: mit
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: sentence
    dtype: string
  - name: duration
    dtype: float64
  - name: language
    dtype: string
  splits:
  - name: en
    num_bytes: 87640124.0
    num_examples: 438
  - name: zh
    num_bytes: 131476237.0
    num_examples: 489
  download_size: 218759346
  dataset_size: 219116361.0
configs:
- config_name: default
  data_files:
  - split: en
    path: data/en-*
  - split: zh
    path: data/zh-*
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- en
- zh
tags:
- audio
- asr
size_categories:
- n<1K
---

# WESR-Bench

WESR-Bench is an expert-annotated natural speech dataset with word-level annotations of non-verbal vocal events. It covers both discrete events (standalone, denoted `[tag]`) and continuous events (mixed with speech, denoted `<tag>...</tag>`).
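
The metadata above defines four features (`audio`, `sentence`, `duration`, `language`) and two language splits (`en` and `zh`). Below is a minimal loading sketch with the Hugging Face `datasets` library; the repo id `your-org/WESR-Bench` is a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# "your-org/WESR-Bench" is a hypothetical repo id; replace it with
# the actual Hub path of this dataset.
ds = load_dataset("your-org/WESR-Bench")

# The card defines per-language splits ("en", "zh") rather than train/test.
en = ds["en"]
print(len(en))  # 438 examples per the card metadata

example = en[0]
print(example["sentence"], example["duration"], example["language"])

# The audio feature decodes on access to a dict with "array" and "sampling_rate".
audio = example["audio"]
print(audio["sampling_rate"], audio["array"].shape)
```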

## Supported Tags

**Discrete events (16):**
- `inhale`, `cough`, `laughs`, `laughing`, `crowd_laughter`, `chuckle`, `shout`, `sobbing`, `cry`, `giggle`, `exhale`, `sigh`, `clear_throat`, `roar`, `scream`, `breathing`

**Continuous events (6):**
- `crying`, `laughing`, `panting`, `shouting`, `singing`, `whispering`
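
As a minimal illustration of the annotation format, the sketch below extracts both event types with regular expressions; the example sentence is hypothetical, not an actual dataset entry:

```python
import re

# Hypothetical sentence in the WESR-Bench annotation style:
# discrete events stand alone as [tag]; continuous events wrap
# the speech they overlap as <tag>...</tag>.
sentence = "[cough] I can't believe it <laughing>that was amazing</laughing> [sigh]"

# Standalone discrete events, e.g. [cough]
discrete = re.findall(r"\[(\w+)\]", sentence)

# Continuous events together with the speech they span
continuous = re.findall(r"<(\w+)>(.*?)</\1>", sentence)

print(discrete)    # ['cough', 'sigh']
print(continuous)  # [('laughing', 'that was amazing')]
```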

## Evaluation

Evaluation code and guidelines are available on [GitHub](https://github.com/Cr-Fish/WESR).

## Citation

If you find WESR-Bench helpful in your research, please cite our paper:

```
@misc{yang2026wesrscalingevaluatingwordlevel,
      title={WESR: Scaling and Evaluating Word-level Event-Speech Recognition}, 
      author={Chenchen Yang and Kexin Huang and Liwei Fan and Qian Tu and Botian Jiang and Dong Zhang and Linqi Yin and Shimin Li and Zhaoye Fei and Qinyuan Cheng and Xipeng Qiu},
      year={2026},
      eprint={2601.04508},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.04508}, 
}
```