---
license: mit
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: sentence
      dtype: string
    - name: duration
      dtype: float64
    - name: language
      dtype: string
  splits:
    - name: en
      num_bytes: 87640124
      num_examples: 438
    - name: zh
      num_bytes: 131476237
      num_examples: 489
  download_size: 218759346
  dataset_size: 219116361
configs:
  - config_name: default
    data_files:
      - split: en
        path: data/en-*
      - split: zh
        path: data/zh-*
task_categories:
  - automatic-speech-recognition
  - audio-classification
language:
  - en
  - zh
tags:
  - audio
  - asr
size_categories:
  - n<1K
---

# WESR-Bench

WESR-Bench is an expert-annotated natural speech dataset with word-level non-verbal vocal event annotations. It covers both discrete events (standalone, denoted `[tag]`) and continuous events (overlapping with speech, denoted `<tag>...</tag>`).

## Supported Tags

Discrete events (16):

- `inhale`, `cough`, `laughs`, `laughing`, `crowd_laughter`, `chuckle`, `shout`, `sobbing`, `cry`, `giggle`, `exhale`, `sigh`, `clear_throat`, `roar`, `scream`, `breathing`

Continuous events (6):

- `crying`, `laughing`, `panting`, `shouting`, `singing`, `whispering`
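The two tag syntaxes above can be extracted from a transcript with a small parser. The sketch below is illustrative, not an official tool: the regexes and the example sentence are assumptions based only on the `[tag]` / `<tag>...</tag>` notation described in this card.

```python
import re

# Discrete events stand alone in the transcript, e.g. "[cough]".
DISCRETE_RE = re.compile(r"\[([a-z_]+)\]")
# Continuous events wrap a stretch of speech, e.g. "<laughing>...</laughing>";
# the back-reference \1 requires matching open/close tags.
CONTINUOUS_RE = re.compile(r"<([a-z_]+)>(.*?)</\1>", re.DOTALL)

def extract_events(sentence: str):
    """Return (discrete_tags, continuous_spans) found in a transcript."""
    discrete = DISCRETE_RE.findall(sentence)
    continuous = [(tag, text.strip()) for tag, text in CONTINUOUS_RE.findall(sentence)]
    return discrete, continuous

# Hypothetical transcript mixing both event types.
d, c = extract_events("[sigh] I told you <laughing>that was a bad idea</laughing> [cough]")
# d == ["sigh", "cough"]; c == [("laughing", "that was a bad idea")]
```

Such a parser can be used, for example, to check predicted transcripts for well-formed tags before scoring them against the reference annotations.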

## Evaluation

See the evaluation code and guidelines on GitHub.

## Citation

If you find WESR-Bench helpful in your research, please cite our paper:

```bibtex
@misc{yang2026wesrscalingevaluatingwordlevel,
      title={WESR: Scaling and Evaluating Word-level Event-Speech Recognition},
      author={Chenchen Yang and Kexin Huang and Liwei Fan and Qian Tu and Botian Jiang and Dong Zhang and Linqi Yin and Shimin Li and Zhaoye Fei and Qinyuan Cheng and Xipeng Qiu},
      year={2026},
      eprint={2601.04508},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.04508},
}
```