---
language:
- en
- zh
task_categories:
- text-classification
- token-classification
license: cc-by-4.0
---
# TeleVRSLUBench
## Dataset Description
**TeleVRSLUBench** is a spoken language understanding (SLU) benchmark that incorporates **visual scene information and explicit reasoning processes** for **joint intent detection and slot filling**.
The dataset is proposed in the paper:
> *Introducing Visual Scenes and Reasoning: A More Realistic Benchmark for Spoken Language Understanding*
It is, to the best of our knowledge, the **first SLU benchmark** that integrates scene-level visual context and reasoning information to support more realistic spoken language understanding.
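Joint intent detection and slot filling are typically scored with intent accuracy and span-level slot F1. The sketch below illustrates that scoring on a toy example; the field names (`intent`, `slots`) and example values are illustrative assumptions, not the dataset's actual schema.

```python
# Minimal sketch of joint intent detection and slot filling evaluation.
# The record layout here is a hypothetical illustration, not the
# dataset's real schema.

def slot_f1(gold_slots, pred_slots):
    """Span-level F1 between gold and predicted (slot, value) pairs."""
    gold_set, pred_set = set(gold_slots), set(pred_slots)
    if not gold_set or not pred_set:
        return 0.0
    tp = len(gold_set & pred_set)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    return 2 * precision * recall / (precision + recall)

gold = {"intent": "play_music", "slots": [("artist", "Adele"), ("song", "Hello")]}
pred = {"intent": "play_music", "slots": [("artist", "Adele")]}

intent_correct = gold["intent"] == pred["intent"]   # True
f1 = slot_f1(gold["slots"], pred["slots"])          # 1 of 2 gold slots found
```

In practice, SLU benchmarks also report sentence-level semantic accuracy (intent and all slots correct simultaneously), which the same comparison extends to directly.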
The official code repository is available at:
https://github.com/Tele-AI/TeleVRSLUBench
---
## Citation
If you find **TeleVRSLUBench** helpful to your research, please consider citing the following papers. Since the benchmark builds on **ProSLU**, please also cite the original ProSLU paper. We sincerely thank the original authors for their valuable contributions.
```bibtex
@article{wu2025introducing,
title = {Introducing Visual Scenes and Reasoning: A More Realistic Benchmark for Spoken Language Understanding},
author = {Wu, Di and Jiang, Liting and Fang, Ruiyu and Xie, Hongyan and Su, Haoxiang and Huang, Hao and He, Zhongjiang and Song, Shuangyong and Li, Xuelong and others},
journal = {arXiv preprint arXiv:2511.19005},
year = {2025}
}
@inproceedings{xu2022text,
title = {Text is No More Enough! A Benchmark for Profile-Based Spoken Language Understanding},
author = {Xu, Xiao and Qin, Libo and Chen, Kaiji and Wu, Guoxing and Li, Linlin and Che, Wanxiang},
booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {36},
number = {10},
pages = {11575--11585},
year = {2022}
}
```