---
language:
- zho
- eng
- fra
- jpn
- kor
- rus
- spa
- yue
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
- audio-text-to-text
pretty_name: CCFQA
library_name: datasets
tags:
- factuality
- evaluation
---
|
|
|
|
|
# CCFQA |
|
|
CCFQA is a speech and text factuality evaluation benchmark that measures language models’ ability to answer short, fact-seeking questions and assesses their cross-lingual and cross-modal consistency. It covers speech and text in 8 languages, comprising 1,800 n-way parallel questions and a total of 14,400 speech samples.
|
|
- **Languages**: Mandarin Chinese, English, French, Japanese, Korean, Russian, Spanish, Cantonese (HK)
|
|
- **ISO 639-3 Codes**: cmn, eng, fra, jpn, kor, rus, spa, yue
|
|
- **Data Size**: 14,400 samples
|
|
- **Data Split**: Test |
|
|
- **Data Source**: Native speakers (6 males and 6 females) |
|
|
- **Domain**: Factuality Evaluation |
|
|
- **Task**: Spoken Question Answering (SQA)
|
|
- **License**: CC BY-NC-SA-4.0 |
|
|
|
|
|
📄 Paper: [https://arxiv.org/abs/2508.07295](https://arxiv.org/abs/2508.07295)
|
|
|
|
|
## How to use |
|
|
```python
from datasets import load_dataset

# Download CCFQA from the Hugging Face Hub and print the split overview
ccfqa = load_dataset("yxdu/ccfqa")
print(ccfqa)
```
|
|
|
|
|
## ⚖️ Evals |
|
|
|
|
|
For evaluation scripts and instructions, please visit the [GitHub page](https://github.com/yxduir/ccfqa).
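The repository above hosts the official evaluation code. As a rough, self-contained illustration only (not the paper's exact metric), cross-lingual consistency on an n-way parallel benchmark like CCFQA can be sketched as the mean pairwise agreement between languages on per-question correctness:

```python
from itertools import combinations

def cross_lingual_consistency(results):
    """Toy consistency score: results maps a language code to a list of
    per-question correctness booleans, with all lists n-way parallel
    (same length, same question order). Returns the mean agreement rate
    over all language pairs and questions."""
    agree, total = 0, 0
    for lang_a, lang_b in combinations(results, 2):
        for x, y in zip(results[lang_a], results[lang_b]):
            agree += (x == y)
            total += 1
    return agree / total

# Hypothetical correctness labels for 3 languages and 4 parallel questions
toy = {
    "eng": [True, True, False, True],
    "fra": [True, False, False, True],
    "jpn": [True, True, False, False],
}
print(cross_lingual_consistency(toy))
```

A model that answers every parallel question the same way in all languages scores 1.0; disagreements between any pair of languages lower the score.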
|
|
|
|
|
|
|
|
## 🖊 Citation
|
|
|
|
|
```bibtex
@misc{du2025ccfqabenchmarkcrosslingualcrossmodal,
  title={{CCFQA}: A Benchmark for Cross-Lingual and Cross-Modal Speech and Text Factuality Evaluation},
  author={Yexing Du and Kaiyuan Liu and Youcheng Pan and Zheng Chu and Bo Yang and Xiaocheng Feng and Ming Liu and Yang Xiang},
  year={2025},
  eprint={2508.07295},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.07295},
}
```