---
language:
- cmn
- eng
- fra
- jpn
- kor
- rus
- spa
- yue
license: cc-by-nc-sa-4.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- audio-text-to-text
pretty_name: CCFQA
library_name: datasets
tags:
- multilingual
- cross-modal
- speech
- text
- factuality
- evaluation
---

# CCFQA
CCFQA is a speech and text factuality evaluation benchmark that measures language models' ability to answer short, fact-seeking questions and assesses their cross-lingual and cross-modal consistency. It covers speech and text in 8 languages, comprising 1,800 n-way parallel sentences and 14,000 speech samples in total.
- **Languages**: Mandarin Chinese, English, French, Japanese, Korean, Russian, Spanish, Cantonese (HK)
- **ISO-3 Codes**: cmn, eng, fra, jpn, kor, rus, spa, yue

📄 Paper: [https://arxiv.org/abs/2508.07295](https://arxiv.org/abs/2508.07295)

## How to use
```python
from datasets import load_dataset

# Download and load the CCFQA benchmark from the Hugging Face Hub.
ccfqa = load_dataset("yxdu/ccfqa")
print(ccfqa)
```

## ⚖️ Evals

For evaluation code and instructions, please visit the [GitHub page](https://github.com/yxduir/ccfqa).


## License

The dataset is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/), which allows use, sharing, and adaptation for **non-commercial** purposes only, with proper attribution and distribution under the same terms.

## 🖊 Citation

```bibtex
@misc{du2025ccfqabenchmarkcrosslingualcrossmodal,
      title={CCFQA: A Benchmark for Cross-Lingual and Cross-Modal Speech and Text Factuality Evaluation}, 
      author={Yexing Du and Kaiyuan Liu and Youcheng Pan and Zheng Chu and Bo Yang and Xiaocheng Feng and Yang Xiang and Ming Liu},
      year={2025},
      eprint={2508.07295},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.07295}, 
}
```