# CCFQA

CCFQA is a speech and text factuality evaluation benchmark that measures language models’ ability to answer short, fact-seeking questions and to assess their cross-lingual and cross-modal consistency. It consists of speech and text in 8 languages, containing 1,800 n-way parallel sentences and a total of 14,400 speech samples.

- **Languages**: Mandarin Chinese, English, French, Japanese, Korean, Russian, Spanish, Cantonese (HK)
- **ISO 639-3 Codes**: cmn, eng, fra, jpn, kor, rus, spa, yue
- **Data Size**: 14,400 samples
- **Data Split**: Test
- **Data Source**: Native speakers (6 male and 6 female)
- **Domain**: Factuality Evaluation
- **Task**: Spoken Question Answering (SQA)
- **License**: CC BY-NC-SA-4.0
📄 Paper: [https://arxiv.org/abs/2508.07295](https://arxiv.org/abs/2508.07295)

## How to use

```python
from datasets import load_dataset

ccfqa = load_dataset(...)  # pass this dataset's Hugging Face Hub repository id
print(ccfqa)
```

For more details, please visit the [GitHub page](https://github.com/yxduir/ccfqa).
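Cross-lingual consistency can be scored, for instance, as pairwise agreement of a model's answers to the same parallel question across languages. The sketch below is only an illustration of that idea under a hypothetical exact-match criterion; it is not CCFQA's official metric, and the answer dictionary is made-up data.

```python
from itertools import combinations

def consistency(answers: dict[str, str]) -> float:
    """Fraction of language pairs whose normalized answers match exactly.
    Hypothetical metric for illustration only, not CCFQA's official scoring."""
    norm = {lang: ans.strip().lower() for lang, ans in answers.items()}
    pairs = list(combinations(norm.values(), 2))
    if not pairs:
        return 0.0
    return sum(a == b for a, b in pairs) / len(pairs)

# Toy example: one model's answers to a single parallel question.
# Only the eng/fra pair agrees, so 1 of 3 pairs match.
print(consistency({"eng": "Paris", "fra": "paris", "cmn": "London"}))
```

A real evaluation would also need answer normalization per language and a matching rule robust to paraphrase; see the GitHub page for the benchmark's actual evaluation code.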
# 🖊 Citation

```
@misc{du2025ccfqabenchmarkcrosslingualcrossmodal,
      title={{CCFQA}: A Benchmark for Cross-Lingual and Cross-Modal Speech and Text Factuality Evaluation},
      author={Yexing Du and Kaiyuan Liu and Youcheng Pan and Zheng Chu and Bo Yang and Xiaocheng Feng and Ming Liu and Yang Xiang},
      year={2025},
      eprint={2508.07295},
      archivePrefix={arXiv},
}
```