---
configs:
  - config_name: kyrgyz_rc_all
    data_files:
      - split: validation
        path: kyrgyz_rc_all/validation-00000-of-00001.parquet
      - split: test
        path: kyrgyz_rc_all/test-00000-of-00001.parquet

  - config_name: kyrgyz_rc_literature
    data_files:
      - split: validation
        path: kyrgyz_rc_literature/validation-00000-of-00001.parquet
      - split: test
        path: kyrgyz_rc_literature/test-00000-of-00001.parquet

  - config_name: kyrgyz_rc_math
    data_files:
      - split: validation
        path: kyrgyz_rc_math/validation-00000-of-00001.parquet
      - split: test
        path: kyrgyz_rc_math/test-00000-of-00001.parquet

  - config_name: kyrgyz_rc_news
    data_files:
      - split: validation
        path: kyrgyz_rc_news/validation-00000-of-00001.parquet
      - split: test
        path: kyrgyz_rc_news/test-00000-of-00001.parquet

  - config_name: kyrgyz_rc_wiki
    data_files:
      - split: validation
        path: kyrgyz_rc_wiki/validation-00000-of-00001.parquet
      - split: test
        path: kyrgyz_rc_wiki/test-00000-of-00001.parquet
tags:
- kyrgyz
- llm-benchmark
- evaluation
- reading-comprehension
- kyrgyzllm-bench
language:
- ky
---

# KyrgyzRC (Kyrgyz Reading Comprehension)

**KyrgyzRC** is a native, multiple-choice reading comprehension benchmark for the Kyrgyz language. It contains **400 questions** designed to test factual understanding, inference, and reasoning.

## 🏔️ Part of the KyrgyzLLM-Bench

This dataset is a core component of the **KyrgyzLLM-Bench**, a comprehensive suite for evaluating LLMs in Kyrgyz.

* **Main Paper:** [Bridging the Gap in Less-Resourced Languages: Building a Benchmark for Kyrgyz Language Models](https://ieeexplore.ieee.org/document/11206960)
* **Hugging Face Hub:** [https://huggingface.co/TTimur](https://huggingface.co/TTimur)
* **GitHub Project:** [https://github.com/golden-ratio/kyrgyzLLM_bench](https://github.com/golden-ratio/kyrgyzLLM_bench)

## 📋 Dataset Description

This benchmark evaluates a model's ability to understand and reason about text passages. Each question has 4 options, with 1 correct answer.

* **Sources:** Passages are sourced from Kyrgyz Wikipedia, national news articles, literature, and school-style math word problems.
* **Skills Evaluated:** Factual understanding, inference, vocabulary-in-context, and multi-sentence reasoning.
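The subsets above can be loaded with the Hugging Face `datasets` library. This is a minimal sketch: the repo id (`TTimur/kyrgyzRC`) and config names come from this card, but the column names of each example (e.g. `question`, `options`, `answer`) are assumptions — inspect the loaded dataset for the actual schema.

```python
# Sketch of loading a KyrgyzRC subset. Config names are taken from the
# `configs` section of this card; field names are NOT guaranteed by the card.
CONFIGS = [
    "kyrgyz_rc_all",
    "kyrgyz_rc_literature",
    "kyrgyz_rc_math",
    "kyrgyz_rc_news",
    "kyrgyz_rc_wiki",
]

def load_subset(config: str = "kyrgyz_rc_all"):
    """Load one KyrgyzRC config; each ships 'validation' and 'test' splits."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config}")
    # Imported lazily so this module can be inspected without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("TTimur/kyrgyzRC", config)

# Example usage (downloads the Parquet files on first call):
# ds = load_subset("kyrgyz_rc_wiki")
# print(ds["test"][0])  # inspect the real field names here
```

Selecting `kyrgyz_rc_all` gives the combined 400-question benchmark; the domain-specific configs are useful for per-source error analysis.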

## 📜 Citation

If you find this dataset useful in your research, please cite the main project paper:

```bibtex
@article{KyrgyzLLM-Bench,
  title={Bridging the Gap in Less-Resourced Languages: Building a Benchmark for Kyrgyz Language Models},
  author={Timur Turatali and Aida Turdubaeva and Islam Zhenishbekov and Zhoomart Suranbaev and Anton Alekseev and Rustem Izmailov},
  year={2025},
  url={https://huggingface.co/datasets/TTimur/kyrgyzRC}
}
```