---
language:
- cv
license: cc0-1.0
dataset_info:
  features:
  - name: number
    dtype: string
  - name: text
    dtype: string
  - name: question
    dtype: string
  - name: doc_number
    dtype: string
  - name: doc_name
    dtype: string
  - name: answer
    dtype: string
  - name: options
    sequence: string
  - name: answer_letter
    dtype: string
  splits:
  - name: train
    num_bytes: 53982
    num_examples: 100
  download_size: 31521
  dataset_size: 53982
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
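A record in this dataset follows the feature schema above. The sketch below shows that shape with placeholder values (the text, names, and options are illustrative, not real rows from the dataset):

```python
# Illustrative record matching the feature schema above.
# All values are placeholders, not actual dataset content.
example = {
    "number": "1",
    "text": "An excerpt from a Chuvash literary work...",
    "question": "What is the name of the character?",
    "doc_number": "12",
    "doc_name": "Book title",
    "answer": "Character name",
    "options": [f"Option {i}" for i in range(1, 11)],  # 10 answer options
    "answer_letter": "A",
}

assert set(example) == {
    "number", "text", "question", "doc_number",
    "doc_name", "answer", "options", "answer_letter",
}
assert len(example["options"]) == 10
```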
## Dataset Summary
Recent advances in LLMs have driven remarkable progress, yet their performance remains inconsistent on low-resource languages, highlighting challenges in equitable AI development.
This dataset demonstrates that while LLMs are improving at Chuvash language understanding, fact-based knowledge of Chuvash literature remains a significant unresolved challenge.
The dataset comprises 100 questions designed to assess factual knowledge of Chuvash literature. The task is to identify the name of a character based on a provided text excerpt.
## Models
- Gemma: gemma-3-27b-it
- Gemini: gemini-2.5-flash
- Claude: claude-sonnet-4
- GPT: gpt-4.1
## Evaluation
Four evaluation approaches are used to assess model quality:
- `contains`: an open-ended question.
- `contains_literature`: an open-ended question that additionally states that the text is an excerpt from Chuvash literature.
- `contains_book`: an open-ended question that also provides the title of the book the excerpt is from.
- `options`: a multiple-choice question with 10 answer options.
```python
def make_prompt_contains(text: str, question: str, lang: str) -> str:
    prompt = {
        "en": f"Text:\n\n{text}\n\nQuestion:\n\n{question}\n\n",
        "ru": f"Текст:\n\n{text}\n\nВопрос:\n\n{question}\n\n",
        "cv": f"Текст:\n\n{text}\n\nЫйту:\n\n{question}\n\n"
    }
    return prompt[lang]
```
```python
def make_prompt_contains_literature(text: str, question: str, lang: str) -> str:
    prompt = {
        "en": f"Here is the text from british or american literature. Guess which book it's from and answer the question.\n\nText:\n\n{text}\n\nQuestion:\n\n{question}\n\n",
        "ru": f"Дан отрывок текста из русской литературы. Предположи, из какой книги и ответь на вопрос.\n\nТекст:\n\n{text}\n\nВопрос:\n\n{question}\n\n",
        "cv": f"Чӑваш литературин текст сыпӑкӗ панӑ. Текста хӑш кӗнекерен ҫырнине кала та ыйту ҫине ответле.\n\nТекст:\n\n{text}\n\nЫйту:\n\n{question}\n\n"
    }
    return prompt[lang]
```
```python
def make_prompt_contains_book(text: str, question: str, book: str, lang: str) -> str:
    prompt = {
        "en": f"Here is the text from british or american literature. Based on book name please answer the question.\n\nBook name:{book}\n\nText:\n\n{text}\n\nQuestion:\n\n{question}\n\n",
        "ru": f"Дан отрывок текста из русской литературы. С учетом того, что название книги известно, ответь на вопрос.\n\nНазвание книги:{book}\n\nТекст:\n\n{text}\n\nВопрос:\n\n{question}\n\n",
        "cv": f"Чӑваш литературин текст сыпӑкӗ панӑ. Кӗнеке ячӗ паллӑ пулнине шута илсе ыйту ҫине ответле.\n\nКӗнеке ячӗ:{book}\n\nТекст:\n\n{text}\n\nЫйту:\n\n{question}\n\n"
    }
    return prompt[lang]
```
```python
def convert_option_in_prompt_format(options: list[str]) -> str:
    result = ''
    for i, option in enumerate(options):
        result += f'{chr(65 + i)}: {option}\n'  # A, B, C, ...
    return result.strip()
```
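As a quick illustration, this helper labels the options with consecutive letters starting at `A` (with the dataset's 10 options per question, the labels run A through J). The function is repeated here so the snippet runs on its own; the option list is made up:

```python
def convert_option_in_prompt_format(options: list[str]) -> str:
    result = ''
    for i, option in enumerate(options):
        result += f'{chr(65 + i)}: {option}\n'  # 65 is the code point of 'A'
    return result.strip()

# Made-up options, just to show the formatting.
formatted = convert_option_in_prompt_format(["cat", "dog", "bird"])
print(formatted)
# A: cat
# B: dog
# C: bird
```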
```python
def make_prompt_options(text: str, question: str, options: list[str], lang: str) -> str:
    prompt_options = convert_option_in_prompt_format(options)
    prompt = {
        "en": f"Text:\n\n{text}\n\nQuestion:\n\n{question}\n\nSelect the correct answer from the options. It is IMPORTANT to write ONLY the answer letter in response:\n\n{prompt_options}\n\n",
        "ru": f"Текст:\n\n{text}\n\nВопрос:\n\n{question}\n\nВыбери правильный ответ из предложенных вариантов. ВАЖНО написать в ответе ТОЛЬКО букву ответа:\n\n{prompt_options}\n\n",
        "cv": f"Текст:\n\n{text}\n\nЫйту:\n\n{question}\n\nСӗннӗ вариантсенчен тӗрӗс хурав суйла. Хуравра хурав саспалли ҪЕҪ ҫырни ПӖЛТЕРӖШЛӖ:\n\n{prompt_options}\n\n"
    }
    return prompt[lang]
```
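As a sanity check, the simplest builder (`contains`) just concatenates the labeled text and question. The function is repeated here so the snippet is self-contained; the text and question are made up:

```python
def make_prompt_contains(text: str, question: str, lang: str) -> str:
    prompt = {
        "en": f"Text:\n\n{text}\n\nQuestion:\n\n{question}\n\n",
        "ru": f"Текст:\n\n{text}\n\nВопрос:\n\n{question}\n\n",
        "cv": f"Текст:\n\n{text}\n\nЫйту:\n\n{question}\n\n"
    }
    return prompt[lang]

# Made-up English example, just to show the resulting prompt.
p = make_prompt_contains("He walked home.", "Who walked home?", "en")
assert p == "Text:\n\nHe walked home.\n\nQuestion:\n\nWho walked home?\n\n"
```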
## Languages
For the purpose of comparative analysis, we also gathered examples in high-resource languages: English and Russian (10 examples per language), all presented in an analogous format.
Additional dataset: `chuvash_llm_testset_ru_en`
## Results
All model answers are published: see the `result` folder.
File names follow the pattern `{model_name}.{eval_type}.{language}.txt`, with answers ordered by the `number` field.
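A small helper that reproduces this naming scheme (hypothetical, not part of the repository):

```python
def result_filename(model_name: str, eval_type: str, language: str) -> str:
    # Mirrors the {model_name}.{eval_type}.{language}.txt pattern described above.
    return f"{model_name}.{eval_type}.{language}.txt"

print(result_filename("gemma-3-27b-it", "contains_book", "cv"))
# gemma-3-27b-it.contains_book.cv.txt
```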
### Best results
The best results were achieved with the `contains_book` evaluation type; see the figure at the top.
| Model | Language | Correct (%) |
|---|---|---|
| Gemma | en | 60 |
| Gemini | en | 60 |
| Claude | en | 90 |
| GPT | en | 100 |
| Gemma | ru | 60 |
| Gemini | ru | 50 |
| Claude | ru | 70 |
| GPT | ru | 80 |
| Gemma | cv | 2 |
| Gemini | cv | 1 |
| Claude | cv | 4 |
| GPT | cv | 1 |
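Averaging the `contains_book` scores above per language makes the gap explicit:

```python
# Scores copied from the contains_book table above, in model order
# Gemma, Gemini, Claude, GPT.
scores = {
    "en": [60, 60, 90, 100],
    "ru": [60, 50, 70, 80],
    "cv": [2, 1, 4, 1],
}
averages = {lang: sum(v) / len(v) for lang, v in scores.items()}
print(averages)
# {'en': 77.5, 'ru': 65.0, 'cv': 2.0}
```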
### Other results
#### contains
| Model | Language | Correct (%) |
|---|---|---|
| Gemma | en | 0 |
| Gemini | en | 10 |
| Claude | en | 40 |
| GPT | en | 100 |
| Gemma | ru | 0 |
| Gemini | ru | 20 |
| Claude | ru | 30 |
| GPT | ru | 50 |
| Gemma | cv | 1 |
| Gemini | cv | 1 |
| Claude | cv | 1 |
| GPT | cv | 3 |
#### contains_literature
| Model | Language | Correct (%) |
|---|---|---|
| Gemma | en | 40 |
| Gemini | en | 70 |
| Claude | en | 90 |
| GPT | en | 100 |
| Gemma | ru | 20 |
| Gemini | ru | 60 |
| Claude | ru | 70 |
| GPT | ru | 70 |
| Gemma | cv | 2 |
| Gemini | cv | 4 |
| Claude | cv | 4 |
| GPT | cv | 1 |
#### options
| Model | Language | Correct (%) |
|---|---|---|
| Gemma | en | 20 |
| Gemini | en | 40 |
| Claude | en | 60 |
| GPT | en | 80 |
| Gemma | ru | 0 |
| Gemini | ru | 30 |
| Claude | ru | 30 |
| GPT | ru | 50 |
| Gemma | cv | 0 |
| Gemini | cv | 5 |
| Claude | cv | 0 |
| GPT | cv | 0 |
## Code
### Access to models via OpenRouter
```python
import requests
import json

OPENROUTER_API_KEY = "YOUR KEY"

def evaluate_openrouter(prompt: str, model_name: str) -> str:
    response = requests.post(
        url="https://openrouter.ai/api/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {OPENROUTER_API_KEY}",
            # "HTTP-Referer": YOUR_SITE_URL,  # Optional: include your app in openrouter.ai rankings.
            # "X-Title": YOUR_APP_NAME,       # Optional: shown in rankings on openrouter.ai.
        },
        data=json.dumps({
            "model": model_name,
            "messages": [
                {"role": "user", "content": prompt}
            ]
        })
    )
    data = response.json()
    return data["choices"][0]["message"]["content"]
```
### Checking correctness
```python
def check_correct_contains(answer: str, llm_answer: str) -> bool:
    return answer in llm_answer
```
```python
def check_correct_options(answer: str, llm_answer: str, options: list[str]) -> bool:
    llm_answer = llm_answer.strip()
    # The model must reply with exactly one option letter (A, B, C, ...);
    # ord() would raise a TypeError on a multi-character reply.
    if len(llm_answer) == 1 and llm_answer.isalpha():
        index = ord(llm_answer.lower()) - ord('a')
        if 0 <= index < len(options):
            return options[index] == answer
    return False
```
```python
def check_correct(eval_mode: str, json_object: dict) -> bool:
    answer = json_object["answer"]
    llm_answer = json_object["llm_answer"]
    if eval_mode in ("contains", "contains_literature", "contains_book"):
        return check_correct_contains(answer, llm_answer)
    elif eval_mode == "options":
        return check_correct_options(answer, llm_answer, json_object["options"])
    else:
        raise ValueError(f"Invalid eval_mode: {eval_mode}")
```
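A worked example of the `options` scoring path, using a checker equivalent to the one above (repeated so the snippet is self-contained; the option list is made up):

```python
def check_correct_options(answer: str, llm_answer: str, options: list[str]) -> bool:
    llm_answer = llm_answer.strip()
    if len(llm_answer) == 1 and llm_answer.isalpha():
        index = ord(llm_answer.lower()) - ord('a')
        if 0 <= index < len(options):
            return options[index] == answer
    return False

# Made-up option list for illustration.
options = ["Setner", "Narspi", "Mikhedir"]
assert check_correct_options("Narspi", "B", options) is True    # B -> options[1]
assert check_correct_options("Narspi", "A", options) is False   # wrong letter
assert check_correct_options("Narspi", "Narspi", options) is False  # must be a single letter
```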
