# Luganda Linguistic Knowledge (LLK) Benchmark
Tests whether the model actually knows the Luganda language rules it is supposed to teach. Structured around CEFR levels with 75% of questions at foundational levels (A1–B1), heavily weighted toward Morphology & Concord (30%) and Syntax (25%) given Luganda's 12-noun-class agreement system. Includes C1–C2 stress tests for cultural context and advanced grammar. 100 mixed questions per language: multiple-choice (51), short-form (47), and true/false (2).
## Configurations
This dataset has two language configurations:
- `english`: questions written in English (about Luganda)
- `luganda`: the same questions translated into Luganda
## Columns
| Column | Description |
|---|---|
| `id` | Stable question identifier (e.g. `LUG_RET_001`) |
| `category` | Linguistic category (e.g. Phonics & Orthography, Morphology & Concord) |
| `question` | Question text (options inlined for multiple-choice items) |
| `answer_type` | `multiple_choice`, `short_form`, or `true_false` |
| `expected_answers` | List of acceptable answers |
| `rubric_id` | Scoring rubric (e.g. `multiple_choice`, `exact_match`) |
| `source_ref` | References to linguistic sources (e.g. Handbook of Luganda) |
| `framework_tag` | SVR (Simple View of Reading) component being assessed |
| `linguistic_feature` | Specific linguistic feature targeted |
| `proficiency_level` | CEFR level (A1, A2, B1, B2, C1, C2) |
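To make the schema concrete, here is a minimal sketch: a hypothetical row (all field values invented for illustration, shaped to match the column types above) and a simple scorer, assuming that both the `multiple_choice` and `exact_match` rubrics reduce to case- and whitespace-insensitive matching against `expected_answers`. The authoritative scoring code is in the source repository.

```python
# Hypothetical example row (invented values) matching the column schema above.
example_row = {
    "id": "LUG_RET_001",
    "category": "Morphology & Concord",
    "question": "Placeholder question text with options A-D inlined.",
    "answer_type": "multiple_choice",
    "expected_answers": ["A"],
    "rubric_id": "multiple_choice",
    "source_ref": "Handbook of Luganda",
    "framework_tag": "SVR",
    "linguistic_feature": "noun-class agreement",
    "proficiency_level": "A1",
}

def score(prediction: str, expected_answers: list[str]) -> bool:
    """Sketch scorer: a prediction is correct if it matches any expected
    answer after trimming whitespace and ignoring case."""
    normalized = prediction.strip().casefold()
    return any(normalized == ans.strip().casefold() for ans in expected_answers)

print(score("a", example_row["expected_answers"]))  # -> True
```

In practice a harness would dispatch on `rubric_id`; this sketch collapses both rubrics into one comparison because `expected_answers` already lists every acceptable string.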
## Source & methodology
Built by AI for Education as part of the Small Language Model finetuning project with Crane AI Labs. See the source repository for the full evaluation harness, scoring code, and reproducibility notes.