---
license: apache-2.0
language:
- my
pretty_name: Myanmar G12L Benchmark
size_categories:
- n<1K
dataset_info:
  features:
  - name: title
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: type
    dtype: string
  - name: option_a
    dtype: string
  - name: option_b
    dtype: string
  - name: option_c
    dtype: string
  splits:
  - name: test
    num_bytes: 1643102
    num_examples: 962
  download_size: 469838
  dataset_size: 1643102
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# Myanmar G12L Benchmark

Burmese-language matriculation examination questions for benchmarking formal knowledge of literature.
## Dataset Details
Our Burmese-language matriculation examination resource is a comprehensive tool for evaluating and strengthening formal literary knowledge. It features the following question formats: Short Answer, True or False, Metaphor Analysis, Fill-in-the-Blank, Multiple Choice, Long-Form Response, and Meaning Interpretation. Each question was extracted from past examination papers and authoritative exam guides.
- Curated by: Pyae Sone Myo, Min Thein Kyaw, May Myat Noe Aung, Arkar Zaw
- Language(s) (NLP): Burmese
- License: Apache 2.0
## Evaluation

To evaluate models on this benchmark, you can use `ayamytk` (Aya Myanmar Toolkit), which was originally developed for running this benchmark.
- Install the toolkit directly:

```shell
pip install git+https://github.com/Rickaym/aya-my-tk
```
- Run the `ExamEval` evaluation:
```python
from ayamytk.test.bench import evals
from ayamytk.test.bench.sampler.custom_sampler import CustomSampler

def chat(messages):
    # Add your inference code here
    return ...

evals.run(samplers={"your-model": CustomSampler(chat=chat)}, evals="mg12l")
```
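The `chat` callable you pass to `CustomSampler` receives the conversation and must return the model's reply as a string. As a hedged sketch (the OpenAI-style `{"role": ..., "content": ...}` message format is an assumption here; check the toolkit's source for the exact contract), a stub backend might look like:

```python
def chat(messages):
    """Toy chat backend for illustration only.

    Assumes messages are OpenAI-style dicts such as
    {"role": "user", "content": "..."}. Replace the body with a real
    model call (an API request or local inference).
    """
    # Pull out the most recent user turn as the prompt.
    prompt = next(
        (m["content"] for m in reversed(messages) if m["role"] == "user"),
        "",
    )
    # Placeholder reply; a real backend would generate an answer here.
    return f"[model reply to: {prompt}]"

# Example call with a single user message:
reply = chat([{"role": "user", "content": "What is the answer?"}])
```

Swapping this stub for a real inference call is all that is needed to benchmark your own model.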
## Dataset Structure

**Schema overview.** This dataset captures individual exam items with seven core fields:
- `title`: a brief, non-unique identifier for the question
- `question`: the full prompt or stem
- `answer`: the correct response
- `type`: one of `MCQ`, `TOF`, `FIB`, `SHORT_QNA`, `LONG_QNA`, `MEANING_QNA`, or `METAPHOR_QNA`
- `option_a`, `option_b`, `option_c`: the three distractors for multiple-choice items (populated only when `type = MCQ`; otherwise left blank)
Each row corresponds to a single question, and non-MCQ entries simply omit the option fields.
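Because only MCQ rows populate the option fields, downstream code should branch on `type` before assembling answer choices. A minimal sketch (the row below is invented for illustration, not taken from the dataset):

```python
# Each row is a flat dict with the seven fields described above.
row = {
    "title": "Sample item",          # hypothetical example data
    "question": "Which option is correct?",
    "answer": "A",
    "type": "MCQ",
    "option_a": "first distractor",
    "option_b": "second distractor",
    "option_c": "third distractor",
}

def render_prompt(row):
    """Format a row as an exam prompt; only MCQ rows carry options."""
    if row["type"] == "MCQ":
        options = [row["option_a"], row["option_b"], row["option_c"]]
        lines = [row["question"]] + [
            f"{label}. {text}" for label, text in zip("abc", options)
        ]
        return "\n".join(lines)
    # Non-MCQ types (TOF, FIB, SHORT_QNA, ...) use the bare question.
    return row["question"]

prompt = render_prompt(row)
```

The same branching applies when scoring: non-MCQ items are graded against the free-text `answer` field alone.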
## Source Data

### Data Collection and Processing
Steps:
- Google Document OCR
- Manual Extraction and Correction
## Bias, Risks, and Limitations

This dataset captures only the literature subject of the matriculation exam.