---
license: apache-2.0
language:
- my
pretty_name: Myanmar G12L Benchmark
size_categories:
- n<1K
dataset_info:
  features:
  - name: title
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: type
    dtype: string
  - name: option_a
    dtype: string
  - name: option_b
    dtype: string
  - name: option_c
    dtype: string
  splits:
  - name: test
    num_bytes: 1643102
    num_examples: 962
  download_size: 469838
  dataset_size: 1643102
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# Myanmar G12L Benchmark

Burmese language matriculation examination questions to benchmark formal knowledge in literature.

## Dataset Details

Our Burmese language matriculation examination resource is a comprehensive tool for evaluating and strengthening formal literary knowledge.
It features the following question formats: Short Answer, True or False, Metaphor Analysis, Fill-in-the-Blank, Multiple Choice, Long-Form Response, and Meaning Interpretation.
Each question has been extracted from past examination papers and authoritative exam guides.

- **Curated by:** [Pyae Sone Myo](https://www.linkedin.com/pyaesonemyo), [Min Thein Kyaw](https://www.instagram.com/jerrytheinkyaw?igsh=OGQ5ZDc2ODk2ZA==), [May Myat Noe Aung](https://www.linkedin.com/in/may-myat-noe-aung/), [Arkar Zaw](https://linkedin.com/in/ar-kar-zaw-6885b720b)
- **Language(s) (NLP):** Burmese
- **License:** Apache 2.0

## Evaluation

To evaluate models on this benchmark, you can use [`ayamytk`](https://github.com/Rickaym/aya-my-tk) (Aya Myanmar Toolkit), which was originally developed for running this benchmark.

1. Install the toolkit directly from GitHub:
```sh
pip install git+https://github.com/Rickaym/aya-my-tk
```

2. Run the `ExamEval` on the `mg12l` benchmark:
```py
from ayamytk.test.bench import evals
from ayamytk.test.bench.sampler.custom_sampler import CustomSampler

def chat(messages):
    # Add your inference code here and return the model's reply as a string
    return ...

evals.run(samplers={"your-model": CustomSampler(chat=chat)}, evals="mg12l")
```

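Before wiring up a real model, it can help to smoke-test the harness with a trivial `chat` callable. The sketch below assumes `chat` receives a list of `{"role": ..., "content": ...}` message dicts and must return a reply string; the echo logic is purely an illustrative placeholder, not the toolkit's API:

```python
def chat(messages):
    # Placeholder: echo the most recent user message instead of calling a model.
    # Replace this body with a real inference call for actual evaluation.
    last_user = next(
        (m["content"] for m in reversed(messages) if m.get("role") == "user"),
        "",
    )
    return f"Echo: {last_user}"

print(chat([{"role": "user", "content": "မင်္ဂလာပါ"}]))  # → Echo: မင်္ဂလာပါ
```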
## Dataset Structure

**Schema overview**

This dataset captures individual exam items with seven core fields:

* **title**: a brief, non-unique identifier for the question
* **question**: the full prompt or stem
* **answer**: the correct response
* **type**: one of `MCQ`, `TOF`, `FIB`, `SHORT_QNA`, `LONG_QNA`, `MEANING_QNA`, or `METAPHOR_QNA`
* **option_a**, **option_b**, **option_c**: the three distractors for multiple-choice items (populated only when `type = MCQ`; otherwise left blank)

Each row corresponds to a single question; non-MCQ entries simply leave the option fields blank.

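As a sketch of how this schema might be consumed, the snippet below gathers the option fields for a row, returning an empty list for non-MCQ items (the record shown is illustrative, not drawn from the dataset):

```python
# Illustrative row following the schema above (not a real dataset record).
row = {
    "title": "illustrative item",
    "question": "…",
    "answer": "correct response",
    "type": "MCQ",
    "option_a": "distractor 1",
    "option_b": "distractor 2",
    "option_c": "distractor 3",
}

def mcq_options(row):
    # Non-MCQ rows leave option_a/b/c blank, so they yield no choices.
    if row["type"] != "MCQ":
        return []
    return [row[k] for k in ("option_a", "option_b", "option_c") if row[k]]

print(mcq_options(row))  # → ['distractor 1', 'distractor 2', 'distractor 3']
```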
### Source Data

#### Data Collection and Processing

Steps:
1. Google Document OCR
2. Manual extraction and correction

## Bias, Risks, and Limitations

This dataset only captures the literature subject of the matriculation exam.