---
language:
- ar
license: apache-2.0
task_categories:
- multiple-choice
- question-answering
pretty_name: Arabic Accounting MCQ Evaluation Dataset
tags:
- accounting
- mcq
- arabic
- evaluation
- benchmark
- education
dataset_info:
  features:
  - name: id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  - name: text
    dtype: string
  - name: choices
    list: string
  - name: gold
    dtype: int64
  splits:
  - name: test
    num_bytes: 379104
    num_examples: 167
  download_size: 113685
  dataset_size: 379104
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
|
|
|
|
|
# Arabic Accounting MCQ Evaluation Dataset |
|
|
|
|
|
Test split of an Arabic accounting multiple-choice QA benchmark, with answer choices given as English letters.
|
|
|
|
|
## Dataset Structure

- **Format**: Multiple-choice questions (4 options)
- **Language**: Arabic questions with English letter choices
- **Domain**: Accounting and finance
- **Test split**: 167 examples (10% of the full dataset)
|
|
|
|
|
## Fields

- `id`: Unique identifier
- `query`: Full MCQ prompt
- `answer`: Correct answer letter (`a`, `b`, `c`, or `d`)
- `text`: Question text
- `choices`: The option letters `['a', 'b', 'c', 'd']`
- `gold`: Index of the correct answer (0–3)
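For concreteness, here is a record shaped like the schema above (all values are illustrative, not drawn from the dataset):

```python
# Illustrative example record; only the shape matches the dataset schema,
# the values are made up.
example = {
    "id": "q-001",
    "query": "...",   # full Arabic MCQ prompt (question plus lettered options)
    "answer": "b",    # correct answer letter
    "text": "...",    # question text alone
    "choices": ["a", "b", "c", "d"],
    "gold": 1,        # index of "b" in choices
}

# Invariant relating the fields: the gold index points at the answer letter.
assert example["choices"][example["gold"]] == example["answer"]
```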
|
|
|
|
|
## Answer Mapping

- `a` → `gold: 0`
- `b` → `gold: 1`
- `c` → `gold: 2`
- `d` → `gold: 3`
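Since the mapping is just the alphabetical index of the letter, it can be computed rather than hard-coded; a minimal sketch:

```python
CHOICES = ["a", "b", "c", "d"]

def letter_to_gold(letter: str) -> int:
    """Map an answer letter ('a'-'d') to its gold index (0-3)."""
    return CHOICES.index(letter.strip().lower())

def gold_to_letter(gold: int) -> str:
    """Map a gold index (0-3) back to its answer letter."""
    return CHOICES[gold]
```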
|
|
|
|
|
## Usage |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("SahmBenchmark/arabic-accounting-mcq_eval")

# This repository contains a single test split
test_data = dataset["test"]

# Simple exact-match evaluation; `model` is a placeholder for your own model,
# expected to return one of "a", "b", "c", "d".
correct = 0
for example in test_data:
    model_answer = model.generate(example["query"])
    if model_answer.strip().lower() == example["answer"]:
        correct += 1

accuracy = correct / len(test_data)
```
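If your model returns free-form text rather than a bare letter, you may want to normalize its output before comparing it against `answer`. A heuristic sketch (the function name and regex are assumptions, not part of the dataset):

```python
import re

def extract_letter(output):
    """Return the first standalone a/b/c/d in the model output, or None."""
    match = re.search(r"\b([abcd])\b", output.lower())
    return match.group(1) if match else None
```

This catches outputs like "The answer is B." while ignoring the letters inside ordinary words; adapt it to your model's actual answer format.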
|
|
|
|
|
For training data, see: `SahmBenchmark/arabic-accounting-mcq_train` |
|
|
|