Update README.md

README.md (CHANGED)

- split: train
  path: mmlu_refined.csv
---

# EXAONE-4.0-1.2B-Quantized-MMLU Evaluation Results

This repository contains the refined evaluation results for the **EXAONE-4.0-1.2B-GPTQ** model on the MMLU benchmark.

## Overview

The evaluation measures the model's multitask language understanding across 57 subjects. To ensure stable, consistently formatted outputs, we used **5-shot** prompting.
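For context, a 5-shot MMLU prompt prepends five solved examples from the subject's dev split to the test question. The sketch below is illustrative only; the exact templates used by the evaluation harness differ in detail, and the helper names are ours:

```python
# Illustrative 5-shot MMLU prompt construction (not the harness's exact template).
CHOICES = ["A", "B", "C", "D"]

def format_example(question, options, answer=None):
    """Render one MMLU item; include the answer only for few-shot exemplars."""
    lines = [question]
    lines += [f"{label}. {text}" for label, text in zip(CHOICES, options)]
    lines.append(f"Answer: {answer}" if answer else "Answer:")
    return "\n".join(lines)

def build_prompt(subject, shots, test_item):
    """shots: list of (question, options, answer); test_item: (question, options)."""
    header = (f"The following are multiple choice questions (with answers) "
              f"about {subject.replace('_', ' ')}.\n\n")
    body = "\n\n".join(format_example(q, o, a) for q, o, a in shots)
    return header + body + "\n\n" + format_example(*test_item)
```

The prompt ends with a bare `Answer:` so the model's next token can be scored against the four choice labels.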

### Hardware & Software

- **Model:** MangoLab/EXAONE-4.0-1.2B-GPTQ
- **Accelerator:** AMD Instinct MI325 OAM
- **Quantization:** GPTQ (W8A16 / fp8_w8a8)
- **Framework:** `lm-evaluation-harness`
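A run like this one can in principle be reproduced with the harness's CLI. The invocation below is a sketch: the exact `--model_args` (backend, dtype, ROCm specifics) used for this evaluation are not recorded here and would need adjusting.

```shell
# Illustrative lm-evaluation-harness invocation; adjust model_args to your setup.
lm_eval \
  --model hf \
  --model_args pretrained=MangoLab/EXAONE-4.0-1.2B-GPTQ \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto \
  --output_path raw_data/
```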

## Performance Summary

The table below summarizes the evaluation setup. Detailed per-subject accuracy is available in the **Dataset Viewer** above.

| Category | Details |
| :--- | :--- |
| **Benchmark** | MMLU (Massive Multitask Language Understanding) |
| **Prompting** | 5-shot |
| **Evaluation Date** | 2026-01-24 |
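A quick way to aggregate `mmlu_refined.csv` is the unweighted macro average over subjects. The column names below (`subject`, `accuracy`) are an assumption on our part; check the Dataset Viewer for the actual schema.

```python
import csv
import io

# Hypothetical schema: one row per MMLU subject, accuracy in percent.
SAMPLE = """subject,accuracy
abstract_algebra,31.0
anatomy,45.2
astronomy,50.0
"""

def macro_average_accuracy(csv_source):
    """Unweighted mean of per-subject accuracies from a CSV file object."""
    rows = list(csv.DictReader(csv_source))
    return sum(float(row["accuracy"]) for row in rows) / len(rows)

avg = macro_average_accuracy(io.StringIO(SAMPLE))
```

Note MMLU's headline number is usually this macro average, which weights every subject equally regardless of question count.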

## Directory Structure

- `mmlu_refined.csv`: The main result file containing subject-wise accuracy (%).
- `raw_data/`: Original JSON output files from the evaluation process, kept for reproducibility.
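To rebuild the per-subject table from the raw files, the harness's results JSON can be flattened back to percentages. The `"results"` / `"acc,none"` layout below matches recent lm-evaluation-harness output but should be verified against the files in `raw_data/`; the sample data is fabricated for illustration.

```python
import json

# Fabricated sample in the shape of a recent lm-evaluation-harness results file.
SAMPLE = json.dumps({
    "results": {
        "mmlu_abstract_algebra": {"acc,none": 0.31},
        "mmlu_anatomy": {"acc,none": 0.452},
    }
})

def subject_accuracies(raw_json):
    """Map each MMLU subject to its accuracy in percent, rounded to 2 decimals."""
    data = json.loads(raw_json)
    return {
        task.removeprefix("mmlu_"): round(metrics["acc,none"] * 100, 2)
        for task, metrics in data["results"].items()
        if task.startswith("mmlu_")
    }
```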

---