Upload README.md with huggingface_hub
README.md

---
tags:
- mmlu
- exaone
- amd-mi325
- gptq
pretty_name: EXAONE 4.0 1.2B Quantized MMLU Evaluation
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: mmlu_refined.csv
---

# EXAONE-4.0-1.2B-Quantized-MMLU Evaluation Results

This repository contains the refined evaluation results for the **EXAONE-4.0-1.2B-GPTQ-W8A16** model on the MMLU benchmark.
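
Because the `configs` block in the front matter maps the default config to `mmlu_refined.csv`, the results can be pulled in with the `datasets` library. This is a minimal sketch; the dataset id below is a placeholder for this repository's actual id.

```python
# Minimal sketch: load the refined MMLU results via the default config.
# DATASET_ID is a placeholder -- substitute this repository's actual id.
from datasets import load_dataset

DATASET_ID = "MangoLab/EXAONE-4.0-1.2B-Quantized-MMLU"  # placeholder
ds = load_dataset(DATASET_ID, split="train")
print(ds.column_names)
print(ds[0])  # first subject's row
```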

## Overview

The evaluation measures the model's multitask language understanding across the 57 MMLU subjects. To keep outputs stable and consistently formatted, we used a **5-shot** prompting approach.

### Hardware & Software
- **Model:** [MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16](https://huggingface.co/MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16) (see the loading sketch after this list)
- **Accelerator:** AMD Instinct MI325 OAM
- **Quantization:** GPTQ (W8A16 / fp8_w8a8)
- **Framework:** `lm-evaluation-harness`
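
For reference, the quantized checkpoint loads like any other `transformers` model. A minimal sketch, assuming a GPTQ-capable backend (e.g. `gptqmodel`) is installed; `trust_remote_code=True` mirrors the reproduction command below.

```python
# Minimal sketch: load the checkpoint under evaluation.
# Assumes a GPTQ-capable backend is installed alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)
```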

## Performance Summary

The table below summarizes the evaluation setup. Detailed per-subject accuracy can be viewed in the **Dataset Viewer** above.

| Category | Details |
| :--- | :--- |
| **Benchmark** | MMLU (Massive Multitask Language Understanding) |
| **Prompting** | 5-shot |
| **Dtype** | FP8 / W8A16 |
| **Evaluation Date** | 2026-01-24 |

## Directory Structure
- `mmlu_refined.csv`: The main result file containing subject-wise accuracy (%); a quick aggregation sketch follows this list.
- `raw_data/`: Original JSON output files from the evaluation, kept for reproducibility.
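
As a quick sanity check, the per-subject scores can be aggregated with pandas. A minimal sketch; the column names `subject` and `accuracy` are assumptions, so adjust them to the actual CSV header.

```python
# Minimal sketch: aggregate subject-wise accuracy from mmlu_refined.csv.
# NOTE: column names "subject" and "accuracy" are assumptions -- check
# the CSV header (or the Dataset Viewer) and adjust as needed.
import pandas as pd

df = pd.read_csv("mmlu_refined.csv")
print(df.head())                                   # peek at the schema
print(f"subjects: {len(df)}")                      # MMLU has 57 subjects
print(f"mean accuracy: {df['accuracy'].mean():.2f}%")
```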

---

## How to Reproduce

To run the same evaluation, use the following command with `lm-evaluation-harness`:

```bash
accelerate launch -m lm_eval --model hf \
    --model_args pretrained=MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16,trust_remote_code=True \
    --tasks mmlu \
    --num_fewshot 5 \
    --batch_size auto
```
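
The raw harness output can then be condensed into the subject-wise CSV along these lines. A minimal sketch; it assumes the results JSON layout recent `lm-evaluation-harness` versions emit (a top-level `results` mapping with an `acc,none` value per `mmlu_*` subtask), and the input path is an assumption.

```python
# Minimal sketch: condense a raw lm-eval results JSON into mmlu_refined.csv.
# Assumes the schema {"results": {"mmlu_<subject>": {"acc,none": ...}}}
# and that the harness output was saved under raw_data/ (path is an assumption).
import csv
import json

with open("raw_data/results.json") as f:
    results = json.load(f)["results"]

with open("mmlu_refined.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["subject", "accuracy"])
    for task, metrics in sorted(results.items()):
        # Per-subject subtasks only; group aggregates such as "mmlu_stem"
        # may also match and can be filtered out if present.
        if task.startswith("mmlu_"):
            writer.writerow(
                [task.removeprefix("mmlu_"), round(metrics["acc,none"] * 100, 2)]
            )
```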