Update README.md
# EXAONE-4.0-1.2B-Quantized-MMLU Evaluation Results

This repository contains the refined evaluation results for the **EXAONE-4.0-1.2B-GPTQ** model using the MMLU benchmark.

## Overview

The evaluation was conducted to measure the model's multitask language understanding capabilities across 57 different subjects. To ensure stability and formatting consistency, we used a **5-shot** prompting approach.
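
To illustrate what 5-shot prompting means here: five solved example questions are prepended before the question under evaluation, and the model is scored on the answer letter it continues with. The helper and sample items below are a hypothetical sketch, not the actual prompt template used by `lm-evaluation-harness`.

```python
# Minimal sketch of a k-shot MMLU-style prompt (illustrative only; the exact
# template used by lm-evaluation-harness differs in details).

def build_few_shot_prompt(examples, question, choices):
    """Prepend solved examples, then pose the target question with A-D options."""
    letters = "ABCD"

    def render(q, opts):
        # Question followed by lettered answer options, one per line.
        return "\n".join([q] + [f"{letters[i]}. {c}" for i, c in enumerate(opts)])

    blocks = [f"{render(ex['question'], ex['choices'])}\nAnswer: {ex['answer']}"
              for ex in examples]
    # The target question ends at "Answer:" so the model completes the letter.
    blocks.append(f"{render(question, choices)}\nAnswer:")
    return "\n\n".join(blocks)

# Hypothetical shots standing in for five real MMLU dev-set examples.
shots = [{"question": f"Example question {i}?",
          "choices": ["w", "x", "y", "z"], "answer": "A"} for i in range(5)]
prompt = build_few_shot_prompt(shots, "Target question?", ["p", "q", "r", "s"])
print(prompt)
```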

### Hardware & Software

- **Model:** `MangoLab/EXAONE-4.0-1.2B-GPTQ`
- **Accelerator:** AMD Instinct MI325 OAM
- **Quantization:** GPTQ (W8A16 / fp8_w8a8)
- **Framework:** `lm-evaluation-harness`

The table below shows a summary of the evaluation environment.

| Item | Value |
| :--- | :--- |
| **Benchmark** | MMLU (Massive Multitask Language Understanding) |
| **Prompting** | 5-shot |
| **Dtype** | FP8 / W8A16 |
| **Evaluation Date** | 2026-01-24 |
## Directory Structure
- `raw_data/`: Original JSON output files from the evaluation process for reproducibility.
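
As a sketch of consuming those files: recent `lm-evaluation-harness` versions emit a JSON with a top-level `results` dict keyed by task name, each entry holding metrics such as `acc,none`. The schema varies across harness versions, so treat the keys below as assumptions and adjust them to match the files actually in `raw_data/`.

```python
import json
import tempfile

def summarize_accuracies(path):
    """Return {task: accuracy} from a harness-style results JSON file."""
    with open(path) as f:
        data = json.load(f)
    # "acc,none" is the plain accuracy metric name in recent harness releases;
    # older versions may use a different key.
    return {task: metrics.get("acc,none")
            for task, metrics in data["results"].items()}

# Demo on a small fabricated file standing in for a real raw_data/ dump.
sample = {"results": {"mmlu_anatomy": {"acc,none": 0.45},
                      "mmlu_astronomy": {"acc,none": 0.52}}}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
print(summarize_accuracies(f.name))
# → {'mmlu_anatomy': 0.45, 'mmlu_astronomy': 0.52}
```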
---

## How to reproduce

To run the same evaluation, use the following command with `lm-evaluation-harness`:

```bash
accelerate launch -m lm_eval --model hf \
    --model_args pretrained=MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16,trust_remote_code=True \
    --tasks mmlu \
    --num_fewshot 5 \
    --batch_size auto
```