MangoLab committed on
Commit 5a282d0 · verified · 1 Parent(s): c2ee5cd

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +36 -16
README.md CHANGED
@@ -7,30 +7,50 @@ tags:
  - mmlu
  - exaone
  - amd-mi325
- pretty_name: EXAONE 1.2B Quantized MMLU Results
+ - gptq
+ pretty_name: EXAONE 4.0 1.2B Quantized MMLU Evaluation
  size_categories:
  - n<1K
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: mmlu_refined.csv
  ---

- # EXAONE-4.0-1.2B-GPTQ MMLU Evaluation
+ # 🚀 EXAONE-4.0-1.2B-Quantized-MMLU Evaluation Results

- This repository contains refined evaluation results for the **EXAONE-4.0-1.2B-GPTQ-W8A16** model.
+ This repository contains the refined evaluation results for the **EXAONE-4.0-1.2B-GPTQ-W8A16** model using the MMLU benchmark.

- ## 🚀 Evaluation Setup
- - **Benchmark:** MMLU (Massive Multitask Language Understanding)
- - **Prompting:** 5-shot
- - **Hardware:** AMD Instinct MI325 OAM
+ ## 📊 Overview
+ The evaluation was conducted to measure the model's multitask language understanding capabilities across 57 different subjects. To ensure stability and formatting consistency, we used a **5-shot** prompting approach.
+
+ ### 💻 Hardware & Software
+ - **Model:** [MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16](https://huggingface.co/MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16)
+ - **Accelerator:** AMD Instinct MI325 OAM
  - **Quantization:** GPTQ (W8A16 / fp8_w8a8)
+ - **Framework:** `lm-evaluation-harness`

- ## 📊 Quick Results
- The full breakdown of 57 subjects can be found in `mmlu_refined.csv`.
+ ## 📈 Performance Summary
+ The table below shows a summary of the evaluation environment. Detailed per-subject accuracy can be viewed in the **Dataset Viewer** above.

- | Metric | Value |
+ | Category | Details |
  | :--- | :--- |
- | **Tool** | lm-evaluation-harness |
- | **Accelerator** | AMD MI325 |
- | **Status** | Completed |
+ | **Benchmark** | MMLU (Massive Multitask Language Understanding) |
+ | **Prompting** | 5-shot |
+ | **Dtype** | FP8 / W8A16 |
+ | **Evaluation Date** | 2026-01-24 |
+
+ ## 📂 Directory Structure
+ - `mmlu_refined.csv`: The main result file containing subject-wise accuracy (%).
+ - `raw_data/`: Original JSON output files from the evaluation process for reproducibility.

- ## 📂 File Structure
- - `mmlu_refined.csv`: Key performance metrics per subject (Accuracy %).
- - `raw_data/`: Original JSON output from the evaluation harness.
+ ---
+ ## How to reproduce
+ To run the same evaluation, use the following command with `lm-evaluation-harness`:
+ ```bash
+ accelerate launch -m lm_eval --model hf \
+ --model_args pretrained=MangoLab/EXAONE-4.0-1.2B-GPTQ-W8A16,trust_remote_code=True \
+ --tasks mmlu \
+ --num_fewshot 5 \
+ --batch_size auto
+ ```
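For a quick look at the refined results the new README describes, a minimal pandas sketch is below. It is an illustration only: it assumes a local clone of this repository, and since the diff does not show the CSV's column names, nothing in it depends on a particular schema.

```python
# Hypothetical sketch: inspect the per-subject MMLU results locally.
# Assumes the repo is cloned and mmlu_refined.csv is in the working
# directory; exact column names depend on the CSV itself.
import pandas as pd

df = pd.read_csv("mmlu_refined.csv")
print(df.head())                    # one row per MMLU subject
print(df.mean(numeric_only=True))   # rough macro-average over numeric columns
```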
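Likewise, the files under `raw_data/` can be summarized in a few lines. The `"results"` / `"acc,none"` keys below follow recent `lm-evaluation-harness` JSON output, but the schema varies across harness versions, so treat them as assumptions to adapt to the actual dumps.

```python
# Hedged sketch: print aggregate accuracies from raw lm-evaluation-harness
# JSON dumps. Key names ("results", "acc,none") vary by harness version;
# adjust them to match the files actually in raw_data/.
import json
from pathlib import Path

for path in sorted(Path("raw_data").glob("*.json")):
    report = json.loads(path.read_text())
    for task, metrics in report.get("results", {}).items():
        acc = metrics.get("acc,none")
        if acc is not None:
            print(f"{path.name} | {task}: {acc:.4f}")
```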