comoZ committed · Commit 901828a · verified · 1 parent: f33e3be

Update README.md

Files changed (1): README.md (+1154 −171)
---
license: other
license_name: exaone
license_link: LICENSE
language:
- en
- ko
- es
tags:
- lg-ai
- exaone
- exaone-4.0
pipeline_tag: text-generation
library_name: transformers
---

<p align="center">
<img src="assets/EXAONE_Symbol+BI_3d.png" width="300" style="margin: 40px auto;">
<br>🎉 License updated! We are pleased to announce our more flexible licensing terms 🤗
<br>✈️ Try it on <a href="https://friendli.ai/suite/~/serverless-endpoints/LGAI-EXAONE/EXAONE-4.0-32B/overview">FriendliAI</a> (licensed for commercial use)
<br><br><i>📢 EXAONE 4.0 is officially supported by HuggingFace transformers! Please check out the guide <a href="#quickstart">below</a></i>
<br>
</p>

# EXAONE-4.0-32B
 

## Introduction

This repository provides a bitsandbytes (bnb) 4-bit quantized version of EXAONE 4.0.

Original model: https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-32B

We introduce **EXAONE 4.0**, which integrates a **Non-reasoning mode** and a **Reasoning mode** to achieve both the excellent usability of [EXAONE 3.5](https://github.com/LG-AI-EXAONE/EXAONE-3.5) and the advanced reasoning abilities of [EXAONE Deep](https://github.com/LG-AI-EXAONE/EXAONE-Deep). To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended to support Spanish in addition to English and Korean.

The EXAONE 4.0 model series consists of two sizes: a mid-size **32B** model optimized for high performance, and a small-size **1.2B** model designed for on-device applications.

Compared to previous EXAONE models, the EXAONE 4.0 architecture introduces the following changes:

1. **Hybrid Attention**: For the 32B model, we adopt a hybrid attention scheme that combines *local attention (sliding window attention)* with *global attention (full attention)* in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention, for better global-context understanding.
2. **QK-Reorder-Norm**: We reorder the LayerNorm position from the traditional Pre-LN scheme by applying LayerNorm directly to the attention and MLP outputs, and we add RMS normalization right after the Q and K projections. This yields better performance on downstream tasks despite consuming more computation.
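As a rough illustration of the 3:1 local-to-global interleaving, here is a small sketch that enumerates which layers would use which attention kind. The exact layer placement is an assumption for illustration; the authoritative layout is defined by the model's config, not by this helper.

```python
def attention_kinds(num_layers: int = 64, ratio: int = 3) -> list:
    """Sketch of a 3:1 local-to-global interleaving: every
    (ratio + 1)-th layer uses global (full) attention, and the
    rest use local sliding-window attention. The real assignment
    comes from the model config; this is only illustrative."""
    kinds = []
    for layer in range(num_layers):
        if (layer + 1) % (ratio + 1) == 0:
            kinds.append("global")   # full attention, no RoPE
        else:
            kinds.append("local")    # sliding-window attention
    return kinds

kinds = attention_kinds()
print(kinds[:8])  # first 8 layers of the repeating pattern
```

With 64 layers this pattern gives 48 local and 16 global layers, matching the 3:1 ratio described above.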
 
 
 
 
 

For more details, please refer to our [technical report](https://arxiv.org/abs/2507.11407), [HuggingFace paper](https://huggingface.co/papers/2507.11407), [blog](https://www.lgresearch.ai/blog/view?seq=576), and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0).

### Model Configuration

- Number of Parameters (without embeddings): 30.95B
- Number of Layers: 64
- Number of Attention Heads: GQA with 40 query heads and 8 KV heads
- Vocab Size: 102,400
- Context Length: 131,072 tokens
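To see what these figures imply for serving, here is a back-of-the-envelope KV-cache estimate per full-length sequence in bf16. The head dimension of 128 is an assumption (not stated above), and hybrid local attention reduces the real footprint for the sliding-window layers, so treat this as an upper bound.

```python
def kv_cache_bytes(layers=64, kv_heads=8, head_dim=128,
                   context=131072, bytes_per_val=2):
    """Rough full-attention KV-cache size for one sequence:
    2 (K and V) * layers * kv_heads * head_dim * context * dtype bytes.
    head_dim=128 is an assumed value; local-attention layers
    in the hybrid scheme need far less than this in practice."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_val

gib = kv_cache_bytes() / 2**30
print(f"~{gib:.0f} GiB per 131k-token sequence (upper bound)")
```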

## Quickstart

You need the `transformers` library, version `4.54.0` or later.
 

### Non-reasoning mode

For general use, you can run the EXAONE 4.0 models with the following example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LGAI-EXAONE/EXAONE-4.0-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# choose your prompt
prompt = "Explain how wonderful you are"
prompt = "Explica lo increíble que eres"
prompt = "너가 얼마나 대단한지 설명해 봐"

messages = [
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```

### Reasoning mode

The EXAONE 4.0 models have reasoning capabilities for handling complex problems. You can activate reasoning mode with the tokenizer's `enable_thinking=True` argument, which opens a reasoning block that starts with a `<think>` tag without closing it.

```python
messages = [
    {"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
print(tokenizer.decode(output[0]))
```

> [!IMPORTANT]
> Generation in reasoning mode is sensitive to sampling parameters, so please refer to the [Usage Guideline](#usage-guideline) for better output quality.
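When reasoning mode is on, the decoded text contains the reasoning block before the final answer. A minimal sketch for separating the two, assuming the completion closes its block with a `</think>` tag as described above:

```python
def split_reasoning(text: str):
    """Split a decoded completion into (reasoning, answer).
    Assumes the model closed its reasoning block with </think>;
    if no block is present, reasoning is an empty string."""
    open_tag, close_tag = "<think>", "</think>"
    if close_tag not in text:
        return "", text.strip()
    reasoning, answer = text.split(close_tag, 1)
    reasoning = reasoning.split(open_tag, 1)[-1]
    return reasoning.strip(), answer.strip()

demo = "<think>3.9 = 3.90 > 3.12</think>3.9 is bigger."
print(split_reasoning(demo))
```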
 

### Agentic tool use

The EXAONE 4.0 models can be used as agents with their tool-calling capabilities. You can provide tool schemas to the model for effective tool calling.

```python
import random

def roll_dice(max_num: int):
    return random.randint(1, max_num)

tools = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll a dice with the number 1 to N. User can select the number N.",
            "parameters": {
                "type": "object",
                "required": ["max_num"],
                "properties": {
                    "max_num": {
                        "type": "int",
                        "description": "Max number of the dice"
                    }
                }
            }
        }
    }
]

messages = [
    {"role": "user", "content": "Roll D6 dice twice!"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    tools=tools,
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(tokenizer.decode(output[0]))
```
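After the model emits a tool call, the caller is expected to execute it and append the result as a `tool`-role message before generating again. A minimal dispatch sketch; the JSON payload shape here is an assumption for illustration only, since the exact tool-call format is defined by the chat template:

```python
import json
import random

def roll_dice(max_num: int) -> int:
    return random.randint(1, max_num)

# registry of callable tools, keyed by the name in the schema
TOOLS = {"roll_dice": roll_dice}

def dispatch(tool_call_json: str) -> dict:
    """Execute one parsed tool call (assumed shape:
    {"name": ..., "arguments": {...}}) and wrap the result as a
    tool-role message to append to `messages` before re-generating."""
    call = json.loads(tool_call_json)
    result = TOOLS[call["name"]](**call["arguments"])
    return {"role": "tool", "name": call["name"], "content": str(result)}

msg = dispatch('{"name": "roll_dice", "arguments": {"max_num": 6}}')
print(msg["role"], msg["content"])
```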


## Deployment

### TensorRT-LLM

TensorRT-LLM officially supports EXAONE 4.0 models in its latest commits. Until a release includes this support, you need to clone the TensorRT-LLM repository and build it from source.

```bash
git clone https://github.com/NVIDIA/TensorRT-LLM.git
```

After cloning the repository, build the source for installation. Please refer to [the official documentation](https://nvidia.github.io/TensorRT-LLM/installation/build-from-source-linux.html) for a guide to building the TensorRT-LLM environment.

You can run the TensorRT-LLM server with the following steps:

1. Write an extra configuration YAML file:
    ```yaml
    # extra_llm_api_config.yaml
    kv_cache_config:
      enable_block_reuse: false
    ```

2. Run the server with the configuration:
    ```bash
    trtllm-serve serve LGAI-EXAONE/EXAONE-4.0-32B --backend pytorch --extra_llm_api_options extra_llm_api_config.yaml
    ```

For more details, please refer to [the EXAONE documentation](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/exaone) in TensorRT-LLM.

### vLLM

vLLM officially supports EXAONE 4.0 models as of version `0.10.0`. You can run the vLLM server with the following command:

```bash
vllm serve LGAI-EXAONE/EXAONE-4.0-32B --enable-auto-tool-choice --tool-call-parser hermes --reasoning-parser deepseek_r1
```

For more details, please refer to [the vLLM documentation](https://docs.vllm.ai/en/stable/).

> [!NOTE]
> Other inference engines, including `sglang`, do not yet officially support EXAONE 4.0. We will update this section as soon as those libraries add support.

## Performance

The following tables show the evaluation results of each model in reasoning and non-reasoning mode. The evaluation details can be found in the [technical report](https://arxiv.org/abs/2507.11407).

- ✅ denotes that the model has hybrid reasoning capability, evaluated by selecting reasoning / non-reasoning mode depending on the task.
- To assess Korean **practical** and **professional** knowledge, we adopt both the [KMMLU-Redux](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Redux) and [KMMLU-Pro](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Pro) benchmarks. Both datasets are publicly released!

### 32B Reasoning Mode

| | EXAONE 4.0 32B | Phi 4 reasoning-plus | Magistral Small-2506 | Qwen 3 32B | Qwen 3 235B | DeepSeek R1-0528 |
|---|---|---|---|---|---|---|
| Model Size | 32.0B | 14.7B | 23.6B | 32.8B | 235B | 671B |
| Hybrid Reasoning | ✅ | | | ✅ | ✅ | |
| *World Knowledge* | | | | | | |
| MMLU-Redux | 92.3 | 90.8 | 86.8 | 90.9 | 92.7 | 93.4 |
| MMLU-Pro | 81.8 | 76.0 | 73.4 | 80.0 | 83.0 | 85.0 |
| GPQA-Diamond | 75.4 | 68.9 | 68.2 | 68.4 | 71.1 | 81.0 |
| *Math/Coding* | | | | | | |
| AIME 2025 | 85.3 | 78.0 | 62.8 | 72.9 | 81.5 | 87.5 |
| HMMT Feb 2025 | 72.9 | 53.6 | 43.5 | 50.4 | 62.5 | 79.4 |
| LiveCodeBench v5 | 72.6 | 51.7 | 55.8 | 65.7 | 70.7 | 75.2 |
| LiveCodeBench v6 | 66.7 | 47.1 | 47.4 | 60.1 | 58.9 | 70.3 |
| *Instruction Following* | | | | | | |
| IFEval | 83.7 | 84.9 | 37.9 | 85.0 | 83.4 | 80.8 |
| Multi-IF (EN) | 73.5 | 56.1 | 27.4 | 73.4 | 73.4 | 72.0 |
| *Agentic Tool Use* | | | | | | |
| BFCL-v3 | 63.9 | N/A | 40.4 | 70.3 | 70.8 | 64.7 |
| Tau-Bench (Airline) | 51.5 | N/A | 38.5 | 34.5 | 37.5 | 53.5 |
| Tau-Bench (Retail) | 62.8 | N/A | 10.2 | 55.2 | 58.3 | 63.9 |
| *Multilinguality* | | | | | | |
| KMMLU-Pro | 67.7 | 55.8 | 51.5 | 61.4 | 68.1 | 71.7 |
| KMMLU-Redux | 72.7 | 62.7 | 54.6 | 67.5 | 74.5 | 77.0 |
| KSM | 87.6 | 79.8 | 71.9 | 82.8 | 86.2 | 86.7 |
| MMMLU (ES) | 85.6 | 84.3 | 68.9 | 82.8 | 86.7 | 88.2 |
| MATH500 (ES) | 95.8 | 94.2 | 83.5 | 94.3 | 95.1 | 96.0 |

### 32B Non-Reasoning Mode

| | EXAONE 4.0 32B | Phi 4 | Mistral-Small-2506 | Gemma3 27B | Qwen3 32B | Qwen3 235B | Llama-4-Maverick | DeepSeek V3-0324 |
|---|---|---|---|---|---|---|---|---|
| Model Size | 32.0B | 14.7B | 24.0B | 27.4B | 32.8B | 235B | 402B | 671B |
| Hybrid Reasoning | ✅ | | | | ✅ | ✅ | | |
| *World Knowledge* | | | | | | | | |
| MMLU-Redux | 89.8 | 88.3 | 85.9 | 85.0 | 85.7 | 89.2 | 92.3 | 92.3 |
| MMLU-Pro | 77.6 | 70.4 | 69.1 | 67.5 | 74.4 | 77.4 | 80.5 | 81.2 |
| GPQA-Diamond | 63.7 | 56.1 | 46.1 | 42.4 | 54.6 | 62.9 | 69.8 | 68.4 |
| *Math/Coding* | | | | | | | | |
| AIME 2025 | 35.9 | 17.8 | 30.2 | 23.8 | 20.2 | 24.7 | 18.0 | 50.0 |
| HMMT Feb 2025 | 21.8 | 4.0 | 16.9 | 10.3 | 9.8 | 11.9 | 7.3 | 29.2 |
| LiveCodeBench v5 | 43.3 | 24.6 | 25.8 | 27.5 | 31.3 | 35.3 | 43.4 | 46.7 |
| LiveCodeBench v6 | 43.1 | 27.4 | 26.9 | 29.7 | 28.0 | 31.4 | 32.7 | 44.0 |
| *Instruction Following* | | | | | | | | |
| IFEval | 84.8 | 63.0 | 77.8 | 82.6 | 83.2 | 83.2 | 85.4 | 81.2 |
| Multi-IF (EN) | 71.6 | 47.7 | 63.2 | 72.1 | 71.9 | 72.5 | 77.9 | 68.3 |
| *Long Context* | | | | | | | | |
| HELMET | 58.3 | N/A | 61.9 | 58.3 | 54.5 | 63.3 | 13.7 | N/A |
| RULER | 88.2 | N/A | 71.8 | 66.0 | 85.6 | 90.6 | 2.9 | N/A |
| LongBench v1 | 48.1 | N/A | 51.5 | 51.5 | 44.2 | 45.3 | 34.7 | N/A |
| *Agentic Tool Use* | | | | | | | | |
| BFCL-v3 | 65.2 | N/A | 57.7 | N/A | 63.0 | 68.0 | 52.9 | 63.8 |
| Tau-Bench (Airline) | 25.5 | N/A | 36.1 | N/A | 16.0 | 27.0 | 38.0 | 40.5 |
| Tau-Bench (Retail) | 55.9 | N/A | 35.5 | N/A | 47.6 | 56.5 | 6.5 | 68.5 |
| *Multilinguality* | | | | | | | | |
| KMMLU-Pro | 60.0 | 44.8 | 51.0 | 50.7 | 58.3 | 64.4 | 68.8 | 67.3 |
| KMMLU-Redux | 64.8 | 50.1 | 53.6 | 53.3 | 64.4 | 71.7 | 76.9 | 72.2 |
| KSM | 59.8 | 29.1 | 35.5 | 36.1 | 41.3 | 46.6 | 40.6 | 63.5 |
| Ko-LongBench | 76.9 | N/A | 55.4 | 72.0 | 73.9 | 74.6 | 65.6 | N/A |
| MMMLU (ES) | 80.6 | 81.2 | 78.4 | 78.7 | 82.1 | 83.7 | 86.9 | 86.7 |
| MATH500 (ES) | 87.3 | 78.2 | 83.4 | 86.8 | 84.7 | 87.2 | 78.7 | 89.2 |
| WMT24++ (ES) | 90.7 | 89.3 | 92.2 | 93.1 | 91.4 | 92.9 | 92.7 | 94.3 |

### 1.2B Reasoning Mode

| | EXAONE 4.0 1.2B | EXAONE Deep 2.4B | Qwen 3 0.6B | Qwen 3 1.7B | SmolLM 3 3B |
|---|---|---|---|---|---|
| Model Size | 1.28B | 2.41B | 596M | 1.72B | 3.08B |
| Hybrid Reasoning | ✅ | | ✅ | ✅ | ✅ |
| *World Knowledge* | | | | | |
| MMLU-Redux | 71.5 | 68.9 | 55.6 | 73.9 | 74.8 |
| MMLU-Pro | 59.3 | 56.4 | 38.3 | 57.7 | 57.8 |
| GPQA-Diamond | 52.0 | 54.3 | 27.9 | 40.1 | 41.7 |
| *Math/Coding* | | | | | |
| AIME 2025 | 45.2 | 47.9 | 15.1 | 36.8 | 36.7 |
| HMMT Feb 2025 | 34.0 | 27.3 | 7.0 | 21.8 | 26.0 |
| LiveCodeBench v5 | 44.6 | 47.2 | 12.3 | 33.2 | 27.6 |
| LiveCodeBench v6 | 45.3 | 43.1 | 16.4 | 29.9 | |
813
+ <td align="center">29.1</td>
814
+ </tr>
815
+ <tr>
816
+ <td align="center" colspan='6'><i>Instruction Following</i></td>
817
+ </tr>
818
+ <tr>
819
+ <td >IFEval</td>
820
+ <td align="center">67.8</td>
821
+ <td align="center">71.0</td>
822
+ <td align="center">59.2</td>
823
+ <td align="center">72.5</td>
824
+ <td align="center">71.2</td>
825
+ </tr>
826
+ <tr>
827
+ <td >Multi-IF (EN)</td>
828
+ <td align="center">53.9</td>
829
+ <td align="center">54.5</td>
830
+ <td align="center">37.5</td>
831
+ <td align="center">53.5</td>
832
+ <td align="center">47.5</td>
833
+ </tr>
834
+ <tr>
835
+ <td align="center" colspan='6'><i>Agentic Tool Use</i></td>
836
+ </tr>
837
+ <tr>
838
+ <td >BFCL-v3</td>
839
+ <td align="center">52.9</td>
840
+ <td align="center">N/A</td>
841
+ <td align="center">46.4</td>
842
+ <td align="center">56.6</td>
843
+ <td align="center">37.1</td>
844
+ </tr>
845
+ <tr>
846
+ <td >Tau-Bench (Airline)</td>
847
+ <td align="center">20.5</td>
848
+ <td align="center">N/A</td>
849
+ <td align="center">22.0</td>
850
+ <td align="center">31.0</td>
851
+ <td align="center">37.0</td>
852
+ </tr>
853
+ <tr>
854
+ <td >Tau-Bench (Retail)</td>
855
+ <td align="center">28.1</td>
856
+ <td align="center">N/A</td>
857
+ <td align="center">3.3</td>
858
+ <td align="center">6.5</td>
859
+ <td align="center">5.4</td>
860
+ </tr>
861
+ <tr>
862
+ <td align="center" colspan='6'><i>Multilinguality</i></td>
863
+ </tr>
864
+ <tr>
865
+ <td >KMMLU-Pro</td>
866
+ <td align="center">42.7</td>
867
+ <td align="center">24.6</td>
868
+ <td align="center">21.6</td>
869
+ <td align="center">38.3</td>
870
+ <td align="center">30.5</td>
871
+ </tr>
872
+ <tr>
873
+ <td >KMMLU-Redux</td>
874
+ <td align="center">46.9</td>
875
+ <td align="center">25.0</td>
876
+ <td align="center">24.5</td>
877
+ <td align="center">38.0</td>
878
+ <td align="center">33.7</td>
879
+ </tr>
880
+ <tr>
881
+ <td >KSM</td>
882
+ <td align="center">60.6</td>
883
+ <td align="center">60.9</td>
884
+ <td align="center">22.8</td>
885
+ <td align="center">52.9</td>
886
+ <td align="center">49.7</td>
887
+ </tr>
888
+ <tr>
889
+ <td >MMMLU (ES)</td>
890
+ <td align="center">62.4</td>
891
+ <td align="center">51.4</td>
892
+ <td align="center">48.8</td>
893
+ <td align="center">64.5</td>
894
+ <td align="center">64.7</td>
895
+ </tr>
896
+ <tr>
897
+ <td >MATH500 (ES)</td>
898
+ <td align="center">88.8</td>
899
+ <td align="center">84.5</td>
900
+ <td align="center">70.6</td>
901
+ <td align="center">87.9</td>
902
+ <td align="center">87.5 </td>
903
+ </tr>
904
+ </table>
905

### 1.2B Non-Reasoning Mode

<table>
<tr>
<th> </th>
<th>EXAONE 4.0 1.2B</th>
<th>Qwen 3 0.6B</th>
<th>Gemma 3 1B</th>
<th>Qwen 3 1.7B</th>
<th>SmolLM 3 3B</th>
</tr>
<tr>
<td align="center">Model Size</td>
<td align="center">1.28B</td>
<td align="center">596M</td>
<td align="center">1.00B</td>
<td align="center">1.72B</td>
<td align="center">3.08B</td>
</tr>
<tr>
<td align="center">Hybrid Reasoning</td>
<td align="center">✅</td>
<td align="center">✅</td>
<td align="center"> </td>
<td align="center">✅</td>
<td align="center">✅</td>
</tr>
<tr>
<td align="center" colspan='6'><i>World Knowledge</i></td>
</tr>
<tr>
<td>MMLU-Redux</td>
<td align="center">66.9</td>
<td align="center">44.6</td>
<td align="center">40.9</td>
<td align="center">63.4</td>
<td align="center">65.0</td>
</tr>
<tr>
<td>MMLU-Pro</td>
<td align="center">52.0</td>
<td align="center">26.6</td>
<td align="center">14.7</td>
<td align="center">43.7</td>
<td align="center">43.6</td>
</tr>
<tr>
<td>GPQA-Diamond</td>
<td align="center">40.1</td>
<td align="center">22.9</td>
<td align="center">19.2</td>
<td align="center">28.6</td>
<td align="center">35.7</td>
</tr>
<tr>
<td align="center" colspan='6'><i>Math/Coding</i></td>
</tr>
<tr>
<td>AIME 2025</td>
<td align="center">23.5</td>
<td align="center">2.6</td>
<td align="center">2.1</td>
<td align="center">9.8</td>
<td align="center">9.3</td>
</tr>
<tr>
<td>HMMT Feb 2025</td>
<td align="center">13.0</td>
<td align="center">1.0</td>
<td align="center">1.5</td>
<td align="center">5.1</td>
<td align="center">4.7</td>
</tr>
<tr>
<td>LiveCodeBench v5</td>
<td align="center">26.4</td>
<td align="center">3.6</td>
<td align="center">1.8</td>
<td align="center">11.6</td>
<td align="center">11.4</td>
</tr>
<tr>
<td>LiveCodeBench v6</td>
<td align="center">30.1</td>
<td align="center">6.9</td>
<td align="center">2.3</td>
<td align="center">16.6</td>
<td align="center">20.6</td>
</tr>
<tr>
<td align="center" colspan='6'><i>Instruction Following</i></td>
</tr>
<tr>
<td>IFEval</td>
<td align="center">74.7</td>
<td align="center">54.5</td>
<td align="center">80.2</td>
<td align="center">68.2</td>
<td align="center">76.7</td>
</tr>
<tr>
<td>Multi-IF (EN)</td>
<td align="center">62.1</td>
<td align="center">37.5</td>
<td align="center">32.5</td>
<td align="center">51.0</td>
<td align="center">51.9</td>
</tr>
<tr>
<td align="center" colspan='6'><i>Long Context</i></td>
</tr>
<tr>
<td>HELMET</td>
<td align="center">41.2</td>
<td align="center">21.1</td>
<td align="center">N/A</td>
<td align="center">33.8</td>
<td align="center">38.6</td>
</tr>
<tr>
<td>RULER</td>
<td align="center">77.4</td>
<td align="center">55.1</td>
<td align="center">N/A</td>
<td align="center">65.9</td>
<td align="center">66.3</td>
</tr>
<tr>
<td>LongBench v1</td>
<td align="center">36.9</td>
<td align="center">32.4</td>
<td align="center">N/A</td>
<td align="center">41.9</td>
<td align="center">39.9</td>
</tr>
<tr>
<td align="center" colspan='6'><i>Agentic Tool Use</i></td>
</tr>
<tr>
<td>BFCL-v3</td>
<td align="center">55.7</td>
<td align="center">44.1</td>
<td align="center">N/A</td>
<td align="center">52.2</td>
<td align="center">47.3</td>
</tr>
<tr>
<td>Tau-Bench (Airline)</td>
<td align="center">10.0</td>
<td align="center">31.5</td>
<td align="center">N/A</td>
<td align="center">13.5</td>
<td align="center">38.0</td>
</tr>
<tr>
<td>Tau-Bench (Retail)</td>
<td align="center">21.7</td>
<td align="center">5.7</td>
<td align="center">N/A</td>
<td align="center">4.6</td>
<td align="center">6.7</td>
</tr>
<tr>
<td align="center" colspan='6'><i>Multilinguality</i></td>
</tr>
<tr>
<td>KMMLU-Pro</td>
<td align="center">37.5</td>
<td align="center">24.6</td>
<td align="center">9.7</td>
<td align="center">29.5</td>
<td align="center">27.6</td>
</tr>
<tr>
<td>KMMLU-Redux</td>
<td align="center">40.4</td>
<td align="center">22.8</td>
<td align="center">19.4</td>
<td align="center">29.8</td>
<td align="center">26.4</td>
</tr>
<tr>
<td>KSM</td>
<td align="center">26.3</td>
<td align="center">0.1</td>
<td align="center">22.8</td>
<td align="center">16.3</td>
<td align="center">16.1</td>
</tr>
<tr>
<td>Ko-LongBench</td>
<td align="center">69.8</td>
<td align="center">16.4</td>
<td align="center">N/A</td>
<td align="center">57.1</td>
<td align="center">15.7</td>
</tr>
<tr>
<td>MMMLU (ES)</td>
<td align="center">54.6</td>
<td align="center">39.5</td>
<td align="center">35.9</td>
<td align="center">54.3</td>
<td align="center">55.1</td>
</tr>
<tr>
<td>MATH500 (ES)</td>
<td align="center">71.2</td>
<td align="center">38.5</td>
<td align="center">41.2</td>
<td align="center">66.0</td>
<td align="center">62.4</td>
</tr>
<tr>
<td>WMT24++ (ES)</td>
<td align="center">65.9</td>
<td align="center">58.2</td>
<td align="center">76.9</td>
<td align="center">76.7</td>
<td align="center">84.0</td>
</tr>
</table>

## Usage Guideline

> [!IMPORTANT]
> To achieve the expected performance, we recommend the following configurations:
>
> - For non-reasoning mode, use a lower temperature, such as `temperature<0.6`, for better performance.
> - For reasoning mode (using the `<think>` block), use `temperature=0.6` and `top_p=0.95`.
> - If you encounter output degeneration (e.g., repetitive text), use `presence_penalty=1.5`.
> - For general Korean conversation with the 1.2B model, we suggest using `temperature=0.1` to avoid code switching.
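
The recommendations above can be sketched as a small helper that builds sampling parameters per mode. This is illustrative only: the function name `sampling_params` is not part of any official EXAONE or serving API, and the `presence_penalty` key follows OpenAI-compatible/vLLM-style servers.

```python
def sampling_params(reasoning: bool, korean_chat_1_2b: bool = False,
                    degeneration_observed: bool = False) -> dict:
    """Build sampling parameters following the guideline above."""
    if reasoning:
        # Reasoning mode (<think> block): temperature=0.6, top_p=0.95
        params = {"temperature": 0.6, "top_p": 0.95}
    elif korean_chat_1_2b:
        # General Korean conversation with the 1.2B model
        params = {"temperature": 0.1}
    else:
        # Non-reasoning mode: keep temperature below 0.6
        params = {"temperature": 0.5}
    if degeneration_observed:
        # Counter repetitive output (key name follows vLLM/OpenAI-style APIs)
        params["presence_penalty"] = 1.5
    return params
```

With vLLM, for example, such a dict could be passed as `SamplingParams(**sampling_params(reasoning=True))`.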

## Limitation

The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The model generates responses based on the output probabilities of tokens, which are determined during training. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research.

- Inappropriate answers may be generated, containing personal, harmful, or other unsuitable information.
- Biased responses may be generated, associated with age, gender, race, and so on.
- Generated responses rely heavily on statistics from the training data, which can result in semantically or syntactically incorrect sentences.
- Since the model does not reflect the latest information, responses may be false or contradictory.

LG AI Research strives to reduce potential risks that may arise from EXAONE language models. Users may not engage in any malicious activities (e.g., keying in illegal information) that induce the creation of inappropriate outputs violating LG AI's ethical principles when using EXAONE language models.

## License

The model is licensed under the [EXAONE AI Model License Agreement 1.2 - NC](./LICENSE).

> [!NOTE]
> The main differences from the previous version are as follows:
> - We removed **the claim of model output ownership** from the license.
> - We restrict model use **against the development of models that compete with EXAONE**.
> - We allow the model to be used for **educational purposes**, not just research.

## Citation

```bibtex
@article{exaone-4.0,
  title={EXAONE 4.0: Unified Large Language Models Integrating Non-reasoning and Reasoning Modes},
  author={{LG AI Research}},
  journal={arXiv preprint arXiv:2507.11407},
  year={2025}
}
```

## Contact

LG AI Research Technical Support: contact_us@lgresearch.ai