---
license: mit
language:
- en
- ko
tags:
- KT
- K-intelligence
- Mi:dm
inference: true
pipeline_tag: text-generation
library_name: transformers
base_model:
- K-intelligence/Midm-2.0-Base-Instruct
---

<p align="center">
  <br>
  <span style="font-size: 60px; font-weight: bold;">Mi:dm 2.0 Base</span>
  <br>
</p>

<p align="center">
  🤗 <a href="https://huggingface.co/collections/K-intelligence/mi-dm-20-6866406c301e5f45a6926af8">Mi:dm 2.0 Models</a> |
  📜 <a href="https://github.com/K-intelligence-Midm/Midm-2.0/blob/main/Mi_dm2_0_technical_report.pdf">Mi:dm 2.0 Technical Report</a> |
  📕 Mi:dm 2.0 Technical Blog*
</p>

<p align="center"><sub>*To be released soon</sub></p>

<br>

# News 📢

- 🔜 _(Coming soon!) GGUF-format model files will be released for easier local deployment._
- ⚡️`2025/07/04`: Released the Mi:dm 2.0 model collection on Hugging Face 🤗.

<br>
<br>

# Table of Contents

- ___Overview___
  - [Mi:dm 2.0](#midm-20)
  - [Quickstart](#quickstart)
  - [Evaluation](#evaluation)
- ___Usage___
  - [Run on Friendli.AI](#run-on-friendliai)
  - [Run on Your Local Machine](#run-on-your-local-machine)
  - [Deployment](#deployment)
  - [Tutorials](#tutorials)
- ___More Information___
  - [Limitation](#limitation)
  - [License](#license)
  - [Contact](#contact)

<br>
<br>

# Overview

### Mi:dm 2.0

**Mi:dm 2.0** is a __"Korea-centric AI"__ model developed with KT's proprietary technology. The term __"Korea-centric AI"__ refers to a model that deeply internalizes the unique values, cognitive frameworks, and commonsense reasoning inherent to Korean society. It goes beyond simply processing or generating Korean text; it reflects a deeper understanding of the socio-cultural norms and values that define Korean society.

Mi:dm 2.0 is released in two versions:

- **Mi:dm 2.0 Base**
  An 11.5B-parameter dense model designed to balance model size and performance.
  It extends an 8B-scale model with the Depth Up-Scaling (DuS) method, making it suitable for real-world applications that require both performance and versatility.

- **Mi:dm 2.0 Mini**
  A lightweight 2.3B-parameter dense model optimized for on-device environments and systems with limited GPU resources.
  It was derived from the Base model through pruning and distillation to enable compact deployment.

> [!Note]
> Neither the pre-training nor the post-training data includes KT users' data.

<br>

### Quickstart

Here is a code snippet that runs conversational inference with the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "K-intelligence/Midm-2.0-Base-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
generation_config = GenerationConfig.from_pretrained(model_name)

prompt = "KT에 대해 소개해줘"

# build the chat messages for inference
messages = [
    {"role": "system",
     "content": "Mi:dm(믿:음)은 KT에서 개발한 AI 기반 어시스턴트이다."},
    {"role": "user", "content": prompt}
]

input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),  # follow the model's device placement instead of hard-coding "cuda"
    generation_config=generation_config,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```
124
+
125
+ > [!NOTE]
126
+ > The `transformers` library should be version `4.45.0` or higher.
127
+
128
+ <br>
129
+
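For interactive use, the same call can stream tokens to stdout as they are generated, using the `TextStreamer` utility from `transformers`. This is a minimal sketch that reuses the `model`, `tokenizer`, `generation_config`, and `input_ids` defined in the snippet above:

```python
from transformers import TextStreamer

# Prints decoded tokens as they are generated;
# skip_prompt avoids re-printing the input conversation.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids.to(model.device),
    generation_config=generation_config,
    max_new_tokens=128,
    do_sample=False,
    streamer=streamer,
)
```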

# Evaluation

#### Korean

<!-- first half table -->
<table>
<tr>
<th rowspan="2">Model</th>
<th colspan="5" align="center">Society & Culture</th>
<th colspan="3" align="center">General Knowledge</th>
<th colspan="3" align="center">Instruction Following</th>
</tr>
<tr>
<th align="center">K-Refer<sup>*</sup></th>
<th align="center">K-Refer-Hard<sup>*</sup></th>
<th align="center">Ko-Sovereign<sup>*</sup></th>
<th align="center">HAERAE</th>
<th align="center">Avg.</th>
<th align="center">KMMLU</th>
<th align="center">Ko-Sovereign<sup>*</sup></th>
<th align="center">Avg.</th>
<th align="center">Ko-IFEval</th>
<th align="center">Ko-MTBench</th>
<th align="center">Avg.</th>
</tr>

<!-- Small Models -->
<tr>
<td><strong>Qwen3-4B</strong></td>
<td align="center">53.6</td>
<td align="center">42.9</td>
<td align="center">35.8</td>
<td align="center">50.6</td>
<td align="center">45.7</td>
<td align="center"><strong>50.6</strong></td>
<td align="center"><strong>42.5</strong></td>
<td align="center"><strong>46.5</strong></td>
<td align="center"><strong>75.9</strong></td>
<td align="center">63.0</td>
<td align="center">69.4</td>
</tr>
<tr>
<td><strong>Exaone-3.5-2.4B-inst</strong></td>
<td align="center">64.0</td>
<td align="center"><strong>67.1</strong></td>
<td align="center"><strong>44.4</strong></td>
<td align="center">61.3</td>
<td align="center"><strong>59.2</strong></td>
<td align="center">43.5</td>
<td align="center">42.4</td>
<td align="center">43.0</td>
<td align="center">65.4</td>
<td align="center"><strong>74.0</strong></td>
<td align="center">68.9</td>
</tr>
<tr>
<td><strong>Mi:dm 2.0-Mini-inst</strong></td>
<td align="center"><strong>66.4</strong></td>
<td align="center">61.4</td>
<td align="center">36.7</td>
<td align="center"><strong>70.8</strong></td>
<td align="center">58.8</td>
<td align="center">45.1</td>
<td align="center">42.4</td>
<td align="center">43.8</td>
<td align="center">73.3</td>
<td align="center"><strong>74.0</strong></td>
<td align="center"><strong>73.6</strong></td>
</tr>

<!-- Spacer row -->
<tr><td colspan="12">&nbsp;</td></tr>

<!-- Large Models -->
<tr>
<td><strong>Qwen3-14B</strong></td>
<td align="center">72.4</td>
<td align="center">65.7</td>
<td align="center">49.8</td>
<td align="center">68.4</td>
<td align="center">64.1</td>
<td align="center">55.4</td>
<td align="center">54.7</td>
<td align="center">55.1</td>
<td align="center"><strong>83.6</strong></td>
<td align="center">71.0</td>
<td align="center">77.3</td>
</tr>
<tr>
<td><strong>Llama-3.1-8B-inst</strong></td>
<td align="center">43.2</td>
<td align="center">36.4</td>
<td align="center">33.8</td>
<td align="center">49.5</td>
<td align="center">40.7</td>
<td align="center">33.0</td>
<td align="center">36.7</td>
<td align="center">34.8</td>
<td align="center">60.1</td>
<td align="center">57.0</td>
<td align="center">58.5</td>
</tr>
<tr>
<td><strong>Exaone-3.5-7.8B-inst</strong></td>
<td align="center">71.6</td>
<td align="center">69.3</td>
<td align="center">46.9</td>
<td align="center">72.9</td>
<td align="center">65.2</td>
<td align="center">52.6</td>
<td align="center">45.6</td>
<td align="center">49.1</td>
<td align="center">69.1</td>
<td align="center">79.6</td>
<td align="center">74.4</td>
</tr>
<tr>
<td><strong>Mi:dm 2.0-Base-inst</strong></td>
<td align="center"><strong>89.6</strong></td>
<td align="center"><strong>86.4</strong></td>
<td align="center"><strong>56.3</strong></td>
<td align="center"><strong>81.5</strong></td>
<td align="center"><strong>78.4</strong></td>
<td align="center"><strong>57.3</strong></td>
<td align="center"><strong>58.0</strong></td>
<td align="center"><strong>57.7</strong></td>
<td align="center">82.0</td>
<td align="center"><strong>89.7</strong></td>
<td align="center"><strong>85.9</strong></td>
</tr>
</table>

<!-- second half table -->
<table>
<tr>
<th rowspan="2" align="center">Model</th>
<th colspan="5" align="center">Comprehension</th>
<th colspan="5" align="center">Reasoning</th>
</tr>
<tr>
<th align="center">K-Prag<sup>*</sup></th>
<th align="center">K-Refer-Hard<sup>*</sup></th>
<th align="center">Ko-Best</th>
<th align="center">Ko-Sovereign<sup>*</sup></th>
<th align="center">Avg.</th>
<th align="center">Ko-Winogrande</th>
<th align="center">Ko-Best</th>
<th align="center">LogicKor</th>
<th align="center">HRM8K</th>
<th align="center">Avg.</th>
</tr>

<!-- Small Models -->
<tr>
<td><strong>Qwen3-4B</strong></td>
<td align="center"><strong>73.9</strong></td>
<td align="center">56.7</td>
<td align="center"><strong>91.5</strong></td>
<td align="center"><strong>43.5</strong></td>
<td align="center"><strong>66.6</strong></td>
<td align="center"><strong>67.5</strong></td>
<td align="center"><strong>69.2</strong></td>
<td align="center">5.6</td>
<td align="center"><strong>56.7</strong></td>
<td align="center"><strong>43.8</strong></td>
</tr>
<tr>
<td><strong>Exaone-3.5-2.4B-inst</strong></td>
<td align="center">68.7</td>
<td align="center"><strong>58.5</strong></td>
<td align="center">87.2</td>
<td align="center">38.0</td>
<td align="center">62.5</td>
<td align="center">60.3</td>
<td align="center">64.1</td>
<td align="center">7.4</td>
<td align="center">38.5</td>
<td align="center">36.7</td>
</tr>
<tr>
<td><strong>Mi:dm 2.0-Mini-inst</strong></td>
<td align="center">69.5</td>
<td align="center">55.4</td>
<td align="center">80.5</td>
<td align="center">42.5</td>
<td align="center">61.9</td>
<td align="center">61.7</td>
<td align="center">64.5</td>
<td align="center"><strong>7.7</strong></td>
<td align="center">39.9</td>
<td align="center">37.4</td>
</tr>

<!-- Visual Spacer -->
<tr><td colspan="11">&nbsp;</td></tr>

<!-- Large Models -->
<tr>
<td><strong>Qwen3-14B</strong></td>
<td align="center"><strong>86.7</strong></td>
<td align="center"><strong>74.0</strong></td>
<td align="center">93.9</td>
<td align="center">52.0</td>
<td align="center"><strong>76.8</strong></td>
<td align="center"><strong>77.2</strong></td>
<td align="center"><strong>75.4</strong></td>
<td align="center">6.4</td>
<td align="center"><strong>64.5</strong></td>
<td align="center"><strong>48.8</strong></td>
</tr>
<tr>
<td><strong>Llama-3.1-8B-inst</strong></td>
<td align="center">59.9</td>
<td align="center">48.6</td>
<td align="center">77.4</td>
<td align="center">31.5</td>
<td align="center">51.5</td>
<td align="center">40.1</td>
<td align="center">26.0</td>
<td align="center">2.4</td>
<td align="center">30.9</td>
<td align="center">19.8</td>
</tr>
<tr>
<td><strong>Exaone-3.5-7.8B-inst</strong></td>
<td align="center">73.5</td>
<td align="center">61.9</td>
<td align="center">92.0</td>
<td align="center">44.0</td>
<td align="center">67.2</td>
<td align="center">64.6</td>
<td align="center">60.3</td>
<td align="center"><strong>8.6</strong></td>
<td align="center">49.7</td>
<td align="center">39.5</td>
</tr>
<tr>
<td><strong>Mi:dm 2.0-Base-inst</strong></td>
<td align="center">86.5</td>
<td align="center">70.8</td>
<td align="center"><strong>95.2</strong></td>
<td align="center"><strong>53.0</strong></td>
<td align="center">76.1</td>
<td align="center">75.1</td>
<td align="center">73.0</td>
<td align="center"><strong>8.6</strong></td>
<td align="center">52.9</td>
<td align="center">44.8</td>
</tr>
</table>

`*` indicates KT proprietary evaluation resources.

<br>


#### English

<table>
<tr>
<th rowspan="2" align="center">Model</th>
<th align="center">Instruction</th>
<th colspan="4" align="center">Reasoning</th>
<th align="center">Math</th>
<th align="center">Coding</th>
<th colspan="3" align="center">General Knowledge</th>
</tr>
<tr>
<th align="center">IFEval</th>
<th align="center">BBH</th>
<th align="center">GPQA</th>
<th align="center">MuSR</th>
<th align="center">Avg.</th>
<th align="center">GSM8K</th>
<th align="center">MBPP+</th>
<th align="center">MMLU-pro</th>
<th align="center">MMLU</th>
<th align="center">Avg.</th>
</tr>

<!-- Small Models -->
<tr>
<td><strong>Qwen3-4B</strong></td>
<td align="center">79.7</td>
<td align="center"><strong>79.0</strong></td>
<td align="center"><strong>39.8</strong></td>
<td align="center"><strong>58.5</strong></td>
<td align="center"><strong>59.1</strong></td>
<td align="center"><strong>90.4</strong></td>
<td align="center">62.4</td>
<td align="center">-</td>
<td align="center"><strong>73.3</strong></td>
<td align="center"><strong>73.3</strong></td>
</tr>
<tr>
<td><strong>Exaone-3.5-2.4B-inst</strong></td>
<td align="center"><strong>81.1</strong></td>
<td align="center">46.4</td>
<td align="center">28.1</td>
<td align="center">49.7</td>
<td align="center">41.4</td>
<td align="center">82.5</td>
<td align="center">59.8</td>
<td align="center">-</td>
<td align="center">59.5</td>
<td align="center">59.5</td>
</tr>
<tr>
<td><strong>Mi:dm 2.0-Mini-inst</strong></td>
<td align="center">73.6</td>
<td align="center">44.5</td>
<td align="center">26.6</td>
<td align="center">51.7</td>
<td align="center">40.9</td>
<td align="center">83.1</td>
<td align="center"><strong>60.9</strong></td>
<td align="center">-</td>
<td align="center">56.5</td>
<td align="center">56.5</td>
</tr>

<tr><td colspan="11">&nbsp;</td></tr>

<!-- Large Models -->
<tr>
<td><strong>Qwen3-14B</strong></td>
<td align="center">83.9</td>
<td align="center"><strong>83.4</strong></td>
<td align="center"><strong>49.8</strong></td>
<td align="center"><strong>57.7</strong></td>
<td align="center"><strong>63.6</strong></td>
<td align="center">88.0</td>
<td align="center">73.4</td>
<td align="center"><strong>70.5</strong></td>
<td align="center"><strong>82.7</strong></td>
<td align="center"><strong>76.6</strong></td>
</tr>
<tr>
<td><strong>Llama-3.1-8B-inst</strong></td>
<td align="center">79.9</td>
<td align="center">60.3</td>
<td align="center">21.6</td>
<td align="center">50.3</td>
<td align="center">44.1</td>
<td align="center">81.2</td>
<td align="center"><strong>81.8</strong></td>
<td align="center">47.6</td>
<td align="center">70.7</td>
<td align="center">59.2</td>
</tr>
<tr>
<td><strong>Exaone-3.5-7.8B-inst</strong></td>
<td align="center">83.6</td>
<td align="center">50.1</td>
<td align="center">33.1</td>
<td align="center">51.2</td>
<td align="center">44.8</td>
<td align="center">81.1</td>
<td align="center">79.4</td>
<td align="center">40.7</td>
<td align="center">69.0</td>
<td align="center">54.8</td>
</tr>
<tr>
<td><strong>Mi:dm 2.0-Base-inst</strong></td>
<td align="center"><strong>84.0</strong></td>
<td align="center">77.7</td>
<td align="center">33.5</td>
<td align="center">51.9</td>
<td align="center">54.4</td>
<td align="center"><strong>91.6</strong></td>
<td align="center">77.5</td>
<td align="center">53.3</td>
<td align="center">73.7</td>
<td align="center">63.5</td>
</tr>
</table>

<br>

# Usage

### Run on Friendli.AI
You can try our model immediately via `Friendli.AI`. Simply click `Deploy` and then `Friendli Endpoints`.

> [!Note]
> Please note that logging in to `Friendli.AI` is required after your fifth chat interaction.

<p>
  <img src="./assets/image_1.png" alt="Left Image" width="36%" style="display:inline-block; margin-right:2%">
  <img src="./assets/image_2.png" alt="Right Image" width="36%" style="display:inline-block">
</p>

### Run on Your Local Machine
We provide detailed instructions for running Mi:dm 2.0 on your local machine with llama.cpp, LM Studio, and Ollama. Please check our [GitHub repository](https://github.com/K-intelligence-Midm/Midm-2.0) for more information.

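As a preview of what local inference may look like once the GGUF files are published, a typical llama.cpp invocation is sketched below. The GGUF file name is hypothetical; check the GitHub repository for the actual artifacts and quantization variants:

```bash
# Hypothetical GGUF file name — actual names will be announced with the GGUF release
./llama-cli -m Midm-2.0-Base-Instruct-Q4_K_M.gguf \
  -p "KT에 대해 소개해줘" \
  -n 128
```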

### Deployment

To serve Mi:dm 2.0 with [vLLM](https://github.com/vllm-project/vllm) (`>=0.8.0`) and an OpenAI-compatible API:
```bash
vllm serve K-intelligence/Midm-2.0-Base-Instruct
```
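Once the server is up, it can be queried like any OpenAI-compatible endpoint, for example with `curl` (assuming vLLM's default port `8000`):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "K-intelligence/Midm-2.0-Base-Instruct",
        "messages": [
          {"role": "user", "content": "KT에 대해 소개해줘"}
        ],
        "max_tokens": 128
      }'
```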

### Tutorials
To help end users get started with Mi:dm 2.0, we provide comprehensive tutorials on [GitHub](https://github.com/K-intelligence-Midm/Midm-2.0).

<br>
<br>

# More Information

### Limitation

* The training data for both Mi:dm 2.0 models consists primarily of English and Korean. Understanding and generation in other languages are not guaranteed.

* The model is not guaranteed to provide reliable advice in fields that require professional expertise, such as law, medicine, or finance.

* Efforts were made to exclude unethical content (profanity, slurs, bias, and discriminatory language) from the training data. Nevertheless, the model may still produce inappropriate expressions or factual inaccuracies.

### License

Mi:dm 2.0 is licensed under the [MIT License](./LICENSE).

### Contact
Mi:dm 2.0 technical inquiries: midm-llm@kt.com

<br>