Midm-LLM committed
Commit f155867 · verified · 1 Parent(s): e87c58a

Update README.md

Files changed (1): README.md (+22 −1)
README.md CHANGED
@@ -28,7 +28,7 @@ library_name: transformers
 
 # News 📢
 
-- 🔜 _(Coming Soon!) GGUF format model files will be available soon for easier local deployment._
+- 🔧`2025/10/29`: Added support for function calling on vLLM with the Mi:dm 2.0 parser.
 - 📕`2025/08/08`: Published a technical blog article about Mi:dm 2.0 Model.
 - ⚡️`2025/07/04`: Released Mi:dm 2.0 Model collection on Hugging Face🤗.
 <br>
@@ -527,11 +527,32 @@ We provide a detailed description about running Mi:dm 2.0 on your local machine
 
 ## Deployment
 
+#### Basic Serving
+
 To serve Mi:dm 2.0 using [vLLM](https://github.com/vllm-project/vllm) (`>=0.8.0`) with an OpenAI-compatible API:
 ```bash
 vllm serve K-intelligence/Midm-2.0-Mini-Instruct
 ```
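Once the server above is up, it speaks the standard OpenAI-compatible chat-completions protocol. A minimal stdlib-only sketch of building such a request follows; the address (`localhost:8000`, vLLM's default) and the prompt text are assumptions, not part of the original instructions:

```python
import json
import urllib.request

# Build a chat-completion request for the vLLM server started above.
# The endpoint address and prompt are placeholder assumptions.
payload = {
    "model": "K-intelligence/Midm-2.0-Mini-Instruct",
    "messages": [{"role": "user", "content": "Introduce yourself in one sentence."}],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the server running, send it with e.g.:
#   json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"]
```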
 
+#### With Function Calling
+
+For advanced function-calling tasks, you can serve Mi:dm 2.0 with our own tool parser:
+1. Download the [Mi:dm 2.0 parser file](https://github.com/K-intelligence-Midm/Midm-2.0/blob/main/tutorial/03_open-webui/modelfile/midm_parser.py) and place it in your working directory.
+2. Run the following Docker command to launch the vLLM server with the custom parser:
+```bash
+docker run --rm -it --gpus all -p 8000:8000 \
+  -e HUGGING_FACE_HUB_TOKEN="<YOUR_HUGGINGFACE_TOKEN>" \
+  -v "$(pwd)/midm_parser.py:/custom/midm_parser.py" \
+  vllm/vllm-openai:v0.11.0 \
+  --model K-intelligence/Midm-2.0-Base-Instruct \
+  --enable-auto-tool-choice \
+  --tool-parser-plugin /custom/midm_parser.py \
+  --tool-call-parser midm-parser \
+  --host 0.0.0.0
+```
+
+> [!NOTE]
+> This setup is compatible with `vllm/vllm-openai:v0.8.0` and later, but we strongly recommend `v0.11.0` for the best stability and compatibility with our parser.
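A function-calling request against the parser-enabled server differs from basic serving only in carrying a `tools` array. The sketch below builds one such request with the stdlib; the tool name and schema (`get_weather`) and the endpoint address are made-up illustrations, not part of the original instructions:

```python
import json
import urllib.request

# Build a chat-completion request that exercises function calling.
# The `get_weather` tool is a hypothetical example; with
# --enable-auto-tool-choice the server decides whether to emit a tool call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
payload = {
    "model": "K-intelligence/Midm-2.0-Base-Instruct",
    "messages": [{"role": "user", "content": "What is the weather in Seoul?"}],
    "tools": tools,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the server running, inspect any parsed tool calls via:
#   json.load(urllib.request.urlopen(req))["choices"][0]["message"].get("tool_calls")
```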
 
 ## Tutorials
 To help end-users easily use Mi:dm 2.0, we provide comprehensive tutorials on [GitHub](https://github.com/K-intelligence-Midm/Midm-2.0).