
JoyAI-LLM Flash Deployment Guide

This guide offers a selection of deployment command examples for JoyAI-LLM Flash; these may not be the optimal configurations. Given the rapid evolution of inference engines, we recommend checking their official documentation for the latest updates to ensure peak performance.

Support for JoyAI-LLM Flash’s dense MTP architecture is currently being integrated into vLLM and SGLang. Until those changes land in a stable release, please use the nightly Docker images below for access to these features.

vLLM Deployment

Here is an example of serving this model on a single H200 node with TP8 via vLLM:

  1. Pull the Docker image.
docker pull jdopensource/joyai-llm-vllm:v0.13.0-joyai_llm_flash
  2. Launch the JoyAI-LLM Flash model with dense MTP.
vllm serve ${MODEL_PATH} --tp 8 --trust-remote-code \
  --tool-call-parser qwen3_coder --enable-auto-tool-choice \
  --speculative-config $'{"method": "mtp", "num_speculative_tokens": 3}'
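The `$'…'` wrapper around the `--speculative-config` value is bash ANSI-C quoting that passes the braces through literally; the value itself must be valid JSON. A minimal sketch for sanity-checking the string before launching (not part of vLLM, just plain JSON parsing):

```python
import json

# The exact string passed to --speculative-config above.
spec_config = '{"method": "mtp", "num_speculative_tokens": 3}'

# json.loads raises json.JSONDecodeError if the string is malformed,
# which catches quoting mistakes before the server ever starts.
parsed = json.loads(spec_config)
print(parsed["method"], parsed["num_speculative_tokens"])  # → mtp 3
```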

Key notes:

  • --tool-call-parser qwen3_coder: Required for enabling tool calling.
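Once the server is up, tool calling is exercised through the OpenAI-compatible `/v1/chat/completions` endpoint. A sketch of a request body is below; the model name, the `get_weather` function schema, and the default vLLM port (8000) are all assumptions for illustration:

```python
import json

# Hypothetical tool schema in the OpenAI "tools" format; swap in your own.
payload = {
    "model": "joyai-llm-flash",  # assumed served model name
    "messages": [{"role": "user", "content": "What is the weather in Beijing?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",
}

body = json.dumps(payload)
# POST this body to http://localhost:8000/v1/chat/completions, e.g.:
#   curl -s http://localhost:8000/v1/chat/completions \
#     -H 'Content-Type: application/json' -d @request.json
```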

SGLang Deployment

Similarly, here is an example of running with TP8 on a single H200 node via SGLang:

  1. Pull the Docker image.
docker pull jdopensource/joyai-llm-sglang:v0.5.8-joyai_llm_flash
  2. Launch the JoyAI-LLM Flash model with dense MTP.
python3 -m sglang.launch_server --model-path ${MODEL_PATH} --tp-size 8 --trust-remote-code \
  --tool-call-parser qwen3_coder \
  --speculative-algorithm EAGLE --speculative-draft-model-path ${MTP_MODEL_PATH} \
  --speculative-num-steps 2 --speculative-eagle-topk 2 --speculative-num-draft-tokens 3
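Loading the weights can take a while after launch, so scripts that issue requests immediately tend to fail. A small readiness poll is a common pattern; the sketch below assumes the server exposes a `/health` endpoint (SGLang defaults to port 30000, vLLM to 8000):

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url: str, timeout: float = 300.0, interval: float = 5.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not accepting connections yet; keep polling
        time.sleep(interval)
    return False

# Example: block until the SGLang server is ready before sending requests.
# ready = wait_for_server("http://127.0.0.1:30000/health")
```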

Key notes:

  • --tool-call-parser qwen3_coder: Required when enabling tool usage.