Single RTX Pro 6000 users
Hi - many thanks to Unsloth for this exceptionally fast quant job!!
Does anyone know whether this fits on a single RTX Pro 6000 with 96 GB VRAM? (On Reddit I've seen some claim that it should work with vLLM.)
If it fits:
-) what kind of pp/tg (prompt processing / token generation speed) can one expect on sm120 once the context is filled with up to 20k tokens?
-) which inference engine gives the best performance with reliable tool support on a single RTX Pro 6000? Can you share your launch/docker command?
Thanks!!
Absolutely, it rips! On an RTX 6000 you get 80-120 tok/s, which holds up well at long context and with concurrent requests. Prompt processing is insane at 6K-10K tok/s: pasting a 15-page doc and asking for a summary is a two-second thing.
That's why I'm excited about the coder version: if you're developing (sub-)agentic tools, for example, it could allow very fast local iteration, provided it's good enough to handle the test tasks, on top of being a decent coding assistant and doing IDE auto-complete while at it.
Here's my local vllm command, which uses around 92 of 96 GB:
vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 \
--port ${PORT} \
--enable-chunked-prefill \
--max-model-len 262144 \
--max-num-seqs 4 \
--max-num-batched-tokens 16384 \
--tool-call-parser hermes \
--chat-template-content-format string \
--enable-auto-tool-choice \
--disable-custom-all-reduce \
--gpu-memory-utilization 0.95
Ok, tried it, and with vllm 0.16.0rc1.dev158+g2a99c5a6c.precompiled the suggested launch command just led to an OOM error.
This works however:
vllm serve /path/to/unsloth/Qwen3-Coder-Next-FP8-Dynamic \
--port ${PORT} \
--max-model-len 200000 \
--max-num-seqs 2 \
--tool-call-parser qwen3_coder \
--enable-auto-tool-choice \
--gpu-memory-utilization 0.93 \
--enable-sleep-mode \
--attention-backend FLASHINFER \
--served-model-name qwen3-coder-next \
--enable-prefix-caching
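Since `--enable-auto-tool-choice` with `--tool-call-parser qwen3_coder` exposes the standard OpenAI-style tools interface, a quick way to smoke-test tool calling is to POST a request like the one below to `/v1/chat/completions`. This is just a sketch: the `get_weather` tool and the prompt are made up for illustration, and the port and model name must match whatever you launched with.

```python
import json

# Minimal OpenAI-style chat request with a single (hypothetical) tool,
# for smoke-testing tool calling on the server launched above.
payload = {
    "model": "qwen3-coder-next",  # matches --served-model-name
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",
}

# POST the JSON body to http://localhost:${PORT}/v1/chat/completions
body = json.dumps(payload)
```

If the parser is working, the response should contain a `tool_calls` entry naming `get_weather` instead of a plain text answer.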
I am seeing about 8000 tokens/sec pp and 130 tokens/sec tg on a single concurrent request at a context size of about 20k tokens (RTX Pro 6000 @ 300W).
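For anyone translating those rates into interactive latency: time to first token is roughly prompt length divided by the pp rate, and the rest is output length divided by the tg rate. A back-of-envelope sketch using the numbers above (the 500-token response length is an assumption for illustration):

```python
# Rough single-request latency estimate from the reported throughput numbers.
pp_rate = 8000        # prompt processing, tokens/sec (reported above)
tg_rate = 130         # token generation, tokens/sec (reported above)

prompt_tokens = 20_000
output_tokens = 500   # assumed response length, for illustration

ttft = prompt_tokens / pp_rate           # time to first token ~2.5 s
total = ttft + output_tokens / tg_rate   # end-to-end ~6.3 s
print(f"TTFT ~{ttft:.1f}s, total ~{total:.1f}s")
```

So even a fully loaded 20k context only adds a couple of seconds before generation starts.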
About 50 tool calls have succeeded so far without errors. The model makes a very good first impression!
Oh nice! Sorry I didn't respond earlier - this is very cool!
@lightenup
Yes, gpu-memory-utilization 0.93 really is critical, thanks. Even 0.95 fails with an FP8 K/V cache. It seems this model needs extra headroom beyond the cache itself.
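For a rough sense of the K/V side of the budget: per-sequence cache size scales with the number of full-attention layers, KV heads, head dimension, context length, and bytes per element (1 for FP8). A sketch with purely illustrative parameters, NOT the actual Qwen3-Next config (it's a hybrid architecture where only a subset of layers keeps a full-attention KV cache):

```python
def kv_cache_gib(full_attn_layers, kv_heads, head_dim, seq_len, bytes_per_elem):
    """Per-sequence KV-cache size: 2 (K and V) * layers * heads * dim * tokens."""
    return 2 * full_attn_layers * kv_heads * head_dim * seq_len * bytes_per_elem / 2**30

# Illustrative numbers only -- not the real model config.
gib = kv_cache_gib(full_attn_layers=12, kv_heads=2, head_dim=256,
                   seq_len=200_000, bytes_per_elem=1)
print(f"~{gib:.1f} GiB per full-length sequence")
```

The point is that the cache itself is small relative to 96 GB; the OOM at 0.95 is more likely the engine's activation/compilation overhead, which is why the utilization cap matters.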