Cannot deploy with vLLM V1 on p4de.24xlarge using --tensor-parallel-size 8
Hello Team,
I am unable to deploy the FP8 model; it seems that the sharding does not work.
Is anyone else seeing this?
Note that I am able to deploy the unquantized Qwen3-Coder-Next on the same instance without a problem.
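For reference, the command I am running is roughly the following (a sketch; the model name matches what I use later in this thread, and any other flags are left at their defaults):

vllm serve Qwen/Qwen3-Coder-Next-FP8 \
--tensor-parallel-size 8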
Here is the config of the instance.
$ nvidia-smi
NVIDIA-SMI 570.133.20 Driver Version: 570.133.20 CUDA Version: 12.8
...
| 0 NVIDIA A100-SXM4-80GB On | 00000000:xx:xx.x Off | 0 |
| N/A 52C P0 76W / 400W | 0MiB / 81920MiB | 0% Default |
...
THE ERROR I AM SEEING:
Detected some but not all shards of model.layers.0.linear_attn.in_proj are quantized. All shards of fused layers to have the same precision.
I have a question which might be unrelated: why 8 x A100? What sort of token capacity / tokens per second are you planning to process?
Just trying to push the limit in terms of context size and generation speed, no specific goal in mind.
@HenryGuillaumet
It's not possible to run FP8 on A100; you have to use Blackwell, Hopper, or Ada GPUs.
Examples: H100, H200, L40S, B200
Do you mind if I ask a question: we are a new stack helping people run inference faster than vanilla deployment. A one-sentence pitch would be:
open-weight inference: one-click deployment, automatic optimization, and reliable capacity so teams ship faster, pay less per outcome, and don’t think about infrastructure.
Is this something you would care about, or that would solve a problem of yours?
Thanks for your answer; however, it is possible to run FP8 on A100. It falls back to Marlin, which is less optimized, but it definitely works, as I was able to run other FP8 models.
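For anyone who wants to verify this on their own hardware: native FP8 tensor cores require compute capability 8.9 (Ada) or 9.0+ (Hopper/Blackwell), while A100 is 8.0, which is why vLLM falls back to the Marlin weight-only kernel. A quick check, assuming a driver recent enough to expose compute_cap:

nvidia-smi --query-gpu=name,compute_cap --format=csv
# A100-SXM4-80GB reports 8.0 -> Marlin fallback; H100 reports 9.0 and L40S 8.9 -> native FP8 kernels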
I am not interested, thanks.
Hi
@HenryGuillaumet
I ran on 2x A100 and did not have any issue. You were right about the Marlin kernels: it did fall back, but with a warning that heavy tasks may be slower.
Speed is around 15 tokens per second; it will be lower at larger context lengths.
CUDA_VISIBLE_DEVICES=0,1 vllm serve Qwen/Qwen3-Coder-Next-FP8 \
--tensor-parallel-size 2 \
--max-num-seqs 400 \
--max-model-len 15000 \
--disable-custom-all-reduce
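
Once the server finishes loading, a quick sanity check against the OpenAI-compatible API (assuming the default port 8000):

curl http://localhost:8000/v1/models
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "Qwen/Qwen3-Coder-Next-FP8", "prompt": "def fib(n):", "max_tokens": 64}'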

