correction when using vllm 0.12

#3
by nebi - opened

In the params.json file you seemingly need to rename the "quantization" key to "quantization_config", otherwise an error is thrown. This did not happen prior to vLLM 0.12.
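As a sketch, the rename can be scripted instead of edited by hand. The file path and the example contents of params.json below are assumptions; adjust them to your model directory:

```python
import json

def fix_params(path="params.json"):
    # path is an assumption; point it at the params.json in your model directory
    with open(path) as f:
        params = json.load(f)
    # vLLM 0.12 reportedly expects "quantization_config" rather than "quantization"
    if "quantization" in params and "quantization_config" not in params:
        params["quantization_config"] = params.pop("quantization")
        with open(path, "w") as f:
            json.dump(params, f, indent=2)
```

Running this once on the model directory is idempotent: if the key has already been renamed, the file is left untouched.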
