When I run on an A800, it throws this error: `ValueError: FP8 quantized models is only supported on GPUs with compute capability >= 8.9 (e.g 4090/H100), actual = 8.0`

#113
by KYYYDS - opened

Using LLaMA-Factory with LoRA on deepseek-r1-0528.
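The error itself explains the situation: native FP8 (e4m3/e5m2) kernels require compute capability 8.9 or higher (Ada/Hopper), while the A800 is an Ampere-class card reporting 8.0. A minimal sketch for verifying what your GPU reports, assuming PyTorch with CUDA is installed:

```python
import torch

# FP8 weight kernels need compute capability >= 8.9 (e.g. RTX 4090, H100).
FP8_MIN_CAPABILITY = (8, 9)

if torch.cuda.is_available():
    # Returns a (major, minor) tuple, e.g. (8, 0) on an A800/A100.
    cap = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {cap[0]}.{cap[1]}")
    if cap < FP8_MIN_CAPABILITY:
        print("This GPU cannot run FP8 quantized weights natively; "
              "use a BF16/FP16 checkpoint instead.")
else:
    print("No CUDA device visible.")
```

On an A800 this prints `Compute capability: 8.0`, confirming the check that raises the `ValueError`. The practical workaround is to load a non-FP8 (BF16/FP16) variant of the model rather than the FP8 checkpoint.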
