LLaMA-Factory SFT fine-tuning


Has anyone tried SFT fine-tuning mimo-v2-flash? I attempted SFT fine-tuning with LLaMA-Factory.
When configured for full-parameter fine-tuning, it fails with: Quantized models can only be used for the LoRA or OFT tuning.

Following the MiMo-V2-flash LoRA example in llama-factory, the run fails with: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for QuantizationMethod.FP8
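For reference, the config I launched was roughly the following, adapted from the LLaMA-Factory LoRA SFT examples (the model id, dataset, and template values here are placeholders, not verified for MiMo-V2-flash):

```yaml
### model
model_name_or_path: XiaomiMiMo/MiMo-V2-Flash  # placeholder model id
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all

### dataset
dataset: identity        # placeholder dataset
template: default        # placeholder; the correct MiMo template is unclear to me
cutoff_len: 2048

### output
output_dir: saves/mimo-v2-flash/lora/sft

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```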

Has anyone else run into this? What adjustments are needed here?
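My current guess is that the FP8 checkpoint has to be dequantized to bf16 offline before LLaMA-Factory can train on it. Below is a minimal sketch of what I mean, assuming DeepSeek-V3-style block-wise FP8 (float8_e4m3fn weights paired with `*_scale_inv` tensors, 128x128 scaling blocks) and a single safetensors shard; none of this is verified for MiMo-V2-flash:

```python
# Sketch: cast a block-wise FP8 safetensors shard to bf16 so the checkpoint
# is no longer marked as quantized. Assumptions: single shard, 128x128
# scaling blocks, DeepSeek-V3-style naming (weight + weight_scale_inv).
import torch
from safetensors.torch import load_file, save_file

BLOCK = 128  # assumed quantization block size

state = load_file("model.safetensors")  # single-shard assumption
out = {}
for name, tensor in state.items():
    if name.endswith("_scale_inv"):
        continue  # consumed together with its weight below
    if tensor.dtype == torch.float8_e4m3fn:
        w = tensor.to(torch.float32)
        s = state[name + "_scale_inv"].to(torch.float32)
        # Expand per-block scales to per-element scales, then trim the
        # padding introduced by the block layout.
        s = s.repeat_interleave(BLOCK, dim=0)[: w.shape[0]]
        s = s.repeat_interleave(BLOCK, dim=1)[:, : w.shape[1]]
        out[name] = (w * s).to(torch.bfloat16)
    else:
        out[name] = tensor
save_file(out, "model.bf16.safetensors")
```

The `quantization_config` entry in config.json would presumably also need to be removed so transformers stops treating the checkpoint as FP8. Can anyone confirm whether this is the right direction?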
