Questions regarding LoRA compatibility (Base vs Turbo) and Ecosystem support
First off, massive thanks to the team for releasing this model! The specs look incredible (6B params + Qwen2.5-VL), and having an open-source editing model with this level of efficiency is exactly what the community needs right now. Can't wait to test it thoroughly.
I have a couple of questions regarding training and integration:
LoRA Compatibility: If I train a LoRA on the base LongCat-Image-Edit model, will it remain compatible and effective when applied to this Turbo (distilled) version? Or does distillation shift the weights enough that LoRAs need to be trained directly on the Turbo checkpoint?
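For anyone else wondering why this is even a question: mechanically, a LoRA is just an additive low-rank delta, so it can always be loaded onto either checkpoint; the open question is how well it transfers once distillation has moved the weights. A toy NumPy sketch (random matrices, not the model's actual parameters — purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size, not the real model dimension

# Hypothetical base weights, and Turbo weights = base + distillation drift.
W_base = rng.normal(size=(d, d))
W_turbo = W_base + 0.05 * rng.normal(size=(d, d))

# A LoRA trained against the base model is a low-rank delta B @ A.
r = 2  # toy LoRA rank
B = rng.normal(size=(d, r))
A = rng.normal(size=(r, d))
delta = B @ A

x = rng.normal(size=d)
out_base = (W_base + delta) @ x   # LoRA on the weights it was trained for
out_turbo = (W_turbo + delta) @ x # same delta dropped onto Turbo weights

# The delta applies either way; how *effective* it stays depends on how
# far distillation moved the weights it was fit against.
print(np.linalg.norm(out_base - out_turbo))
```

In practice community experience with other distilled models suggests base-trained LoRAs often still work at reduced strength, but it would be great to hear what the team observed.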
Ecosystem Support: Are there any plans for official support or integration scripts for popular community tools? Specifically, we are looking forward to seeing this in:
- ComfyUI (Native nodes or custom node support)
- Training frameworks: SimpleTuner, AI-Toolkit (Ostris), musubi-tuner, and DiffSynth-Studio.
Getting this into the hands of fine-tuners and workflow builders via these tools would be huge for adoption.
Thanks again for your hard work!