Example Fine-Tuning Script
Hi OmLab team,
Thank you for sharing the omlab/omdet-turbo-swin-tiny-hf model — it's an exciting contribution to the open-vocabulary object detection space.
I'm currently working on fine-tuning this model on a domain-specific dataset using attribute-rich captions, and I'd like to train the model with LoRA for efficient adaptation. However, since the model has a custom architecture, an official or minimal working example of a fine-tuning script would be very helpful.
Could you please provide an example or notebook that demonstrates how to fine-tune this model, particularly with:
A custom dataset in COCO-style format (or your preferred format),
LoRA integration (e.g., via peft),
Selective fine-tuning of specific model components (e.g., the decoder or detection head, with the backbone frozen),
Evaluation or inference pipeline after training.
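For context, here is the rough starting point I have so far. It is a minimal, self-contained LoRA sketch in plain PyTorch rather than peft, because I wasn't sure which module names to pass as peft's `target_modules` for this architecture; the small `nn.Sequential` stand-in and all names here are illustrative only, and the real model would instead be loaded via transformers (e.g., `OmDetTurboForObjectDetection.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")`).

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = W x + (alpha / r) * B(A(x)), with W frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))


def apply_lora(model: nn.Module, r: int = 8) -> nn.Module:
    """Recursively replace every nn.Linear in `model` with a LoRA-wrapped copy."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, LoRALinear(child, r=r))
        else:
            apply_lora(child, r=r)
    return model


# Illustrative stand-in for a detection head; not the actual OmDet-Turbo model.
model = apply_lora(nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4)))

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total}")
```

An example showing which OmDet-Turbo submodules are safe to target this way (or the equivalent peft `LoraConfig`) would be exactly the missing piece.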
Such a script would be a great starting point for the community and help others quickly adapt the model to different domains.
Thanks in advance for your support!
Best regards,
Computer Engineer Onur ULU