Any plan to release benchmark reproduction instructions, or Hugging Face support?
#4 by seastar105 - opened
Qwen-Omni supports Hugging Face Transformers, so we can run inference or verify the benchmark settings with only one GPU. Is there any plan to support Transformers-only inference of the Omni model, without OmniServe? Or could you provide an example?