Apply for a GPU community grant: Personal project
#1 by mail0000009 - opened
I am hosting a demo of the Veena model by Maya Research and integrating it as an API in an n8n automation workflow to provide low-latency voice generation. CPU inference currently takes over 30 seconds per request, which is too slow for real-time automation. I would like to request ZeroGPU to make this demo usable for the community.