Apply for a GPU community grant: Academic project
SynLayers is a research project that decomposes a real-world image into editable, semantically meaningful transparent RGBA layers. Our public Space demonstrates a two-stage pipeline that combines vision-language understanding for caption and bounding-box prediction with diffusion-based layer decomposition, allowing users to upload an image and obtain structured layered results for editing, analysis, and content creation.
Project Page: https://yanghaolin0526.github.io/SynLayers/
GitHub Repository: https://github.com/YangHaolin0526/SynLayers
The associated research paper is currently under review. We hope to make the demo publicly accessible to researchers, creators, and the broader AIGC community.
For a public demo, CPU-only inference is not practical: the full pipeline is computationally intensive, so on CPU it either fails to run or delivers an unacceptably poor user experience. A community GPU grant would allow us to run the complete system reliably on Hugging Face Spaces, significantly reduce latency, and make SynLayers more accessible to researchers, creators, and the broader AIGC community.
The current pipeline requires a high-memory GPU for stable inference (more than 70 GB of VRAM during execution). Access to an A100 Large or a similar high-memory GPU would be greatly appreciated.
Looking forward to your reply!
Hi @SynLayers , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.
If you can, we ask that you upgrade to Pro ($9/month) to enjoy higher ZeroGPU quota and other features like Dev Mode, Private Storage, and more: hf.co/pro
Thank you for granting ZeroGPU. We updated requirements.txt to use a supported torch version, but when switching back to ZeroGPU, the UI shows: “Only Hugging Face team members can update the sleep time of a Space with a community grant.” Could you help us enable ZeroGPU again or reset the hardware/sleep-time configuration?
Done!
Thanks for your support. Our demo relies on multiple large vision-language models, and model initialization/loading alone can exceed the current ZeroGPU execution time limit, so the application may time out before inference even begins. We would greatly appreciate an extension of the GPU execution time limit for our Space; it would significantly improve the usability and stability of the demo for the community.
@SynLayers I haven't looked at your code in detail, but if it's loading models lazily inside the GPU function, that's a common anti-pattern on Spaces and would explain the timeout.
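For reference, the usual ZeroGPU pattern is to load model weights once at startup (on CPU, at module import time) and keep only the actual inference inside the `@spaces.GPU`-decorated function; the decorator also accepts a `duration` argument to request a longer GPU window. The sketch below illustrates this pattern with illustrative names (`generate` and its body are placeholders, not SynLayers code); the `ImportError` fallback shim is only there so the script also runs outside Spaces:

```python
# Minimal sketch of the recommended ZeroGPU structure (illustrative names).
try:
    import spaces  # available on Hugging Face Spaces
except ImportError:
    # Local fallback so the same script runs off-Spaces: a no-op decorator
    # with the same calling convention as spaces.GPU.
    class spaces:
        @staticmethod
        def GPU(fn=None, duration=60):
            if fn is None:
                return lambda f: f  # used as @spaces.GPU(duration=...)
            return fn               # used as bare @spaces.GPU

# Startup (module import): load all models here, NOT inside the GPU
# function, e.g. something like:
#   pipe = DiffusionPipeline.from_pretrained("...")   # hypothetical
# Loading here runs once, before any ZeroGPU time limit applies.

@spaces.GPU(duration=120)  # request a longer window for slow inference
def generate(prompt):
    # Only GPU work belongs here: move tensors/models to "cuda" and run
    # inference. Placeholder return for the sketch:
    return f"result for {prompt}"
```

With this structure the per-call GPU time covers only inference, not weight loading, which is typically what resolves the startup timeouts described above.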
FYI, a ZeroGPU coding skill for AI agents is currently in review at https://github.com/huggingface/skills/pull/138. It's not merged yet, but it should be helpful for adapting your Space to ZeroGPU, so feel free to try it with your coding agent of choice.