Apply for a GPU community grant: Academic project
LLMPopcorn: LLM-Assisted Popular Micro-Video Generation (ICASSP 2026)
We are a research group from the University of Glasgow. This Space is the official demo for our ICASSP 2026 paper "LLMPopcorn: Exploring LLMs as Assistants for Popular Micro-video Generation" (arXiv: 2502.12945).
What the demo does:
The demo takes a user's video idea as input and automatically generates a trending-optimized title and a 3-second video clip using a text-to-video diffusion model. It offers two modes:
Basic – direct LLM generation
PE (Prompt Enhancement) – RAG + Chain-of-Thought reasoning over 19,560 real-world TikTok-style videos from the MicroLens dataset
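To illustrate, the PE mode's retrieval step can be sketched roughly as follows. This is a minimal toy illustration, not the actual pipeline code: the function names are invented here, and word-overlap similarity is a stand-in for the learned embeddings the real demo uses to retrieve over the MicroLens corpus.

```python
# Toy sketch of the PE mode's retrieval step: given a user's video idea,
# find the most similar reference titles from a (tiny, made-up) corpus,
# then splice them into a RAG + CoT style prompt for the LLM.
# The real pipeline retrieves over 19,560 MicroLens videos with
# embeddings; Jaccard word overlap here is for illustration only.

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def retrieve(idea: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus titles most similar to the idea."""
    return sorted(corpus, key=lambda t: jaccard(idea, t), reverse=True)[:k]

def build_pe_prompt(idea: str, corpus: list[str]) -> str:
    """Assemble an enhanced prompt from retrieved examples."""
    examples = "\n".join(f"- {t}" for t in retrieve(idea, corpus))
    return (
        f"Popular micro-video titles similar to the idea:\n{examples}\n\n"
        f"Idea: {idea}\n"
        "Think step by step about what makes these titles popular, "
        "then write a trending-optimized title and a video prompt."
    )

if __name__ == "__main__":
    toy_corpus = [
        "cat jumps into a box in slow motion",
        "street food tour in Bangkok",
        "dog reacts to magic trick",
    ]
    print(build_pe_prompt("a cat doing a magic trick", toy_corpus))
```

The retrieved examples ground the LLM's chain-of-thought in what has actually been popular, which is the intuition behind the PE mode.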
Why we need GPU resources:
The pipeline requires running two large models simultaneously:
Text-to-video model: Lightricks/LTX-Video (2B parameters) for generating the actual video clip
LLM: meta-llama/Llama-3.1-8B-Instruct (8B parameters) for title and prompt generation with RAG + CoT reasoning
Both models require GPU for any reasonable inference speed. Without GPU, the demo is non-functional for end users.
Impact:
This is an academic open-science project. The demo, datasets, and code are fully open-sourced on GitHub (GAIR-Lab/LLMPopcorn) and Hugging Face. We hope this demo helps the research community explore LLM-driven video content creation. The paper has already attracted interest from the HF team (@NielsRogge reached out about submitting it to hf.co/papers).
We would greatly appreciate a ZeroGPU grant to make this demo publicly accessible.
Hi @junchenfu , we've assigned ZeroGPU to this Space. Please check the compatibility and usage sections of this page so your Space can run on ZeroGPU.
If you can, we ask that you upgrade to Pro ($9/month) to enjoy higher ZeroGPU quota and other features like Dev Mode, Private Storage, and more: hf.co/pro