arxiv:2408.11788

DreamFactory: Pioneering Multi-Scene Long Video Generation with a Multi-Agent Framework

Published on Aug 21, 2024
AI-generated summary

DreamFactory is an LLM-based framework that generates long, stylistically coherent videos through multi-agent collaboration and a Key Frames Iteration Design Method.

Abstract

Current video generation models excel at creating short, realistic clips, but struggle with longer, multi-scene videos. We introduce DreamFactory, an LLM-based framework that tackles this challenge. DreamFactory leverages multi-agent collaboration principles and a Key Frames Iteration Design Method to ensure consistency and style across long videos. It utilizes Chain of Thought (CoT) prompting to address uncertainties inherent in large language models. DreamFactory generates long, stylistically coherent, and complex videos. Evaluating such long-form videos presents a challenge of its own, so we propose novel metrics including the Cross-Scene Face Distance Score and the Cross-Scene Style Consistency Score. To further research in this area, we contribute the Multi-Scene Videos Dataset, containing over 150 human-rated videos.
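The cross-scene metrics named above reward videos whose characters and style stay consistent from scene to scene. The paper's abstract does not specify how the scores are computed, so the following is only a minimal sketch of the general idea: take one embedding per scene (e.g., from a hypothetical face-recognition or style encoder, not specified here) and report the mean pairwise cosine distance, where 0 means perfectly consistent scenes.

```python
import numpy as np

def cross_scene_distance(embeddings):
    """Mean pairwise cosine distance between per-scene embeddings.

    `embeddings` is an (n_scenes, dim) array-like; each row is assumed to be
    a feature vector for one scene (e.g., a face or style embedding -- the
    choice of encoder is a hypothetical, not taken from the paper).
    Returns 0.0 for identical scenes; larger values mean less consistency.
    """
    e = np.asarray(embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-normalize rows
    sims = e @ e.T                                    # pairwise cosine similarities
    iu = np.triu_indices(len(e), k=1)                 # unique scene pairs only
    return float(np.mean(1.0 - sims[iu]))             # distance = 1 - similarity

# Three scenes with identical embeddings are perfectly consistent:
print(cross_scene_distance([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]))  # → 0.0
```

A lower score under such a metric would indicate that the same faces (or the same visual style) recur across scenes, which is the property the paper's Key Frames Iteration Design Method is meant to preserve.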
