---
license: mit
tags:
- text-to-video
- prompt-engineering
- video-generation
- llm
- rag
- research
datasets:
- junchenfu/llmpopcorn_prompts
pipeline_tag: text-generation
---
# LLMPopcorn Usage Instructions

Welcome to LLMPopcorn! This guide will help you generate video titles and prompts, as well as create AI-generated videos based on those prompts.
## Prerequisites

### Install Required Python Packages

Before running the scripts, make sure the necessary Python packages are installed:

```bash
pip install torch transformers diffusers tqdm numpy pandas sentence-transformers faiss-cpu openai huggingface_hub safetensors
```
### Download the Dataset

Download the MicroLens dataset and place it in the `Microlens` folder; it is required by `PE.py`.
## Step 1: Generate Video Titles and Prompts

To generate video titles and prompts, run the `LLMPopcorn.py` script:

```bash
python LLMPopcorn.py
```

To enhance LLMPopcorn, run the `PE.py` script:

```bash
python PE.py
```
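Under the hood, this step asks an LLM for a title and a matching text-to-video prompt. A minimal sketch of what such a prompt-construction helper might look like is below; the template and the function name `build_generation_prompt` are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch: assemble an instruction that asks an LLM for a
# video title plus a text-to-video prompt. The wording of the template
# is an assumption, not the prompt used by LLMPopcorn.py.

def build_generation_prompt(topic: str) -> str:
    """Build an instruction string to send to an LLM."""
    return (
        "You are a short-video creator.\n"
        f"Topic: {topic}\n"
        "Return a catchy video title and a detailed text-to-video prompt."
    )

print(build_generation_prompt("street food in Tokyo"))
```

The returned string would then be sent to an LLM (e.g. via the `openai` package installed above).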
## Step 2: Generate AI Videos

To create AI-generated videos, run the `generating_images_videos_three.py` script:

```bash
python generating_images_videos_three.py
```
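If you have a batch of prompts from Step 1, a small wrapper can drive the script once per prompt. Note that passing the prompt as a `--prompt` command-line argument is an assumption about how `generating_images_videos_three.py` is invoked; check the script's own argument parsing:

```python
# Hypothetical batch driver. The --prompt flag is an assumption, not a
# documented interface of generating_images_videos_three.py.
import shlex

def build_commands(prompts):
    """Build one generation command per prompt (not executed here)."""
    return [
        ["python", "generating_images_videos_three.py", "--prompt", p]
        for p in prompts
    ]

for cmd in build_commands(["a cat surfing a wave", "timelapse of a city at night"]):
    print(shlex.join(cmd))  # e.g. feed each list to subprocess.run(cmd)
```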
## Step 3: Clone the Evaluation Code

Clone the MMRA evaluation repository, then follow the instructions there to evaluate the generated videos.
## Tutorial: Using the Prompts Dataset

You can easily download and use the structured prompts directly from Hugging Face:

### 1. Install `datasets`

```bash
pip install datasets
```
2. Load the Dataset in Python
from datasets import load_dataset
# Load the LLMPopcorn prompts
dataset = load_dataset("junchenfu/llmpopcorn_prompts")
# Access the data (abstract or concrete)
for item in dataset["train"]:
print(f"Type: {item['type']}, Prompt: {item['prompt']}")
This dataset contains both abstract and concrete prompts, which you can use as input for the video generation scripts in Step 2.
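Once loaded, you may want to split the prompts by type before feeding them to Step 2. The sketch below uses the same `type`/`prompt` field names as the loop above; the records are mocked here so the snippet runs offline (with the real dataset, iterate `dataset["train"]` instead):

```python
# Offline sketch: mock records mimic the dataset schema ('type' and
# 'prompt' fields). Swap in dataset["train"] for the real data.
records = [
    {"type": "abstract", "prompt": "a dream of flying over water"},
    {"type": "concrete", "prompt": "a red bicycle leaning on a brick wall"},
]

def split_by_type(items):
    """Group prompts into abstract vs. concrete buckets."""
    buckets = {"abstract": [], "concrete": []}
    for item in items:
        buckets[item["type"]].append(item["prompt"])
    return buckets

buckets = split_by_type(records)
print(len(buckets["abstract"]), len(buckets["concrete"]))
```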