---

license: mit
tags:
- text-to-video
- prompt-engineering
- video-generation
- llm
- rag
- research
datasets:
- junchenfu/llmpopcorn_prompts
pipeline_tag: text-generation
---


# LLMPopcorn Usage Instructions

Welcome to LLMPopcorn! This guide will help you generate video titles and prompts, as well as create AI-generated videos based on those prompts.

## Prerequisites

### Install Required Python Packages

Before running the scripts, ensure that you have installed the necessary Python packages. You can do this by executing the following command:

```bash
pip install torch transformers diffusers tqdm numpy pandas sentence-transformers faiss-cpu openai huggingface_hub safetensors
```
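Some of these packages import under a different name than their pip package (for example, `faiss-cpu` imports as `faiss`). An optional sanity check, sketched below, verifies that the installation succeeded; the package list mirrors the command above:

```python
import importlib

def check_packages(packages):
    """Return {pip_name: True/False} depending on whether each module imports."""
    results = {}
    for pip_name, module in packages.items():
        try:
            importlib.import_module(module)
            results[pip_name] = True
        except ImportError:
            results[pip_name] = False
    return results

# pip names mapped to import names where they differ.
required = {
    "torch": "torch",
    "transformers": "transformers",
    "diffusers": "diffusers",
    "sentence-transformers": "sentence_transformers",
    "faiss-cpu": "faiss",
    "huggingface_hub": "huggingface_hub",
}

for name, ok in check_packages(required).items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```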

### Download the Dataset

Download the Microlens dataset and place it in the `Microlens` folder; `PE.py` reads from this folder.

## Step 1: Generate Video Titles and Prompts

To generate video titles and prompts, run the `LLMPopcorn.py` script:
```bash
python LLMPopcorn.py
```

To enhance LLMPopcorn's output, run the `PE.py` script:
```bash
python PE.py
```

## Step 2: Generate AI Videos

To create AI-generated videos, execute the `generating_images_videos_three.py` script:
```bash
python generating_images_videos_three.py
```

## Step 3: Clone the Evaluation Code

Clone the MMRA repository and follow its instructions to evaluate the generated videos.

## Tutorial: Using the Prompts Dataset

You can easily download and use the structured prompts directly from Hugging Face:

### 1. Install `datasets`
```bash
pip install datasets
```

### 2. Load the Dataset in Python
```python
from datasets import load_dataset

# Load the LLMPopcorn prompts
dataset = load_dataset("junchenfu/llmpopcorn_prompts")

# Access the data (abstract or concrete)
for item in dataset["train"]:
    print(f"Type: {item['type']}, Prompt: {item['prompt']}")
```

This dataset contains both abstract and concrete prompts, which you can use as input for the video generation scripts in Step 2.
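If you want to feed only one prompt type into Step 2, a minimal sketch of grouping records by their `type` field follows; the sample values are illustrative, not taken from the dataset:

```python
from collections import defaultdict

def split_by_type(records):
    """Group prompt records by their 'type' field (e.g. 'abstract' vs 'concrete')."""
    groups = defaultdict(list)
    for item in records:
        groups[item["type"]].append(item["prompt"])
    return dict(groups)

# In-memory sample mirroring the dataset schema (illustrative values only);
# in practice, pass load_dataset("junchenfu/llmpopcorn_prompts")["train"] instead.
sample = [
    {"type": "abstract", "prompt": "A dreamlike swirl of shifting colors"},
    {"type": "concrete", "prompt": "A cat chasing a red ball in a park"},
]
groups = split_by_type(sample)
print(groups["concrete"])  # prompts ready to pass to the video-generation script
```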