Upload 33 files
Browse files- FIX_DEPENDENCIES.md +80 -0
- FREE_ALTERNATIVES.md +199 -0
- GITHUB_PUSH_INSTRUCTIONS.md +163 -0
- LONGER_VIDEOS.md +236 -0
- ONLINE_VS_LOCAL.md +100 -0
- PROJECT_SUMMARY.md +195 -0
- QUICKSTART_LOCAL.md +57 -0
- README.md +242 -0
- README_ENHANCED.md +124 -0
- README_GITHUB.md +294 -0
- README_LOCAL.md +221 -0
- REPLICATE_SETUP.md +211 -0
- SETUP.md +130 -0
- SOLUTION_GUIDE.md +171 -0
- VERCEL_DEPLOY_GUIDE.md +286 -0
- VideoAI_Free_Colab.ipynb +340 -0
- app.log +326 -0
- app_local.log +30 -0
- backend.py +149 -0
- backend_enhanced.py +368 -0
- backend_local.py +239 -0
- backend_replicate.py +136 -0
- backend_simple.py +108 -0
- index.html +378 -0
- index_demo.html +453 -0
- index_enhanced.html +861 -0
- index_local.html +567 -0
- models_config.py +178 -0
- package.json +10 -0
- requirements.txt +6 -0
- requirements_local.txt +11 -0
- start_local.bat +49 -0
- start_local.sh +48 -0
FIX_DEPENDENCIES.md
ADDED
|
@@ -0,0 +1,80 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π§ Fix Dependency Issues
|
| 2 |
+
|
| 3 |
+
If you're seeing the transformers error, follow these steps:
|
| 4 |
+
|
| 5 |
+
## Quick Fix
|
| 6 |
+
|
| 7 |
+
Run these commands in order:
|
| 8 |
+
|
| 9 |
+
```bash
|
| 10 |
+
# 1. Uninstall conflicting packages
|
| 11 |
+
pip uninstall -y transformers diffusers
|
| 12 |
+
|
| 13 |
+
# 2. Install compatible versions
|
| 14 |
+
pip install transformers==4.44.2
|
| 15 |
+
pip install diffusers==0.30.3
|
| 16 |
+
pip install sentencepiece
|
| 17 |
+
pip install protobuf
|
| 18 |
+
|
| 19 |
+
# 3. Restart the backend
|
| 20 |
+
python backend_local.py
|
| 21 |
+
```
|
| 22 |
+
|
| 23 |
+
## Full Clean Install
|
| 24 |
+
|
| 25 |
+
If the quick fix doesn't work, do a complete reinstall:
|
| 26 |
+
|
| 27 |
+
```bash
|
| 28 |
+
# 1. Create a fresh virtual environment
|
| 29 |
+
python3 -m venv venv_clean
|
| 30 |
+
source venv_clean/bin/activate # On Windows: venv_clean\Scripts\activate
|
| 31 |
+
|
| 32 |
+
# 2. Install PyTorch first (GPU version)
|
| 33 |
+
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
|
| 34 |
+
|
| 35 |
+
# Or for CPU only:
|
| 36 |
+
# pip install torch torchvision torchaudio
|
| 37 |
+
|
| 38 |
+
# 3. Install all requirements
|
| 39 |
+
pip install -r requirements_local.txt
|
| 40 |
+
|
| 41 |
+
# 4. Run the backend
|
| 42 |
+
python backend_local.py
|
| 43 |
+
```
|
| 44 |
+
|
| 45 |
+
## What Was the Problem?
|
| 46 |
+
|
| 47 |
+
The error occurred because:
|
| 48 |
+
- **Old transformers version** didn't have the correct T5 tokenizer loading methods
|
| 49 |
+
- **Missing sentencepiece** library (required for T5 tokenizer)
|
| 50 |
+
- **Incompatible diffusers version** with the transformers library
|
| 51 |
+
|
| 52 |
+
## Verified Working Versions
|
| 53 |
+
|
| 54 |
+
These versions are tested and working:
|
| 55 |
+
- `transformers>=4.44.0`
|
| 56 |
+
- `diffusers>=0.30.0`
|
| 57 |
+
- `sentencepiece>=0.1.99`
|
| 58 |
+
- `protobuf>=3.20.0`
|
| 59 |
+
|
| 60 |
+
## Still Having Issues?
|
| 61 |
+
|
| 62 |
+
Try this diagnostic:
|
| 63 |
+
|
| 64 |
+
```python
|
| 65 |
+
# Test if transformers is working
|
| 66 |
+
python -c "from transformers import T5Tokenizer; print('β
Transformers OK')"
|
| 67 |
+
|
| 68 |
+
# Test if diffusers is working
|
| 69 |
+
python -c "from diffusers import CogVideoXPipeline; print('β
Diffusers OK')"
|
| 70 |
+
```
|
| 71 |
+
|
| 72 |
+
If either fails, reinstall that specific package:
|
| 73 |
+
```bash
|
| 74 |
+
pip install --upgrade --force-reinstall transformers
|
| 75 |
+
pip install --upgrade --force-reinstall diffusers
|
| 76 |
+
```
|
| 77 |
+
|
| 78 |
+
---
|
| 79 |
+
|
| 80 |
+
**After fixing, restart the backend and it should work! π**
|
FREE_ALTERNATIVES.md
ADDED
|
@@ -0,0 +1,199 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π 100% Free Video Generation Alternatives
|
| 2 |
+
|
| 3 |
+
## Best Free Options (No Cost!)
|
| 4 |
+
|
| 5 |
+
### β
Option 1: Google Colab (Recommended - Completely Free!)
|
| 6 |
+
|
| 7 |
+
**Pros:**
|
| 8 |
+
- β
100% Free
|
| 9 |
+
- β
Free GPU access
|
| 10 |
+
- β
No credit card needed
|
| 11 |
+
- β
Run powerful models
|
| 12 |
+
|
| 13 |
+
**How to Use:**
|
| 14 |
+
|
| 15 |
+
1. **Open this Colab notebook**:
|
| 16 |
+
- CogVideoX: https://colab.research.google.com/github/camenduru/CogVideoX-colab
|
| 17 |
+
- Zeroscope: https://colab.research.google.com/github/camenduru/text-to-video-synthesis-colab
|
| 18 |
+
|
| 19 |
+
2. **Click "Run All"** (play button)
|
| 20 |
+
|
| 21 |
+
3. **Wait for setup** (2-3 minutes)
|
| 22 |
+
|
| 23 |
+
4. **Enter your prompt** in the notebook
|
| 24 |
+
|
| 25 |
+
5. **Generate video** - runs on Google's free GPU!
|
| 26 |
+
|
| 27 |
+
6. **Download the video**
|
| 28 |
+
|
| 29 |
+
**Models Available on Colab:**
|
| 30 |
+
- CogVideoX-5B
|
| 31 |
+
- CogVideoX-2B
|
| 32 |
+
- Zeroscope V2 XL
|
| 33 |
+
- AnimateDiff
|
| 34 |
+
- And more!
|
| 35 |
+
|
| 36 |
+
---
|
| 37 |
+
|
| 38 |
+
### β
Option 2: Hugging Face Inference API (Free Tier)
|
| 39 |
+
|
| 40 |
+
**Pros:**
|
| 41 |
+
- β
Free tier available
|
| 42 |
+
- β
No credit card for basic use
|
| 43 |
+
- β
Official API
|
| 44 |
+
- β
Reliable
|
| 45 |
+
|
| 46 |
+
**Setup:**
|
| 47 |
+
|
| 48 |
+
1. **Get Free HF Token:**
|
| 49 |
+
- Go to https://huggingface.co/settings/tokens
|
| 50 |
+
- Create a free account
|
| 51 |
+
- Generate a token (free tier)
|
| 52 |
+
|
| 53 |
+
2. **Use Inference API** (free quota):
|
| 54 |
+
```python
|
| 55 |
+
from huggingface_hub import InferenceClient
|
| 56 |
+
|
| 57 |
+
client = InferenceClient(token="your_free_token")
|
| 58 |
+
|
| 59 |
+
video = client.text_to_video(
|
| 60 |
+
"A dog running in a park",
|
| 61 |
+
model="THUDM/CogVideoX-2b"
|
| 62 |
+
)
|
| 63 |
+
```
|
| 64 |
+
|
| 65 |
+
**Free Quota:**
|
| 66 |
+
- Limited requests per month
|
| 67 |
+
- Slower than paid
|
| 68 |
+
- But completely free!
|
| 69 |
+
|
| 70 |
+
---
|
| 71 |
+
|
| 72 |
+
### β
Option 3: Local Generation (Your Own Computer)
|
| 73 |
+
|
| 74 |
+
**If you have a GPU (NVIDIA):**
|
| 75 |
+
|
| 76 |
+
1. **Install dependencies:**
|
| 77 |
+
```bash
|
| 78 |
+
pip install diffusers transformers accelerate
|
| 79 |
+
```
|
| 80 |
+
|
| 81 |
+
2. **Run locally:**
|
| 82 |
+
```python
|
| 83 |
+
from diffusers import CogVideoXPipeline
|
| 84 |
+
import torch
|
| 85 |
+
|
| 86 |
+
pipe = CogVideoXPipeline.from_pretrained(
|
| 87 |
+
"THUDM/CogVideoX-2b",
|
| 88 |
+
torch_dtype=torch.float16
|
| 89 |
+
)
|
| 90 |
+
pipe.to("cuda")
|
| 91 |
+
|
| 92 |
+
video = pipe("A dog running in a park").frames[0]
|
| 93 |
+
```
|
| 94 |
+
|
| 95 |
+
**Pros:**
|
| 96 |
+
- β
Completely free
|
| 97 |
+
- β
Unlimited generations
|
| 98 |
+
- β
No API limits
|
| 99 |
+
|
| 100 |
+
**Cons:**
|
| 101 |
+
- β Requires good GPU (8GB+ VRAM)
|
| 102 |
+
- β Slower on CPU
|
| 103 |
+
|
| 104 |
+
---
|
| 105 |
+
|
| 106 |
+
### β
Option 4: Free Hugging Face Spaces (When Available)
|
| 107 |
+
|
| 108 |
+
**These spaces are free but may be slow/sleeping:**
|
| 109 |
+
|
| 110 |
+
1. **CogVideoX-2B**: https://huggingface.co/spaces/THUDM/CogVideoX-2B
|
| 111 |
+
2. **CogVideoX-5B**: https://huggingface.co/spaces/zai-org/CogVideoX-5B-Space
|
| 112 |
+
3. **Stable Video Diffusion**: https://huggingface.co/spaces/multimodalart/stable-video-diffusion
|
| 113 |
+
|
| 114 |
+
**How to use:**
|
| 115 |
+
- Visit the space
|
| 116 |
+
- Enter prompt
|
| 117 |
+
- Click generate
|
| 118 |
+
- Wait (may take time if space is sleeping)
|
| 119 |
+
- Download video
|
| 120 |
+
|
| 121 |
+
---
|
| 122 |
+
|
| 123 |
+
## π― Recommended Free Workflow
|
| 124 |
+
|
| 125 |
+
### For Best Results (100% Free):
|
| 126 |
+
|
| 127 |
+
1. **Use Google Colab** for high-quality generations
|
| 128 |
+
- Best quality
|
| 129 |
+
- Free GPU
|
| 130 |
+
- No limits
|
| 131 |
+
|
| 132 |
+
2. **Use HF Spaces** for quick tests
|
| 133 |
+
- Instant (when awake)
|
| 134 |
+
- No setup needed
|
| 135 |
+
- May be slow
|
| 136 |
+
|
| 137 |
+
3. **Use Local** if you have GPU
|
| 138 |
+
- Unlimited
|
| 139 |
+
- Private
|
| 140 |
+
- Fast
|
| 141 |
+
|
| 142 |
+
---
|
| 143 |
+
|
| 144 |
+
## π Setting Up Google Colab Backend
|
| 145 |
+
|
| 146 |
+
I can create a backend that connects to your own Google Colab instance!
|
| 147 |
+
|
| 148 |
+
**How it works:**
|
| 149 |
+
1. Run a Colab notebook (free GPU)
|
| 150 |
+
2. Expose it via ngrok (free)
|
| 151 |
+
3. Connect your frontend to it
|
| 152 |
+
4. Generate unlimited videos!
|
| 153 |
+
|
| 154 |
+
**Want me to set this up?**
|
| 155 |
+
|
| 156 |
+
---
|
| 157 |
+
|
| 158 |
+
## π Comparison
|
| 159 |
+
|
| 160 |
+
| Method | Cost | Speed | Quality | Limits |
|
| 161 |
+
|--------|------|-------|---------|--------|
|
| 162 |
+
| Google Colab | FREE | ββββ | βββββ | Session time |
|
| 163 |
+
| HF Spaces | FREE | ββ | ββββ | May sleep |
|
| 164 |
+
| HF Inference API | FREE | βββ | ββββ | Monthly quota |
|
| 165 |
+
| Local GPU | FREE | βββββ | βββββ | None |
|
| 166 |
+
| Replicate | $0.10/video | βββββ | βββββ | Pay per use |
|
| 167 |
+
|
| 168 |
+
---
|
| 169 |
+
|
| 170 |
+
## π¬ Quick Start: Google Colab (5 minutes)
|
| 171 |
+
|
| 172 |
+
1. **Open**: https://colab.research.google.com/github/camenduru/CogVideoX-colab
|
| 173 |
+
|
| 174 |
+
2. **Click**: Runtime β Run all
|
| 175 |
+
|
| 176 |
+
3. **Wait**: 2-3 minutes for setup
|
| 177 |
+
|
| 178 |
+
4. **Scroll down** to the generation cell
|
| 179 |
+
|
| 180 |
+
5. **Enter your prompt**
|
| 181 |
+
|
| 182 |
+
6. **Run the cell**
|
| 183 |
+
|
| 184 |
+
7. **Download your video!**
|
| 185 |
+
|
| 186 |
+
**That's it! Completely free, no credit card, unlimited use!** π
|
| 187 |
+
|
| 188 |
+
---
|
| 189 |
+
|
| 190 |
+
## π‘ Best Free Option for Your App
|
| 191 |
+
|
| 192 |
+
I recommend creating a **Google Colab + ngrok** backend:
|
| 193 |
+
|
| 194 |
+
1. Run model on Colab (free GPU)
|
| 195 |
+
2. Expose via ngrok (free tunnel)
|
| 196 |
+
3. Connect your frontend
|
| 197 |
+
4. Generate unlimited videos!
|
| 198 |
+
|
| 199 |
+
**Want me to create this setup for you?** It's 100% free and will work with your existing frontend!
|
GITHUB_PUSH_INSTRUCTIONS.md
ADDED
|
@@ -0,0 +1,163 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π€ GitHub Push Instructions
|
| 2 |
+
|
| 3 |
+
## β
What's Ready
|
| 4 |
+
|
| 5 |
+
Your project is now ready to push to GitHub with:
|
| 6 |
+
|
| 7 |
+
- β
Complete documentation (README_GITHUB.md)
|
| 8 |
+
- β
Setup guide (SETUP.md)
|
| 9 |
+
- β
Multiple backend options (Replicate, Local, HF Spaces)
|
| 10 |
+
- β
Enhanced UI with camera controls
|
| 11 |
+
- β
Proper .gitignore (excludes .env, logs, videos)
|
| 12 |
+
- β
Example environment file (.env.example)
|
| 13 |
+
- β
All dependencies listed (requirements.txt, requirements_local.txt)
|
| 14 |
+
|
| 15 |
+
## π Push to GitHub
|
| 16 |
+
|
| 17 |
+
### If you already have a GitHub repository:
|
| 18 |
+
|
| 19 |
+
```bash
|
| 20 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 21 |
+
|
| 22 |
+
# Push to your existing repository
|
| 23 |
+
git push origin main
|
| 24 |
+
```
|
| 25 |
+
|
| 26 |
+
### If you need to create a new GitHub repository:
|
| 27 |
+
|
| 28 |
+
1. **Create Repository on GitHub**
|
| 29 |
+
- Go to https://github.com/new
|
| 30 |
+
- Repository name: `ai-video-generator` (or your choice)
|
| 31 |
+
- Description: "AI Video Generator with multiple backend options - Hailuo Clone"
|
| 32 |
+
- Make it Public or Private
|
| 33 |
+
- **Don't** initialize with README (you already have one)
|
| 34 |
+
- Click "Create repository"
|
| 35 |
+
|
| 36 |
+
2. **Connect and Push**
|
| 37 |
+
```bash
|
| 38 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 39 |
+
|
| 40 |
+
# If you need to set the remote (only if not already set)
|
| 41 |
+
git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPO_NAME.git
|
| 42 |
+
|
| 43 |
+
# Or if remote exists, update it
|
| 44 |
+
git remote set-url origin https://github.com/YOUR_USERNAME/YOUR_REPO_NAME.git
|
| 45 |
+
|
| 46 |
+
# Push to GitHub
|
| 47 |
+
git push -u origin main
|
| 48 |
+
```
|
| 49 |
+
|
| 50 |
+
## π Before Pushing - Final Checklist
|
| 51 |
+
|
| 52 |
+
- [ ] Remove any API tokens from .env (already in .gitignore)
|
| 53 |
+
- [ ] Verify .env.example doesn't contain real tokens
|
| 54 |
+
- [ ] Check that generated videos aren't included (in .gitignore)
|
| 55 |
+
- [ ] Update README_GITHUB.md with your actual repo URL
|
| 56 |
+
- [ ] Add your contact email in README_GITHUB.md
|
| 57 |
+
|
| 58 |
+
## π Security Check
|
| 59 |
+
|
| 60 |
+
Run this to make sure no secrets are committed:
|
| 61 |
+
|
| 62 |
+
```bash
|
| 63 |
+
# Check what will be pushed
|
| 64 |
+
git log --oneline -5
|
| 65 |
+
|
| 66 |
+
# Verify .env is not tracked
|
| 67 |
+
git ls-files | grep .env
|
| 68 |
+
|
| 69 |
+
# Should only show .env.example, NOT .env
|
| 70 |
+
```
|
| 71 |
+
|
| 72 |
+
## π Recommended GitHub Repository Settings
|
| 73 |
+
|
| 74 |
+
After pushing:
|
| 75 |
+
|
| 76 |
+
1. **Add Topics** (in GitHub repo settings):
|
| 77 |
+
- `ai`
|
| 78 |
+
- `video-generation`
|
| 79 |
+
- `text-to-video`
|
| 80 |
+
- `hailuo`
|
| 81 |
+
- `cogvideox`
|
| 82 |
+
- `python`
|
| 83 |
+
- `flask`
|
| 84 |
+
|
| 85 |
+
2. **Add Description**:
|
| 86 |
+
"AI Video Generator with multiple backend options - Generate videos from text using CogVideoX, Hailuo, and more"
|
| 87 |
+
|
| 88 |
+
3. **Enable Issues** (for bug reports and feature requests)
|
| 89 |
+
|
| 90 |
+
4. **Add License** (MIT recommended)
|
| 91 |
+
|
| 92 |
+
5. **Create Release** (optional):
|
| 93 |
+
- Tag: `v1.0.0`
|
| 94 |
+
- Title: "Initial Release"
|
| 95 |
+
- Description: "First stable release with Replicate API, Local, and HF Spaces support"
|
| 96 |
+
|
| 97 |
+
## π― After Pushing
|
| 98 |
+
|
| 99 |
+
1. **Update README**
|
| 100 |
+
- Replace `<your-repo-url>` with actual GitHub URL
|
| 101 |
+
- Replace `your-email@example.com` with your email
|
| 102 |
+
|
| 103 |
+
2. **Test Clone**
|
| 104 |
+
```bash
|
| 105 |
+
# Test that others can clone and use it
|
| 106 |
+
cd /tmp
|
| 107 |
+
git clone https://github.com/YOUR_USERNAME/YOUR_REPO_NAME.git
|
| 108 |
+
cd YOUR_REPO_NAME
|
| 109 |
+
# Follow SETUP.md instructions
|
| 110 |
+
```
|
| 111 |
+
|
| 112 |
+
3. **Share**
|
| 113 |
+
- Share on Reddit: r/MachineLearning, r/StableDiffusion
|
| 114 |
+
- Share on Twitter/X with #AIVideo #TextToVideo
|
| 115 |
+
- Share on LinkedIn
|
| 116 |
+
|
| 117 |
+
## π Repository Structure (What's Being Pushed)
|
| 118 |
+
|
| 119 |
+
```
|
| 120 |
+
hailuo-clone/
|
| 121 |
+
βββ π README_GITHUB.md # Main documentation
|
| 122 |
+
βββ π SETUP.md # Installation guide
|
| 123 |
+
βββ π SOLUTION_GUIDE.md # Troubleshooting
|
| 124 |
+
βββ π requirements.txt # Python dependencies
|
| 125 |
+
βββ π requirements_local.txt # Local generation deps
|
| 126 |
+
βββ π .env.example # Environment template
|
| 127 |
+
βββ π .gitignore # Git ignore rules
|
| 128 |
+
β
|
| 129 |
+
βββ π Backend Files
|
| 130 |
+
β βββ backend_replicate.py # Replicate API (recommended)
|
| 131 |
+
β βββ backend_local.py # Local generation
|
| 132 |
+
β βββ backend_enhanced.py # HF Spaces
|
| 133 |
+
β βββ backend_simple.py # Demo mode
|
| 134 |
+
β βββ models_config.py # Model configurations
|
| 135 |
+
β
|
| 136 |
+
βββ π Frontend Files
|
| 137 |
+
β βββ index.html # Simple UI
|
| 138 |
+
β βββ index_enhanced.html # Advanced UI
|
| 139 |
+
β βββ index_local.html # Local UI
|
| 140 |
+
β
|
| 141 |
+
βββ π Documentation
|
| 142 |
+
βββ README_LOCAL.md
|
| 143 |
+
βββ REPLICATE_SETUP.md
|
| 144 |
+
βββ QUICKSTART_LOCAL.md
|
| 145 |
+
```
|
| 146 |
+
|
| 147 |
+
## β¨ What's NOT Being Pushed (Protected by .gitignore)
|
| 148 |
+
|
| 149 |
+
- β .env (your API tokens)
|
| 150 |
+
- β *.log (log files)
|
| 151 |
+
- β generated_videos/ (output videos)
|
| 152 |
+
- β __pycache__/ (Python cache)
|
| 153 |
+
- β .venv/ (virtual environment)
|
| 154 |
+
|
| 155 |
+
## π Ready to Push!
|
| 156 |
+
|
| 157 |
+
Your project is production-ready. Just run:
|
| 158 |
+
|
| 159 |
+
```bash
|
| 160 |
+
git push origin main
|
| 161 |
+
```
|
| 162 |
+
|
| 163 |
+
Then share your amazing AI video generator with the world! π
|
LONGER_VIDEOS.md
ADDED
|
@@ -0,0 +1,236 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π¬ Longer Video Generation Models
|
| 2 |
+
|
| 3 |
+
## Available Models by Duration
|
| 4 |
+
|
| 5 |
+
### π₯ Longest Videos (10+ seconds)
|
| 6 |
+
|
| 7 |
+
#### 1. **Runway Gen-3** - Up to 10 seconds β
|
| 8 |
+
- **Model ID:** `runway`
|
| 9 |
+
- **Replicate:** `stability-ai/stable-video-diffusion-img2vid-xt`
|
| 10 |
+
- **Duration:** Up to 10 seconds
|
| 11 |
+
- **Quality:** Professional, cinematic
|
| 12 |
+
- **Cost:** ~$0.15-0.25 per video
|
| 13 |
+
- **Best for:** Professional content, commercials, high-quality shorts
|
| 14 |
+
|
| 15 |
+
#### 2. **Kling AI** - Up to 10 seconds
|
| 16 |
+
- **Not yet on Replicate** (Chinese model)
|
| 17 |
+
- **Duration:** Up to 10 seconds
|
| 18 |
+
- **Quality:** Very high, comparable to Hailuo
|
| 19 |
+
- **Best for:** Realistic scenes, character animations
|
| 20 |
+
|
| 21 |
+
---
|
| 22 |
+
|
| 23 |
+
### π₯ Medium Length (5-7 seconds)
|
| 24 |
+
|
| 25 |
+
#### 3. **Hailuo Video-01** - 6 seconds
|
| 26 |
+
- **Model ID:** `hailuo`
|
| 27 |
+
- **Replicate:** `minimax/video-01`
|
| 28 |
+
- **Duration:** 6 seconds
|
| 29 |
+
- **Quality:** Excellent
|
| 30 |
+
- **Cost:** ~$0.05-0.10 per video
|
| 31 |
+
- **Best for:** General purpose, high quality
|
| 32 |
+
|
| 33 |
+
#### 4. **CogVideoX-5B** - 6 seconds
|
| 34 |
+
- **Model ID:** `cogvideox`
|
| 35 |
+
- **Replicate:** `lucataco/cogvideox-5b`
|
| 36 |
+
- **Duration:** 6 seconds (49 frames at 8fps)
|
| 37 |
+
- **Quality:** High
|
| 38 |
+
- **Cost:** ~$0.05 per video
|
| 39 |
+
- **Best for:** Good balance of quality and cost
|
| 40 |
+
|
| 41 |
+
#### 5. **HunyuanVideo** - 5+ seconds
|
| 42 |
+
- **Model ID:** `hunyuan`
|
| 43 |
+
- **Replicate:** `tencent/hunyuan-video`
|
| 44 |
+
- **Duration:** 5+ seconds
|
| 45 |
+
- **Quality:** State-of-the-art
|
| 46 |
+
- **Cost:** ~$0.08-0.12 per video
|
| 47 |
+
- **Best for:** High-quality, smooth motion
|
| 48 |
+
|
| 49 |
+
#### 6. **Luma Dream Machine** - 5 seconds
|
| 50 |
+
- **Model ID:** `luma`
|
| 51 |
+
- **Replicate:** `fofr/dream-machine`
|
| 52 |
+
- **Duration:** 5 seconds
|
| 53 |
+
- **Quality:** Cinematic
|
| 54 |
+
- **Cost:** ~$0.10 per video
|
| 55 |
+
- **Best for:** Cinematic shots, smooth camera movements
|
| 56 |
+
|
| 57 |
+
---
|
| 58 |
+
|
| 59 |
+
### π₯ Shorter Videos (3-4 seconds)
|
| 60 |
+
|
| 61 |
+
#### 7. **Pika Labs** - 3 seconds (extendable)
|
| 62 |
+
- Can extend videos by generating continuations
|
| 63 |
+
- Good for creating longer sequences
|
| 64 |
+
|
| 65 |
+
#### 8. **AnimateDiff** - 2-4 seconds
|
| 66 |
+
- Fast generation
|
| 67 |
+
- Lower cost
|
| 68 |
+
- Good for quick tests
|
| 69 |
+
|
| 70 |
+
---
|
| 71 |
+
|
| 72 |
+
## π Comparison Table
|
| 73 |
+
|
| 74 |
+
| Model | Duration | Quality | Cost/Video | Speed | Best For |
|
| 75 |
+
|-------|----------|---------|------------|-------|----------|
|
| 76 |
+
| **Runway Gen-3** | 10s | βββββ | $0.15-0.25 | Slow | Professional |
|
| 77 |
+
| **Hailuo** | 6s | βββββ | $0.05-0.10 | Medium | General |
|
| 78 |
+
| **CogVideoX-5B** | 6s | ββββ | $0.05 | Medium | Balanced |
|
| 79 |
+
| **HunyuanVideo** | 5s+ | βββββ | $0.08-0.12 | Medium | High quality |
|
| 80 |
+
| **Luma** | 5s | ββββ | $0.10 | Medium | Cinematic |
|
| 81 |
+
| **Pika** | 3s | βββ | $0.03 | Fast | Quick tests |
|
| 82 |
+
|
| 83 |
+
---
|
| 84 |
+
|
| 85 |
+
## π― How to Use Longer Video Models
|
| 86 |
+
|
| 87 |
+
### In Your App
|
| 88 |
+
|
| 89 |
+
The models are now available in your dropdown:
|
| 90 |
+
|
| 91 |
+
1. **Via Web UI:**
|
| 92 |
+
- Select "Runway Gen-3 - 10s β" for longest videos
|
| 93 |
+
- Select "Hailuo Video-01 - 6s" for good balance
|
| 94 |
+
- Enter your prompt and generate!
|
| 95 |
+
|
| 96 |
+
2. **Via API:**
|
| 97 |
+
```bash
|
| 98 |
+
curl -X POST https://your-app.vercel.app/api/generate-video \
|
| 99 |
+
-H "Content-Type: application/json" \
|
| 100 |
+
-d '{
|
| 101 |
+
"prompt": "A golden retriever running through flowers",
|
| 102 |
+
"model": "runway"
|
| 103 |
+
}'
|
| 104 |
+
```
|
| 105 |
+
|
| 106 |
+
3. **Via Python Backend:**
|
| 107 |
+
```python
|
| 108 |
+
import requests
|
| 109 |
+
|
| 110 |
+
response = requests.post('http://localhost:5000/generate-video', json={
|
| 111 |
+
'prompt': 'Ocean waves at sunset',
|
| 112 |
+
'model': 'runway' # For 10s videos
|
| 113 |
+
})
|
| 114 |
+
|
| 115 |
+
video_url = response.json()['video_url']
|
| 116 |
+
```
|
| 117 |
+
|
| 118 |
+
---
|
| 119 |
+
|
| 120 |
+
## π‘ Tips for Longer Videos
|
| 121 |
+
|
| 122 |
+
### 1. **Be More Specific**
|
| 123 |
+
Longer videos need more detailed prompts:
|
| 124 |
+
```
|
| 125 |
+
β Bad: "A dog running"
|
| 126 |
+
β
Good: "A golden retriever running through a field of sunflowers,
|
| 127 |
+
camera tracking from the side, golden hour lighting,
|
| 128 |
+
slow motion, cinematic"
|
| 129 |
+
```
|
| 130 |
+
|
| 131 |
+
### 2. **Use Camera Movements**
|
| 132 |
+
Longer videos benefit from camera direction:
|
| 133 |
+
- "Camera slowly zooms in"
|
| 134 |
+
- "Tracking shot following the subject"
|
| 135 |
+
- "Pan from left to right"
|
| 136 |
+
|
| 137 |
+
### 3. **Specify Pacing**
|
| 138 |
+
- "Slow motion"
|
| 139 |
+
- "Time-lapse"
|
| 140 |
+
- "Smooth, continuous motion"
|
| 141 |
+
|
| 142 |
+
### 4. **Consider Cost**
|
| 143 |
+
- Runway (10s): ~$0.20 per video
|
| 144 |
+
- Hailuo (6s): ~$0.08 per video
|
| 145 |
+
- Test with shorter/cheaper models first
|
| 146 |
+
|
| 147 |
+
---
|
| 148 |
+
|
| 149 |
+
## π Future: Even Longer Videos
|
| 150 |
+
|
| 151 |
+
### Coming Soon:
|
| 152 |
+
- **Sora (OpenAI)** - Up to 60 seconds (not yet public)
|
| 153 |
+
- **Kling 1.5** - Extended duration modes
|
| 154 |
+
- **Runway Gen-4** - Longer outputs expected
|
| 155 |
+
|
| 156 |
+
### Workarounds for Now:
|
| 157 |
+
1. **Video Extension:** Generate multiple clips and stitch
|
| 158 |
+
2. **Loop Seamlessly:** Create loopable content
|
| 159 |
+
3. **Multi-shot:** Generate scenes separately, edit together
|
| 160 |
+
|
| 161 |
+
---
|
| 162 |
+
|
| 163 |
+
## π§ Technical Details
|
| 164 |
+
|
| 165 |
+
### Why Are Videos Short?
|
| 166 |
+
|
| 167 |
+
1. **Computational Cost:** Each frame requires significant GPU time
|
| 168 |
+
2. **Memory Requirements:** Longer videos = more VRAM needed
|
| 169 |
+
3. **Quality Trade-off:** Maintaining quality over time is hard
|
| 170 |
+
4. **Training Data:** Most training videos are short clips
|
| 171 |
+
|
| 172 |
+
### Frame Rates:
|
| 173 |
+
- **CogVideoX:** 49 frames at 8 fps = 6.1 seconds
|
| 174 |
+
- **Hailuo:** 48 frames at 8 fps = 6 seconds
|
| 175 |
+
- **Runway:** 80 frames at 8 fps = 10 seconds
|
| 176 |
+
|
| 177 |
+
---
|
| 178 |
+
|
| 179 |
+
## π° Cost Optimization
|
| 180 |
+
|
| 181 |
+
### For Budget-Conscious Users:
|
| 182 |
+
|
| 183 |
+
1. **Start with CogVideoX** ($0.05) or **Hailuo** ($0.08)
|
| 184 |
+
2. **Use Runway** ($0.20) only for final/important videos
|
| 185 |
+
3. **Test prompts** with shorter models first
|
| 186 |
+
4. **Batch generate** to save on setup costs
|
| 187 |
+
|
| 188 |
+
### For Quality-First:
|
| 189 |
+
|
| 190 |
+
1. **Use Runway** for professional work
|
| 191 |
+
2. **HunyuanVideo** for high-quality content
|
| 192 |
+
3. **Hailuo** for general high-quality needs
|
| 193 |
+
|
| 194 |
+
---
|
| 195 |
+
|
| 196 |
+
## π Example Prompts for Longer Videos
|
| 197 |
+
|
| 198 |
+
### 10-Second Videos (Runway):
|
| 199 |
+
```
|
| 200 |
+
"A surfer riding a massive wave, camera following from behind,
|
| 201 |
+
slow motion, golden hour, spray and mist, cinematic 4k"
|
| 202 |
+
|
| 203 |
+
"City street time-lapse from day to night, cars and people
|
| 204 |
+
moving fast, lights turning on, smooth transition"
|
| 205 |
+
|
| 206 |
+
"A bird taking flight from a branch, slow motion, camera
|
| 207 |
+
tracking upward following the bird into the sky"
|
| 208 |
+
```
|
| 209 |
+
|
| 210 |
+
### 6-Second Videos (Hailuo/CogVideoX):
|
| 211 |
+
```
|
| 212 |
+
"A cat jumping onto a windowsill, looking outside at falling
|
| 213 |
+
snow, soft lighting, cozy atmosphere"
|
| 214 |
+
|
| 215 |
+
"Fireworks exploding over a city skyline at night, colorful
|
| 216 |
+
bursts, reflection on water below"
|
| 217 |
+
|
| 218 |
+
"A coffee cup being filled, steam rising, close-up shot,
|
| 219 |
+
morning light through window"
|
| 220 |
+
```
|
| 221 |
+
|
| 222 |
+
---
|
| 223 |
+
|
| 224 |
+
## β
Updated in Your Project
|
| 225 |
+
|
| 226 |
+
I've updated:
|
| 227 |
+
- β
`api/generate-video.js` - Added all longer video models
|
| 228 |
+
- β
`api/models.js` - Listed models with durations
|
| 229 |
+
- β
`backend_replicate.py` - Added model support
|
| 230 |
+
- β
UI will now show duration in model names
|
| 231 |
+
|
| 232 |
+
**Commit and deploy to use the new models!**
|
| 233 |
+
|
| 234 |
+
---
|
| 235 |
+
|
| 236 |
+
**For the longest videos (10s), use `model: "runway"` in your requests! π¬β¨**
|
ONLINE_VS_LOCAL.md
ADDED
|
@@ -0,0 +1,100 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π Online Models vs Local Models
|
| 2 |
+
|
| 3 |
+
## β
Now Using: Online Models (RECOMMENDED)
|
| 4 |
+
|
| 5 |
+
You're now using **backend_enhanced.py** which connects to **free Hugging Face Spaces**!
|
| 6 |
+
|
| 7 |
+
### π Available Online Models:
|
| 8 |
+
|
| 9 |
+
1. **CogVideoX-5B** (Best Quality) β
|
| 10 |
+
- 6 seconds, 720p
|
| 11 |
+
- Generation time: 30-60 seconds
|
| 12 |
+
- Free on Hugging Face Zero GPU
|
| 13 |
+
|
| 14 |
+
2. **CogVideoX-2B** (Faster)
|
| 15 |
+
- 6 seconds, 720p
|
| 16 |
+
- Generation time: 20-40 seconds
|
| 17 |
+
- Good quality, faster
|
| 18 |
+
|
| 19 |
+
3. **HunyuanVideo** (Tencent - SOTA)
|
| 20 |
+
- State-of-the-art quality
|
| 21 |
+
- Longer videos possible
|
| 22 |
+
- Generation time: 60-90 seconds
|
| 23 |
+
|
| 24 |
+
4. **Demo Mode** (Instant Testing)
|
| 25 |
+
- Returns sample video immediately
|
| 26 |
+
- Perfect for testing the UI
|
| 27 |
+
|
| 28 |
+
### β¨ Advantages of Online Models:
|
| 29 |
+
|
| 30 |
+
β
**No downloads** - No 5GB model to download
|
| 31 |
+
β
**Fast** - 30-60 seconds per video (vs 5-10 minutes on CPU)
|
| 32 |
+
β
**No GPU needed** - Runs on Hugging Face's servers
|
| 33 |
+
β
**Better quality** - Access to larger, better models
|
| 34 |
+
β
**Multiple models** - Switch between different AI models
|
| 35 |
+
β
**Camera controls** - Hailuo-inspired camera movements
|
| 36 |
+
β
**Visual effects** - Cinematic, dramatic, slow-motion, etc.
|
| 37 |
+
|
| 38 |
+
### β οΈ Limitations:
|
| 39 |
+
|
| 40 |
+
- Requires internet connection
|
| 41 |
+
- May have queue times if many people are using it
|
| 42 |
+
- Some spaces may be sleeping (takes 30s to wake up)
|
| 43 |
+
|
| 44 |
+
## π₯οΈ Local Model (backend_local.py)
|
| 45 |
+
|
| 46 |
+
### When to Use Local:
|
| 47 |
+
|
| 48 |
+
- β
You have a powerful NVIDIA GPU (RTX 3060+)
|
| 49 |
+
- β
You want 100% privacy (no data sent to cloud)
|
| 50 |
+
- β
You don't have reliable internet
|
| 51 |
+
- β
You want to run custom models
|
| 52 |
+
|
| 53 |
+
### Disadvantages:
|
| 54 |
+
|
| 55 |
+
- β 5GB download required
|
| 56 |
+
- β Very slow on CPU (5-10 minutes per video)
|
| 57 |
+
- β Requires 16GB+ RAM
|
| 58 |
+
- β Only one model available
|
| 59 |
+
- β No advanced features
|
| 60 |
+
|
| 61 |
+
## π Comparison Table
|
| 62 |
+
|
| 63 |
+
| Feature | Online (Enhanced) | Local |
|
| 64 |
+
|---------|------------------|-------|
|
| 65 |
+
| **Setup Time** | Instant | 10-30 minutes |
|
| 66 |
+
| **Download Size** | 0 GB | 5 GB |
|
| 67 |
+
| **Generation Speed** | 30-60 sec | 5-10 min (CPU) |
|
| 68 |
+
| **Quality** | Excellent | Good |
|
| 69 |
+
| **Models Available** | 4+ models | 1 model |
|
| 70 |
+
| **Camera Controls** | β
Yes | β No |
|
| 71 |
+
| **Visual Effects** | β
Yes | β No |
|
| 72 |
+
| **GPU Required** | β No | β οΈ Recommended |
|
| 73 |
+
| **Internet Required** | β
Yes | β No |
|
| 74 |
+
| **Privacy** | Cloud-based | 100% local |
|
| 75 |
+
| **Cost** | Free | Free |
|
| 76 |
+
|
| 77 |
+
## π― Recommendation
|
| 78 |
+
|
| 79 |
+
**Use Online Models (backend_enhanced.py)** unless you specifically need:
|
| 80 |
+
- Complete privacy
|
| 81 |
+
- Offline generation
|
| 82 |
+
- Custom model training
|
| 83 |
+
|
| 84 |
+
## π How to Switch
|
| 85 |
+
|
| 86 |
+
### To Online (Current):
|
| 87 |
+
```bash
|
| 88 |
+
/Users/sravyalu/VideoAI/.venv/bin/python backend_enhanced.py
|
| 89 |
+
# Open: index_enhanced.html
|
| 90 |
+
```
|
| 91 |
+
|
| 92 |
+
### To Local:
|
| 93 |
+
```bash
|
| 94 |
+
/Users/sravyalu/VideoAI/.venv/bin/python backend_local.py
|
| 95 |
+
# Open: index_local.html
|
| 96 |
+
```
|
| 97 |
+
|
| 98 |
+
---
|
| 99 |
+
|
| 100 |
+
**You're all set with online models! Much faster and easier! π**
|
PROJECT_SUMMARY.md
ADDED
|
@@ -0,0 +1,195 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π Project Complete - AI Video Generator
|
| 2 |
+
|
| 3 |
+
## β
Successfully Pushed to GitHub!
|
| 4 |
+
|
| 5 |
+
**Repository:** https://github.com/LakshmiSravya123/VideoAI
|
| 6 |
+
|
| 7 |
+
---
|
| 8 |
+
|
| 9 |
+
## π― What You Have
|
| 10 |
+
|
| 11 |
+
### Working Features
|
| 12 |
+
- β
**3 Backend Options**
|
| 13 |
+
- Replicate API (recommended, most reliable)
|
| 14 |
+
- Local Generation (free, private, offline)
|
| 15 |
+
- HuggingFace Spaces (free but unreliable)
|
| 16 |
+
|
| 17 |
+
- β
**3 User Interfaces**
|
| 18 |
+
- `index.html` - Simple, works with all backends
|
| 19 |
+
- `index_enhanced.html` - Advanced with camera controls β
|
| 20 |
+
- `index_local.html` - Optimized for local generation
|
| 21 |
+
|
| 22 |
+
- β
**Advanced Features**
|
| 23 |
+
- Camera movements (zoom, pan, tilt, tracking)
|
| 24 |
+
- Visual effects (cinematic, dramatic, slow-motion)
|
| 25 |
+
- Video styles (realistic, anime, 3D render)
|
| 26 |
+
- Multiple AI models (CogVideoX, Hailuo, HunyuanVideo)
|
| 27 |
+
|
| 28 |
+
### Documentation
|
| 29 |
+
- β
README_GITHUB.md - Complete project documentation
|
| 30 |
+
- β
SETUP.md - Step-by-step installation guide
|
| 31 |
+
- β
SOLUTION_GUIDE.md - Troubleshooting guide
|
| 32 |
+
- β
README_LOCAL.md - Local generation guide
|
| 33 |
+
- β
GITHUB_PUSH_INSTRUCTIONS.md - Push guide
|
| 34 |
+
|
| 35 |
+
---
|
| 36 |
+
|
| 37 |
+
## π Current Working Setup
|
| 38 |
+
|
| 39 |
+
**Backend:** Replicate API (`backend_replicate.py`)
|
| 40 |
+
**Frontend:** `index.html` or `index_enhanced.html`
|
| 41 |
+
**Status:** β
Working perfectly!
|
| 42 |
+
|
| 43 |
+
### Quick Start Commands
|
| 44 |
+
|
| 45 |
+
```bash
|
| 46 |
+
# Start the backend
|
| 47 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 48 |
+
source /Users/sravyalu/VideoAI/.venv/bin/activate
|
| 49 |
+
python backend_replicate.py
|
| 50 |
+
|
| 51 |
+
# Open in browser
|
| 52 |
+
open index_enhanced.html
|
| 53 |
+
```
|
| 54 |
+
|
| 55 |
+
---
|
| 56 |
+
|
| 57 |
+
## π¬ Available Models
|
| 58 |
+
|
| 59 |
+
### Via Replicate API (Current)
|
| 60 |
+
1. **Hailuo Video-01** (MiniMax) - The real Hailuo model! π₯
|
| 61 |
+
- Cost: ~$0.05-0.10 per video
|
| 62 |
+
- Speed: 30-60 seconds
|
| 63 |
+
- Quality: Excellent
|
| 64 |
+
|
| 65 |
+
2. **CogVideoX-5B**
|
| 66 |
+
- Cost: ~$0.05 per video
|
| 67 |
+
- Speed: 30-60 seconds
|
| 68 |
+
- Quality: High
|
| 69 |
+
|
| 70 |
+
### Via Local Generation
|
| 71 |
+
1. **CogVideoX-2B**
|
| 72 |
+
- Cost: Free
|
| 73 |
+
- Speed: 30-120s (GPU) or 5-10min (CPU)
|
| 74 |
+
- Quality: Good
|
| 75 |
+
|
| 76 |
+
---
|
| 77 |
+
|
| 78 |
+
## π What Was Fixed
|
| 79 |
+
|
| 80 |
+
### Issues Resolved
|
| 81 |
+
1. β
"Model provider unreachable" - Switched to Replicate API
|
| 82 |
+
2. β
Slow local generation - Added Replicate as fast alternative
|
| 83 |
+
3. β
HuggingFace Spaces unreliable - Created fallback options
|
| 84 |
+
4. β
Missing dependencies - Created comprehensive requirements files
|
| 85 |
+
5. β
Transformers compatibility - Fixed version conflicts
|
| 86 |
+
6. β
Port conflicts - Added process management
|
| 87 |
+
|
| 88 |
+
### Files Created/Updated
|
| 89 |
+
- Created `backend_replicate.py` - Reliable Replicate API backend
|
| 90 |
+
- Created `backend_simple.py` - Demo mode for testing
|
| 91 |
+
- Updated `index_enhanced.html` - Advanced UI with all features
|
| 92 |
+
- Created `models_config.py` - Centralized model configuration
|
| 93 |
+
- Updated `.gitignore` - Proper exclusions for GitHub
|
| 94 |
+
- Created comprehensive documentation
|
| 95 |
+
|
| 96 |
+
---
|
| 97 |
+
|
| 98 |
+
## π― Next Steps
|
| 99 |
+
|
| 100 |
+
### Immediate
|
| 101 |
+
1. β
Test the enhanced UI with Replicate backend
|
| 102 |
+
2. β
Generate a few test videos
|
| 103 |
+
3. β
Share your GitHub repo
|
| 104 |
+
|
| 105 |
+
### Optional Enhancements
|
| 106 |
+
- [ ] Add video history/gallery feature
|
| 107 |
+
- [ ] Add batch generation (multiple prompts)
|
| 108 |
+
- [ ] Add video editing features (trim, merge)
|
| 109 |
+
- [ ] Add more AI models (Stable Video Diffusion, etc.)
|
| 110 |
+
- [ ] Create Docker container for easy deployment
|
| 111 |
+
- [ ] Add user authentication
|
| 112 |
+
- [ ] Add video download queue
|
| 113 |
+
- [ ] Create mobile-responsive design
|
| 114 |
+
|
| 115 |
+
### Sharing
|
| 116 |
+
- [ ] Share on Reddit (r/MachineLearning, r/StableDiffusion)
|
| 117 |
+
- [ ] Share on Twitter/X with #AIVideo #TextToVideo
|
| 118 |
+
- [ ] Share on LinkedIn
|
| 119 |
+
- [ ] Create demo video and upload to YouTube
|
| 120 |
+
- [ ] Write a blog post about the project
|
| 121 |
+
|
| 122 |
+
---
|
| 123 |
+
|
| 124 |
+
## π‘ Usage Tips
|
| 125 |
+
|
| 126 |
+
### For Best Results
|
| 127 |
+
1. **Be Descriptive** - Include details about lighting, camera angle, movement
|
| 128 |
+
2. **Use Camera Controls** - Try zoom in, pan, tracking shots
|
| 129 |
+
3. **Add Visual Effects** - Cinematic lighting, slow motion, etc.
|
| 130 |
+
4. **Experiment with Styles** - Realistic, anime, 3D render
|
| 131 |
+
5. **Keep Prompts Focused** - One main subject or action
|
| 132 |
+
|
| 133 |
+
### Example Prompts
|
| 134 |
+
```
|
| 135 |
+
"A golden retriever running through a field of flowers at sunset,
|
| 136 |
+
cinematic lighting, aerial view, slow motion"
|
| 137 |
+
|
| 138 |
+
"Ocean waves crashing on a rocky shore, dramatic lighting,
|
| 139 |
+
zoom in, golden hour"
|
| 140 |
+
|
| 141 |
+
"A cat playing with a ball of yarn, close-up shot,
|
| 142 |
+
soft lighting, photorealistic"
|
| 143 |
+
```
|
| 144 |
+
|
| 145 |
+
---
|
| 146 |
+
|
| 147 |
+
## π Cost Estimation (Replicate)
|
| 148 |
+
|
| 149 |
+
- Single video: ~$0.05-0.10
|
| 150 |
+
- 10 videos: ~$0.50-1.00
|
| 151 |
+
- 100 videos: ~$5.00-10.00
|
| 152 |
+
|
| 153 |
+
**Note:** Replicate gives free credits for new accounts!
|
| 154 |
+
|
| 155 |
+
---
|
| 156 |
+
|
| 157 |
+
## π Security Reminders
|
| 158 |
+
|
| 159 |
+
- β
.env file is in .gitignore (API tokens safe)
|
| 160 |
+
- β
Generated videos excluded from git
|
| 161 |
+
- β
Log files excluded from git
|
| 162 |
+
- β
.env.example provided for others
|
| 163 |
+
|
| 164 |
+
**Never commit:**
|
| 165 |
+
- Real API tokens
|
| 166 |
+
- Personal videos
|
| 167 |
+
- Log files with sensitive data
|
| 168 |
+
|
| 169 |
+
---
|
| 170 |
+
|
| 171 |
+
## π Congratulations!
|
| 172 |
+
|
| 173 |
+
You now have a **production-ready AI video generator** with:
|
| 174 |
+
- Multiple backend options
|
| 175 |
+
- Advanced UI features
|
| 176 |
+
- Complete documentation
|
| 177 |
+
- GitHub repository
|
| 178 |
+
- Working Replicate API integration
|
| 179 |
+
|
| 180 |
+
**Your project is live at:**
|
| 181 |
+
https://github.com/LakshmiSravya123/VideoAI
|
| 182 |
+
|
| 183 |
+
---
|
| 184 |
+
|
| 185 |
+
## π Support
|
| 186 |
+
|
| 187 |
+
If you need help:
|
| 188 |
+
1. Check SOLUTION_GUIDE.md
|
| 189 |
+
2. Check GitHub Issues
|
| 190 |
+
3. Review documentation files
|
| 191 |
+
4. Test with Demo Mode first
|
| 192 |
+
|
| 193 |
+
---
|
| 194 |
+
|
| 195 |
+
**Happy Video Generating! π¬β¨**
|
QUICKSTART_LOCAL.md
ADDED
|
@@ -0,0 +1,57 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π Quick Start Guide - Local Video Generator
|
| 2 |
+
|
| 3 |
+
Get up and running in 3 simple steps!
|
| 4 |
+
|
| 5 |
+
## Step 1: Install PyTorch
|
| 6 |
+
|
| 7 |
+
Choose based on your hardware:
|
| 8 |
+
|
| 9 |
+
### Option A: GPU (NVIDIA) - Recommended
|
| 10 |
+
```bash
|
| 11 |
+
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
|
| 12 |
+
```
|
| 13 |
+
|
| 14 |
+
### Option B: CPU Only
|
| 15 |
+
```bash
|
| 16 |
+
pip install torch torchvision torchaudio
|
| 17 |
+
```
|
| 18 |
+
|
| 19 |
+
## Step 2: Run the Startup Script
|
| 20 |
+
|
| 21 |
+
### On macOS/Linux:
|
| 22 |
+
```bash
|
| 23 |
+
./start_local.sh
|
| 24 |
+
```
|
| 25 |
+
|
| 26 |
+
### On Windows:
|
| 27 |
+
```batch
|
| 28 |
+
start_local.bat
|
| 29 |
+
```
|
| 30 |
+
|
| 31 |
+
The script will:
|
| 32 |
+
- Create a virtual environment
|
| 33 |
+
- Install all dependencies
|
| 34 |
+
- Start the backend server
|
| 35 |
+
|
| 36 |
+
## Step 3: Open the Web Interface
|
| 37 |
+
|
| 38 |
+
1. Open `index_local.html` in your browser
|
| 39 |
+
2. Click **"π Initialize Model"** (first time only, takes 2-5 minutes)
|
| 40 |
+
3. Enter a prompt and click **"π¬ Generate Video"**
|
| 41 |
+
4. Wait 30-120 seconds (GPU) or 5-10 minutes (CPU)
|
| 42 |
+
5. Enjoy your AI-generated video! π
|
| 43 |
+
|
| 44 |
+
## β‘ Quick Tips
|
| 45 |
+
|
| 46 |
+
- **First run**: Model download (~5GB) happens automatically
|
| 47 |
+
- **GPU users**: 30-120 seconds per video
|
| 48 |
+
- **CPU users**: 5-10 minutes per video (be patient!)
|
| 49 |
+
- **Example prompts**: Click the example buttons in the UI
|
| 50 |
+
|
| 51 |
+
## π Need Help?
|
| 52 |
+
|
| 53 |
+
See `README_LOCAL.md` for detailed documentation and troubleshooting.
|
| 54 |
+
|
| 55 |
+
---
|
| 56 |
+
|
| 57 |
+
**That's it! Happy video generation! π¬β¨**
|
README.md
ADDED
|
@@ -0,0 +1,242 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π¬ AI Video Generator - Free Hailuo Clone
|
| 2 |
+
|
| 3 |
+
A free, open-source AI-powered video generation application that creates videos from text prompts using Hugging Face's Zeroscope model.
|
| 4 |
+
|
| 5 |
+
## β¨ Features
|
| 6 |
+
|
| 7 |
+
- π¨ **Text-to-Video Generation**: Create videos from simple text descriptions
|
| 8 |
+
- π― **Modern UI**: Beautiful, responsive interface with gradient design
|
| 9 |
+
- π₯ **Video Download**: Download generated videos directly to your device
|
| 10 |
+
- β‘ **Real-time Status**: Live feedback with loading indicators and status messages
|
| 11 |
+
- π **Input Validation**: Robust validation and error handling
|
| 12 |
+
- π **Health Monitoring**: Server health check endpoint
|
| 13 |
+
- π **Example Prompts**: Quick-start with pre-made prompt examples
|
| 14 |
+
- π **Character Counter**: Track your prompt length in real-time
|
| 15 |
+
|
| 16 |
+
## π Quick Start
|
| 17 |
+
|
| 18 |
+
### Prerequisites
|
| 19 |
+
|
| 20 |
+
- Python 3.8 or higher
|
| 21 |
+
- pip (Python package manager)
|
| 22 |
+
- Modern web browser (Chrome, Firefox, Safari, or Edge)
|
| 23 |
+
|
| 24 |
+
### Installation
|
| 25 |
+
|
| 26 |
+
1. **Clone or navigate to the project directory**:
|
| 27 |
+
```bash
|
| 28 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 29 |
+
```
|
| 30 |
+
|
| 31 |
+
2. **Create and activate a virtual environment** (recommended):
|
| 32 |
+
```bash
|
| 33 |
+
python3 -m venv venv
|
| 34 |
+
source venv/bin/activate # On Windows: venv\Scripts\activate
|
| 35 |
+
```
|
| 36 |
+
|
| 37 |
+
3. **Install dependencies**:
|
| 38 |
+
```bash
|
| 39 |
+
pip install -r requirements.txt
|
| 40 |
+
```
|
| 41 |
+
|
| 42 |
+
4. **Set up environment variables**:
|
| 43 |
+
```bash
|
| 44 |
+
cp .env.example .env
|
| 45 |
+
```
|
| 46 |
+
|
| 47 |
+
Edit `.env` file if you need to customize settings:
|
| 48 |
+
```env
|
| 49 |
+
HF_SPACE_URL=https://cerspense-zeroscope-v2-xl.hf.space/
|
| 50 |
+
FLASK_PORT=5000
|
| 51 |
+
FLASK_DEBUG=False
|
| 52 |
+
```
|
| 53 |
+
|
| 54 |
+
### Running the Application
|
| 55 |
+
|
| 56 |
+
1. **Start the backend server**:
|
| 57 |
+
```bash
|
| 58 |
+
python backend.py
|
| 59 |
+
```
|
| 60 |
+
|
| 61 |
+
You should see:
|
| 62 |
+
```
|
| 63 |
+
INFO - Starting Flask server on port 5000 (debug=False)
|
| 64 |
+
INFO - Successfully connected to Hugging Face Space
|
| 65 |
+
```
|
| 66 |
+
|
| 67 |
+
2. **Open the frontend**:
|
| 68 |
+
- Simply open `index.html` in your web browser
|
| 69 |
+
- Or use a local server:
|
| 70 |
+
```bash
|
| 71 |
+
python -m http.server 8000
|
| 72 |
+
```
|
| 73 |
+
Then visit: `http://localhost:8000`
|
| 74 |
+
|
| 75 |
+
3. **Generate your first video**:
|
| 76 |
+
- Enter a text prompt (e.g., "A dog running in a park")
|
| 77 |
+
- Click "Generate Video"
|
| 78 |
+
- Wait 10-60 seconds for the AI to create your video
|
| 79 |
+
- Watch and download your generated video!
|
| 80 |
+
|
| 81 |
+
## π Project Structure
|
| 82 |
+
|
| 83 |
+
```
|
| 84 |
+
hailuo-clone/
|
| 85 |
+
βββ backend.py # Flask backend server
|
| 86 |
+
βββ index.html # Frontend UI
|
| 87 |
+
βββ requirements.txt # Python dependencies
|
| 88 |
+
βββ .env.example # Environment variables template
|
| 89 |
+
βββ .gitignore # Git ignore rules
|
| 90 |
+
βββ README.md # This file
|
| 91 |
+
βββ app.log # Application logs (created at runtime)
|
| 92 |
+
```
|
| 93 |
+
|
| 94 |
+
## π οΈ Configuration
|
| 95 |
+
|
| 96 |
+
### Environment Variables
|
| 97 |
+
|
| 98 |
+
| Variable | Default | Description |
|
| 99 |
+
|----------|---------|-------------|
|
| 100 |
+
| `HF_SPACE_URL` | `https://cerspense-zeroscope-v2-xl.hf.space/` | Hugging Face Space URL |
|
| 101 |
+
| `FLASK_PORT` | `5000` | Backend server port |
|
| 102 |
+
| `FLASK_DEBUG` | `False` | Enable/disable debug mode |
|
| 103 |
+
|
| 104 |
+
### Prompt Limits
|
| 105 |
+
|
| 106 |
+
- **Minimum length**: 3 characters
|
| 107 |
+
- **Maximum length**: 500 characters
|
| 108 |
+
|
| 109 |
+
## π§ API Endpoints
|
| 110 |
+
|
| 111 |
+
### Health Check
|
| 112 |
+
```
|
| 113 |
+
GET /health
|
| 114 |
+
```
|
| 115 |
+
Returns server health status and client initialization state.
|
| 116 |
+
|
| 117 |
+
**Response**:
|
| 118 |
+
```json
|
| 119 |
+
{
|
| 120 |
+
"status": "healthy",
|
| 121 |
+
"timestamp": "2025-10-02T17:38:51-07:00",
|
| 122 |
+
"client_initialized": true
|
| 123 |
+
}
|
| 124 |
+
```
|
| 125 |
+
|
| 126 |
+
### Generate Video
|
| 127 |
+
```
|
| 128 |
+
POST /generate-video
|
| 129 |
+
Content-Type: application/json
|
| 130 |
+
```
|
| 131 |
+
|
| 132 |
+
**Request Body**:
|
| 133 |
+
```json
|
| 134 |
+
{
|
| 135 |
+
"prompt": "A dog running in a park"
|
| 136 |
+
}
|
| 137 |
+
```
|
| 138 |
+
|
| 139 |
+
**Success Response** (200):
|
| 140 |
+
```json
|
| 141 |
+
{
|
| 142 |
+
"video_url": "https://...",
|
| 143 |
+
"prompt": "A dog running in a park",
|
| 144 |
+
"timestamp": "2025-10-02T17:38:51-07:00"
|
| 145 |
+
}
|
| 146 |
+
```
|
| 147 |
+
|
| 148 |
+
**Error Response** (400/500/503/504):
|
| 149 |
+
```json
|
| 150 |
+
{
|
| 151 |
+
"error": "Error message here"
|
| 152 |
+
}
|
| 153 |
+
```
|
| 154 |
+
|
| 155 |
+
## π Troubleshooting
|
| 156 |
+
|
| 157 |
+
### Backend Issues
|
| 158 |
+
|
| 159 |
+
**Problem**: `ModuleNotFoundError: No module named 'flask'`
|
| 160 |
+
- **Solution**: Install dependencies: `pip install -r requirements.txt`
|
| 161 |
+
|
| 162 |
+
**Problem**: `Failed to initialize Gradio client`
|
| 163 |
+
- **Solution**: Check your internet connection and verify the Hugging Face Space URL is correct
|
| 164 |
+
|
| 165 |
+
**Problem**: `Port 5000 already in use`
|
| 166 |
+
- **Solution**: Change the port in `.env` file or stop the process using port 5000
|
| 167 |
+
|
| 168 |
+
### Frontend Issues
|
| 169 |
+
|
| 170 |
+
**Problem**: "Cannot connect to server" error
|
| 171 |
+
- **Solution**: Ensure the backend is running on port 5000
|
| 172 |
+
|
| 173 |
+
**Problem**: Video doesn't play
|
| 174 |
+
- **Solution**: Check browser console for errors. Some browsers block autoplay.
|
| 175 |
+
|
| 176 |
+
**Problem**: CORS errors
|
| 177 |
+
- **Solution**: The backend has CORS enabled. Make sure you're using the correct URL.
|
| 178 |
+
|
| 179 |
+
## π Logging
|
| 180 |
+
|
| 181 |
+
Application logs are saved to `app.log` in the project directory. Logs include:
|
| 182 |
+
- Server startup/shutdown events
|
| 183 |
+
- Video generation requests
|
| 184 |
+
- Errors and exceptions
|
| 185 |
+
- Client connection status
|
| 186 |
+
|
| 187 |
+
View logs in real-time:
|
| 188 |
+
```bash
|
| 189 |
+
tail -f app.log
|
| 190 |
+
```
|
| 191 |
+
|
| 192 |
+
## π Security Notes
|
| 193 |
+
|
| 194 |
+
- Never commit `.env` file to version control
|
| 195 |
+
- Don't run in debug mode in production (`FLASK_DEBUG=False`)
|
| 196 |
+
- Consider adding rate limiting for production use
|
| 197 |
+
- Validate and sanitize all user inputs (already implemented)
|
| 198 |
+
|
| 199 |
+
## π Production Deployment
|
| 200 |
+
|
| 201 |
+
For production deployment, consider:
|
| 202 |
+
|
| 203 |
+
1. **Use a production WSGI server** (e.g., Gunicorn):
|
| 204 |
+
```bash
|
| 205 |
+
pip install gunicorn
|
| 206 |
+
gunicorn -w 4 -b 0.0.0.0:5000 backend:app
|
| 207 |
+
```
|
| 208 |
+
|
| 209 |
+
2. **Set up a reverse proxy** (e.g., Nginx)
|
| 210 |
+
|
| 211 |
+
3. **Enable HTTPS** with SSL certificates
|
| 212 |
+
|
| 213 |
+
4. **Add rate limiting** to prevent abuse
|
| 214 |
+
|
| 215 |
+
5. **Set up monitoring** and alerting
|
| 216 |
+
|
| 217 |
+
6. **Use environment variables** for sensitive configuration
|
| 218 |
+
|
| 219 |
+
## π€ Contributing
|
| 220 |
+
|
| 221 |
+
Contributions are welcome! Please feel free to submit issues or pull requests.
|
| 222 |
+
|
| 223 |
+
## π License
|
| 224 |
+
|
| 225 |
+
This project is open source and available under the MIT License.
|
| 226 |
+
|
| 227 |
+
## π Acknowledgments
|
| 228 |
+
|
| 229 |
+
- Built with [Flask](https://flask.palletsprojects.com/)
|
| 230 |
+
- Uses [Gradio Client](https://www.gradio.app/) for Hugging Face integration
|
| 231 |
+
- Video generation powered by [Zeroscope](https://huggingface.co/cerspense/zeroscope_v2_XL)
|
| 232 |
+
|
| 233 |
+
## π Support
|
| 234 |
+
|
| 235 |
+
If you encounter any issues or have questions:
|
| 236 |
+
1. Check the troubleshooting section above
|
| 237 |
+
2. Review the logs in `app.log`
|
| 238 |
+
3. Open an issue on the project repository
|
| 239 |
+
|
| 240 |
+
---
|
| 241 |
+
|
| 242 |
+
**Made with β€οΈ for the AI community**
|
README_ENHANCED.md
ADDED
|
@@ -0,0 +1,124 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π¬ AI Video Generator Pro - Hailuo-Inspired Edition
|
| 2 |
+
|
| 3 |
+
A powerful, feature-rich AI video generation application inspired by Hailuo AI, supporting multiple state-of-the-art models from Hugging Face.
|
| 4 |
+
|
| 5 |
+
## β¨ Key Features
|
| 6 |
+
|
| 7 |
+
### π― Multiple Generation Modes
|
| 8 |
+
- **Text-to-Video**: Create videos from text descriptions
|
| 9 |
+
- **Image-to-Video**: Animate static images with AI
|
| 10 |
+
|
| 11 |
+
### π€ Multiple AI Models
|
| 12 |
+
- **CogVideoX-5B**: High-quality 6-second videos at 720p
|
| 13 |
+
- **LTX Video**: Fast and efficient generation
|
| 14 |
+
- **Stable Video Diffusion**: Professional image animation
|
| 15 |
+
- **AnimateDiff**: Advanced motion control
|
| 16 |
+
- **Zeroscope V2 XL**: Fast and reliable baseline
|
| 17 |
+
|
| 18 |
+
### π₯ Hailuo-Inspired Features
|
| 19 |
+
- **Camera Movements**: Zoom, pan, tilt, tracking shots, dolly, crane, shake
|
| 20 |
+
- **Visual Effects**: Cinematic lighting, fog, rain, slow motion
|
| 21 |
+
- **Video Styles**: Realistic, anime, cartoon, 3D render, vintage, sci-fi, fantasy
|
| 22 |
+
- **Enhanced Prompts**: Automatic prompt enhancement with cinematic tags
|
| 23 |
+
|
| 24 |
+
## π Quick Start
|
| 25 |
+
|
| 26 |
+
### Installation
|
| 27 |
+
|
| 28 |
+
```bash
|
| 29 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 30 |
+
pip install -r requirements.txt
|
| 31 |
+
```
|
| 32 |
+
|
| 33 |
+
### Running
|
| 34 |
+
|
| 35 |
+
**Enhanced version (recommended)**:
|
| 36 |
+
```bash
|
| 37 |
+
python backend_enhanced.py
|
| 38 |
+
```
|
| 39 |
+
Then open `index_enhanced.html` in your browser.
|
| 40 |
+
|
| 41 |
+
**Basic version**:
|
| 42 |
+
```bash
|
| 43 |
+
python backend.py
|
| 44 |
+
```
|
| 45 |
+
Then open `index.html` in your browser.
|
| 46 |
+
|
| 47 |
+
## π Usage Guide
|
| 48 |
+
|
| 49 |
+
### Text-to-Video
|
| 50 |
+
1. Select "Text to Video" mode
|
| 51 |
+
2. Choose an AI model
|
| 52 |
+
3. Enter your prompt (3-1000 characters)
|
| 53 |
+
4. Add camera movements, effects, or styles (optional)
|
| 54 |
+
5. Click "Generate Video"
|
| 55 |
+
6. Wait 30-120 seconds
|
| 56 |
+
7. Download or share
|
| 57 |
+
|
| 58 |
+
### Image-to-Video
|
| 59 |
+
1. Select "Image to Video" mode
|
| 60 |
+
2. Upload an image
|
| 61 |
+
3. Add animation prompt (optional)
|
| 62 |
+
4. Select image-to-video model
|
| 63 |
+
5. Generate
|
| 64 |
+
|
| 65 |
+
## π― Example Prompts
|
| 66 |
+
|
| 67 |
+
**Nature**: "A majestic waterfall cascading down mossy rocks in a lush rainforest"
|
| 68 |
+
|
| 69 |
+
**Action**: "A sports car drifting around a corner" + Camera: [Tracking shot]
|
| 70 |
+
|
| 71 |
+
**Fantasy**: "A dragon flying over a medieval castle at dawn" + Style: fantasy, magical
|
| 72 |
+
|
| 73 |
+
## π οΈ Available Models
|
| 74 |
+
|
| 75 |
+
| Model | Type | Resolution | Best For |
|
| 76 |
+
|-------|------|------------|----------|
|
| 77 |
+
| CogVideoX-5B | T2V | 720x480 | High quality |
|
| 78 |
+
| LTX Video | T2V | 704x480 | Fast generation |
|
| 79 |
+
| Stable Video Diffusion | I2V | 576x576 | Image animation |
|
| 80 |
+
| AnimateDiff | I2V | 512x512 | Motion control |
|
| 81 |
+
| Zeroscope | T2V | 512x320 | Quick tests |
|
| 82 |
+
|
| 83 |
+
## π Project Files
|
| 84 |
+
|
| 85 |
+
- `backend_enhanced.py` - Enhanced backend with multiple models
|
| 86 |
+
- `index_enhanced.html` - Full-featured frontend
|
| 87 |
+
- `models_config.py` - Model configurations
|
| 88 |
+
- `requirements.txt` - Dependencies
|
| 89 |
+
|
| 90 |
+
## π§ API Endpoints
|
| 91 |
+
|
| 92 |
+
**GET /health** - Server health check
|
| 93 |
+
**GET /models** - List available models and options
|
| 94 |
+
**POST /generate-video** - Text-to-video generation
|
| 95 |
+
**POST /generate-video-from-image** - Image-to-video generation
|
| 96 |
+
|
| 97 |
+
## π‘ Tips
|
| 98 |
+
|
| 99 |
+
- Start with Zeroscope for quick tests
|
| 100 |
+
- Use CogVideoX-5B for final high-quality output
|
| 101 |
+
- Combine camera movements with visual effects
|
| 102 |
+
- Keep prompts descriptive and specific
|
| 103 |
+
- Image-to-video works best with clear, well-lit images
|
| 104 |
+
|
| 105 |
+
## π Troubleshooting
|
| 106 |
+
|
| 107 |
+
**Connection errors**: Check internet and Hugging Face availability
|
| 108 |
+
**Timeouts**: Service may be busy, try again or use faster model
|
| 109 |
+
**Slow generation**: Normal for high-quality models (30-120s)
|
| 110 |
+
|
| 111 |
+
## π What's New vs Basic Version
|
| 112 |
+
|
| 113 |
+
β
Multiple AI models (5 models vs 1)
|
| 114 |
+
β
Image-to-video capability
|
| 115 |
+
β
Camera movement controls (Hailuo-style)
|
| 116 |
+
β
Visual effects and styles
|
| 117 |
+
β
Enhanced prompt building
|
| 118 |
+
β
Better UI with dual panels
|
| 119 |
+
β
Categorized example prompts
|
| 120 |
+
β
Model information display
|
| 121 |
+
|
| 122 |
+
---
|
| 123 |
+
|
| 124 |
+
**Made with β€οΈ inspired by Hailuo AI**
|
README_GITHUB.md
ADDED
|
@@ -0,0 +1,294 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π¬ AI Video Generator - Hailuo Clone
|
| 2 |
+
|
| 3 |
+
A powerful AI video generation platform with multiple backend options. Generate stunning videos from text prompts using state-of-the-art AI models!
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+

|
| 7 |
+

|
| 8 |
+
|
| 9 |
+
## β¨ Features
|
| 10 |
+
|
| 11 |
+
- π₯ **Multiple AI Models** - CogVideoX, Hailuo Video-01, HunyuanVideo
|
| 12 |
+
- π **3 Backend Options** - Replicate API, Local Generation, or HF Spaces
|
| 13 |
+
- π¨ **Advanced Controls** - Camera movements, visual effects, styles (Hailuo-inspired)
|
| 14 |
+
- π» **Beautiful UI** - Modern, responsive web interface
|
| 15 |
+
- π **Privacy Options** - Run completely locally or use cloud APIs
|
| 16 |
+
- β‘ **Fast Generation** - 30-60 seconds with Replicate, or run locally
|
| 17 |
+
|
| 18 |
+
## π― Quick Start (3 Options)
|
| 19 |
+
|
| 20 |
+
### Option 1: Replicate API (Recommended - Most Reliable)
|
| 21 |
+
|
| 22 |
+
**Best for:** Fast, reliable video generation with minimal setup
|
| 23 |
+
|
| 24 |
+
1. **Get Replicate API Token**
|
| 25 |
+
```bash
|
| 26 |
+
# Sign up at https://replicate.com
|
| 27 |
+
# Get token from https://replicate.com/account/api-tokens
|
| 28 |
+
```
|
| 29 |
+
|
| 30 |
+
2. **Setup**
|
| 31 |
+
```bash
|
| 32 |
+
# Clone the repo
|
| 33 |
+
git clone <your-repo-url>
|
| 34 |
+
cd hailuo-clone
|
| 35 |
+
|
| 36 |
+
# Create .env file
|
| 37 |
+
echo "REPLICATE_API_TOKEN=your_token_here" > .env
|
| 38 |
+
|
| 39 |
+
# Install dependencies
|
| 40 |
+
pip install -r requirements.txt
|
| 41 |
+
pip install replicate
|
| 42 |
+
|
| 43 |
+
# Run backend
|
| 44 |
+
python backend_replicate.py
|
| 45 |
+
```
|
| 46 |
+
|
| 47 |
+
3. **Open UI**
|
| 48 |
+
- Open `index.html` in your browser
|
| 49 |
+
- Enter a prompt and generate!
|
| 50 |
+
|
| 51 |
+
**Cost:** ~$0.05-0.10 per video
|
| 52 |
+
|
| 53 |
+
---
|
| 54 |
+
|
| 55 |
+
### Option 2: Local Generation (Free & Private)
|
| 56 |
+
|
| 57 |
+
**Best for:** Complete privacy, offline use, no API costs
|
| 58 |
+
|
| 59 |
+
1. **Install PyTorch**
|
| 60 |
+
```bash
|
| 61 |
+
# For GPU (NVIDIA)
|
| 62 |
+
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
|
| 63 |
+
|
| 64 |
+
# For CPU only
|
| 65 |
+
pip install torch torchvision torchaudio
|
| 66 |
+
```
|
| 67 |
+
|
| 68 |
+
2. **Install Dependencies**
|
| 69 |
+
```bash
|
| 70 |
+
pip install -r requirements_local.txt
|
| 71 |
+
```
|
| 72 |
+
|
| 73 |
+
3. **Run**
|
| 74 |
+
```bash
|
| 75 |
+
# Start backend
|
| 76 |
+
python backend_local.py
|
| 77 |
+
|
| 78 |
+
# Open index_local.html in browser
|
| 79 |
+
```
|
| 80 |
+
|
| 81 |
+
**Requirements:**
|
| 82 |
+
- GPU: RTX 3060+ (30-120s per video)
|
| 83 |
+
- CPU: 16GB RAM (5-10 min per video)
|
| 84 |
+
- Storage: 10GB for model
|
| 85 |
+
|
| 86 |
+
---
|
| 87 |
+
|
| 88 |
+
### Option 3: Hugging Face Spaces (Free but Unreliable)
|
| 89 |
+
|
| 90 |
+
**Best for:** Testing, no setup required
|
| 91 |
+
|
| 92 |
+
1. **Install Dependencies**
|
| 93 |
+
```bash
|
| 94 |
+
pip install -r requirements.txt
|
| 95 |
+
```
|
| 96 |
+
|
| 97 |
+
2. **Run**
|
| 98 |
+
```bash
|
| 99 |
+
python backend_enhanced.py
|
| 100 |
+
# Open index_enhanced.html
|
| 101 |
+
```
|
| 102 |
+
|
| 103 |
+
**Note:** HF Spaces may be sleeping/overloaded. Use Demo Mode for testing.
|
| 104 |
+
|
| 105 |
+
---
|
| 106 |
+
|
| 107 |
+
## π Project Structure
|
| 108 |
+
|
| 109 |
+
```
|
| 110 |
+
hailuo-clone/
|
| 111 |
+
βββ Backend Options
|
| 112 |
+
β βββ backend_replicate.py # Replicate API (recommended)
|
| 113 |
+
β βββ backend_local.py # Local generation
|
| 114 |
+
β βββ backend_enhanced.py # HF Spaces (multiple models)
|
| 115 |
+
β βββ backend_simple.py # Demo mode
|
| 116 |
+
β
|
| 117 |
+
βββ Frontend
|
| 118 |
+
β βββ index.html # Simple UI (works with all backends)
|
| 119 |
+
β βββ index_enhanced.html # Advanced UI with camera controls
|
| 120 |
+
β βββ index_local.html # Local generation UI
|
| 121 |
+
β
|
| 122 |
+
βββ Configuration
|
| 123 |
+
β βββ models_config.py # Model configurations
|
| 124 |
+
β βββ requirements.txt # Basic dependencies
|
| 125 |
+
β βββ requirements_local.txt # Local generation dependencies
|
| 126 |
+
β βββ .env.example # Environment variables template
|
| 127 |
+
β
|
| 128 |
+
βββ Documentation
|
| 129 |
+
βββ README.md # This file
|
| 130 |
+
βββ SOLUTION_GUIDE.md # Troubleshooting guide
|
| 131 |
+
βββ README_LOCAL.md # Local setup guide
|
| 132 |
+
```
|
| 133 |
+
|
| 134 |
+
## π¨ Available Models
|
| 135 |
+
|
| 136 |
+
### Replicate API
|
| 137 |
+
- **Hailuo Video-01** (MiniMax) - The real Hailuo model! π₯
|
| 138 |
+
- **CogVideoX-5B** - High quality text-to-video
|
| 139 |
+
|
| 140 |
+
### Local Generation
|
| 141 |
+
- **CogVideoX-2B** - Runs on your computer
|
| 142 |
+
|
| 143 |
+
### Hugging Face Spaces
|
| 144 |
+
- **CogVideoX-5B** - High quality (when available)
|
| 145 |
+
- **CogVideoX-2B** - Faster version
|
| 146 |
+
- **HunyuanVideo** - Tencent's SOTA model
|
| 147 |
+
- **Stable Video Diffusion** - Image-to-video
|
| 148 |
+
|
| 149 |
+
## π¬ Usage Examples
|
| 150 |
+
|
| 151 |
+
### Basic Text-to-Video
|
| 152 |
+
```python
|
| 153 |
+
# Using any backend on localhost:5000
|
| 154 |
+
import requests
|
| 155 |
+
|
| 156 |
+
response = requests.post('http://localhost:5000/generate-video', json={
|
| 157 |
+
'prompt': 'A golden retriever running through a field of flowers at sunset'
|
| 158 |
+
})
|
| 159 |
+
|
| 160 |
+
video_url = response.json()['video_url']
|
| 161 |
+
```
|
| 162 |
+
|
| 163 |
+
### With Replicate (Specific Model)
|
| 164 |
+
```python
|
| 165 |
+
response = requests.post('http://localhost:5000/generate-video', json={
|
| 166 |
+
'prompt': 'A cat playing with yarn',
|
| 167 |
+
'model': 'hailuo' # or 'cogvideox'
|
| 168 |
+
})
|
| 169 |
+
```
|
| 170 |
+
|
| 171 |
+
### Advanced (Camera Controls)
|
| 172 |
+
```python
|
| 173 |
+
response = requests.post('http://localhost:5000/generate-video', json={
|
| 174 |
+
'prompt': 'Ocean waves at sunset',
|
| 175 |
+
'camera_movement': '[Zoom in]',
|
| 176 |
+
'visual_effect': 'cinematic lighting, film grain',
|
| 177 |
+
'style': 'photorealistic, 4k, high detail'
|
| 178 |
+
})
|
| 179 |
+
```
|
| 180 |
+
|
| 181 |
+
## π§ Configuration
|
| 182 |
+
|
| 183 |
+
### Environment Variables (.env)
|
| 184 |
+
```bash
|
| 185 |
+
# Replicate API
|
| 186 |
+
REPLICATE_API_TOKEN=your_token_here
|
| 187 |
+
|
| 188 |
+
# Flask Configuration
|
| 189 |
+
FLASK_PORT=5000
|
| 190 |
+
FLASK_DEBUG=False
|
| 191 |
+
|
| 192 |
+
# Model Selection
|
| 193 |
+
DEFAULT_MODEL=cogvideox-5b
|
| 194 |
+
```
|
| 195 |
+
|
| 196 |
+
### Model Configuration (models_config.py)
|
| 197 |
+
- Camera movements (zoom, pan, tilt, etc.)
|
| 198 |
+
- Visual effects (cinematic, dramatic, slow-motion)
|
| 199 |
+
- Video styles (realistic, anime, 3D render)
|
| 200 |
+
- Example prompts by category
|
| 201 |
+
|
| 202 |
+
## π Performance Comparison
|
| 203 |
+
|
| 204 |
+
| Backend | Setup Time | Speed | Quality | Cost | Reliability |
|
| 205 |
+
|---------|-----------|-------|---------|------|-------------|
|
| 206 |
+
| **Replicate** | 5 min | 30-60s | βββββ | $0.05-0.10 | βββββ |
|
| 207 |
+
| **Local (GPU)** | 30 min | 30-120s | ββββ | Free | βββββ |
|
| 208 |
+
| **Local (CPU)** | 30 min | 5-10 min | ββββ | Free | βββββ |
|
| 209 |
+
| **HF Spaces** | Instant | 30-60s | ββββ | Free | ββ |
|
| 210 |
+
|
| 211 |
+
## π Deployment
|
| 212 |
+
|
| 213 |
+
### Local Development
|
| 214 |
+
```bash
|
| 215 |
+
python backend_replicate.py
|
| 216 |
+
# Open http://localhost:5000
|
| 217 |
+
```
|
| 218 |
+
|
| 219 |
+
### Production (with Gunicorn)
|
| 220 |
+
```bash
|
| 221 |
+
pip install gunicorn
|
| 222 |
+
gunicorn -w 4 -b 0.0.0.0:5000 backend_replicate:app
|
| 223 |
+
```
|
| 224 |
+
|
| 225 |
+
### Docker (Optional)
|
| 226 |
+
```dockerfile
|
| 227 |
+
FROM python:3.9-slim
|
| 228 |
+
WORKDIR /app
|
| 229 |
+
COPY requirements.txt .
|
| 230 |
+
RUN pip install -r requirements.txt
|
| 231 |
+
COPY . .
|
| 232 |
+
CMD ["python", "backend_replicate.py"]
|
| 233 |
+
```
|
| 234 |
+
|
| 235 |
+
## π Troubleshooting
|
| 236 |
+
|
| 237 |
+
### "Model provider unreachable"
|
| 238 |
+
- **Solution:** Use Replicate API (`backend_replicate.py`) instead of HF Spaces
|
| 239 |
+
- HF Spaces are often sleeping/overloaded
|
| 240 |
+
|
| 241 |
+
### "Out of memory" (Local)
|
| 242 |
+
- **Solution:** Use CPU mode or reduce batch size
|
| 243 |
+
- Close other GPU applications
|
| 244 |
+
|
| 245 |
+
### "Too slow" (Local CPU)
|
| 246 |
+
- **Expected:** CPU generation takes 5-10 minutes
|
| 247 |
+
- **Solution:** Use Replicate API or get a GPU
|
| 248 |
+
|
| 249 |
+
### Port 5000 already in use
|
| 250 |
+
- **Solution:** Kill the process or change port in backend file
|
| 251 |
+
|
| 252 |
+
See [SOLUTION_GUIDE.md](SOLUTION_GUIDE.md) for detailed troubleshooting.
|
| 253 |
+
|
| 254 |
+
## π Documentation
|
| 255 |
+
|
| 256 |
+
- [SOLUTION_GUIDE.md](SOLUTION_GUIDE.md) - Complete troubleshooting guide
|
| 257 |
+
- [README_LOCAL.md](README_LOCAL.md) - Local generation setup
|
| 258 |
+
- [REPLICATE_SETUP.md](REPLICATE_SETUP.md) - Replicate API setup
|
| 259 |
+
- [QUICKSTART_LOCAL.md](QUICKSTART_LOCAL.md) - Quick local setup
|
| 260 |
+
|
| 261 |
+
## π€ Contributing
|
| 262 |
+
|
| 263 |
+
Contributions are welcome! Please:
|
| 264 |
+
|
| 265 |
+
1. Fork the repository
|
| 266 |
+
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
|
| 267 |
+
3. Commit your changes (`git commit -m 'Add amazing feature'`)
|
| 268 |
+
4. Push to the branch (`git push origin feature/amazing-feature`)
|
| 269 |
+
5. Open a Pull Request
|
| 270 |
+
|
| 271 |
+
## π License
|
| 272 |
+
|
| 273 |
+
This project is licensed under the MIT License - see the LICENSE file for details.
|
| 274 |
+
|
| 275 |
+
## π Acknowledgments
|
| 276 |
+
|
| 277 |
+
- [CogVideoX](https://github.com/THUDM/CogVideo) by Tsinghua University
|
| 278 |
+
- [Hailuo Video-01](https://replicate.com/minimax/video-01) by MiniMax
|
| 279 |
+
- [Replicate](https://replicate.com) for API infrastructure
|
| 280 |
+
- [Hugging Face](https://huggingface.co) for model hosting
|
| 281 |
+
|
| 282 |
+
## π Star History
|
| 283 |
+
|
| 284 |
+
If you find this project useful, please consider giving it a star! β
|
| 285 |
+
|
| 286 |
+
## π Support
|
| 287 |
+
|
| 288 |
+
- **Issues:** [GitHub Issues](your-repo-url/issues)
|
| 289 |
+
- **Discussions:** [GitHub Discussions](your-repo-url/discussions)
|
| 290 |
+
- **Email:** your-email@example.com
|
| 291 |
+
|
| 292 |
+
---
|
| 293 |
+
|
| 294 |
+
**Made with β€οΈ for the AI video generation community**
|
README_LOCAL.md
ADDED
|
@@ -0,0 +1,221 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π¬ Local AI Video Generator
|
| 2 |
+
|
| 3 |
+
Generate AI videos **completely locally** on your computer using CogVideoX-2B model!
|
| 4 |
+
|
| 5 |
+
## π Features
|
| 6 |
+
|
| 7 |
+
- β
**100% Local** - No API keys, no cloud services, runs on your computer
|
| 8 |
+
- π **CogVideoX-2B** - State-of-the-art text-to-video model by Tsinghua University
|
| 9 |
+
- π₯ **6-second videos** - Generate 49 frames at 8 fps (720p quality)
|
| 10 |
+
- π» **GPU or CPU** - Works on both (GPU recommended for speed)
|
| 11 |
+
- π¨ **Simple UI** - Clean web interface for easy video generation
|
| 12 |
+
|
| 13 |
+
## π Requirements
|
| 14 |
+
|
| 15 |
+
### Hardware Requirements
|
| 16 |
+
|
| 17 |
+
**Minimum (CPU):**
|
| 18 |
+
- 16GB RAM
|
| 19 |
+
- 10GB free disk space
|
| 20 |
+
- Generation time: 5-10 minutes per video
|
| 21 |
+
|
| 22 |
+
**Recommended (GPU):**
|
| 23 |
+
- NVIDIA GPU with 8GB+ VRAM (RTX 3060 or better)
|
| 24 |
+
- 16GB RAM
|
| 25 |
+
- 10GB free disk space
|
| 26 |
+
- Generation time: 30-120 seconds per video
|
| 27 |
+
|
| 28 |
+
### Software Requirements
|
| 29 |
+
|
| 30 |
+
- Python 3.9 or higher
|
| 31 |
+
- CUDA 11.8+ (for GPU acceleration)
|
| 32 |
+
|
| 33 |
+
## π Quick Start
|
| 34 |
+
|
| 35 |
+
### 1. Install Dependencies
|
| 36 |
+
|
| 37 |
+
```bash
|
| 38 |
+
# Install PyTorch with CUDA support (for GPU)
|
| 39 |
+
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
|
| 40 |
+
|
| 41 |
+
# Or install PyTorch for CPU only
|
| 42 |
+
pip install torch torchvision torchaudio
|
| 43 |
+
|
| 44 |
+
# Install other requirements
|
| 45 |
+
pip install -r requirements_local.txt
|
| 46 |
+
```
|
| 47 |
+
|
| 48 |
+
### 2. Run the Backend
|
| 49 |
+
|
| 50 |
+
```bash
|
| 51 |
+
python backend_local.py
|
| 52 |
+
```
|
| 53 |
+
|
| 54 |
+
The server will start on `http://localhost:5000`
|
| 55 |
+
|
| 56 |
+
**First Run Notes:**
|
| 57 |
+
- The model (~5GB) will be downloaded automatically
|
| 58 |
+
- This happens only once
|
| 59 |
+
- Subsequent runs will be much faster
|
| 60 |
+
|
| 61 |
+
### 3. Open the Web Interface
|
| 62 |
+
|
| 63 |
+
Open `index_local.html` in your browser:
|
| 64 |
+
|
| 65 |
+
```bash
|
| 66 |
+
# On macOS
|
| 67 |
+
open index_local.html
|
| 68 |
+
|
| 69 |
+
# On Linux
|
| 70 |
+
xdg-open index_local.html
|
| 71 |
+
|
| 72 |
+
# On Windows
|
| 73 |
+
start index_local.html
|
| 74 |
+
```
|
| 75 |
+
|
| 76 |
+
Or manually open: `http://localhost:5000` and navigate to the HTML file
|
| 77 |
+
|
| 78 |
+
### 4. Initialize the Model
|
| 79 |
+
|
| 80 |
+
1. Click the **"π Initialize Model"** button in the UI
|
| 81 |
+
2. Wait 2-5 minutes for the model to load
|
| 82 |
+
3. Once loaded, you can start generating videos!
|
| 83 |
+
|
| 84 |
+
### 5. Generate Videos
|
| 85 |
+
|
| 86 |
+
1. Enter a descriptive prompt (e.g., "A cat playing with a ball of yarn")
|
| 87 |
+
2. Click **"π¬ Generate Video"**
|
| 88 |
+
3. Wait 30-120 seconds (GPU) or 5-10 minutes (CPU)
|
| 89 |
+
4. Download or share your video!
|
| 90 |
+
|
| 91 |
+
## π Example Prompts
|
| 92 |
+
|
| 93 |
+
- "A golden retriever running through a field of flowers at sunset"
|
| 94 |
+
- "Ocean waves crashing on a beach, aerial view"
|
| 95 |
+
- "A bird flying through clouds, slow motion"
|
| 96 |
+
- "City street with cars at night, neon lights"
|
| 97 |
+
- "Flowers blooming in a garden, time-lapse"
|
| 98 |
+
|
| 99 |
+
## π― Tips for Best Results
|
| 100 |
+
|
| 101 |
+
1. **Be Descriptive** - Include details about lighting, camera angle, movement
|
| 102 |
+
2. **Keep it Simple** - Focus on one main subject or action
|
| 103 |
+
3. **Use Cinematic Terms** - "aerial view", "close-up", "slow motion", etc.
|
| 104 |
+
4. **GPU Recommended** - Much faster generation (30-120s vs 5-10min)
|
| 105 |
+
5. **First Generation** - May take longer as model initializes
|
| 106 |
+
|
| 107 |
+
## π§ Troubleshooting
|
| 108 |
+
|
| 109 |
+
### Model Not Loading
|
| 110 |
+
- **Issue**: Model fails to download or load
|
| 111 |
+
- **Solution**: Check internet connection, ensure 10GB free disk space
|
| 112 |
+
|
| 113 |
+
### Out of Memory (GPU)
|
| 114 |
+
- **Issue**: CUDA out of memory error
|
| 115 |
+
- **Solution**: Close other GPU applications, or use CPU mode
|
| 116 |
+
|
| 117 |
+
### Slow Generation (CPU)
|
| 118 |
+
- **Issue**: Takes 5-10 minutes per video
|
| 119 |
+
- **Solution**: This is normal for CPU. Consider using a GPU for faster generation
|
| 120 |
+
|
| 121 |
+
### Server Won't Start
|
| 122 |
+
- **Issue**: Port 5000 already in use
|
| 123 |
+
- **Solution**: Change port in `backend_local.py` (line 33): `FLASK_PORT = 5001`
|
| 124 |
+
|
| 125 |
+
### Video Quality Issues
|
| 126 |
+
- **Issue**: Video looks blurry or low quality
|
| 127 |
+
- **Solution**: This is expected for the 2B model. For better quality, upgrade to CogVideoX-5B (requires more VRAM)
|
| 128 |
+
|
| 129 |
+
## π Performance Benchmarks
|
| 130 |
+
|
| 131 |
+
| Hardware | Model Load Time | Generation Time | Quality |
|
| 132 |
+
|----------|----------------|-----------------|---------|
|
| 133 |
+
| RTX 4090 | 1-2 min | 30-45 sec | Excellent |
|
| 134 |
+
| RTX 3060 | 2-3 min | 60-90 sec | Good |
|
| 135 |
+
| CPU (16GB) | 3-5 min | 5-10 min | Good |
|
| 136 |
+
|
| 137 |
+
## π Model Information
|
| 138 |
+
|
| 139 |
+
- **Model**: CogVideoX-2B
|
| 140 |
+
- **Developer**: Tsinghua University (THUDM)
|
| 141 |
+
- **License**: Apache 2.0
|
| 142 |
+
- **Size**: ~5GB
|
| 143 |
+
- **Output**: 49 frames, 720p, 8 fps (~6 seconds)
|
| 144 |
+
|
| 145 |
+
## π File Structure
|
| 146 |
+
|
| 147 |
+
```
|
| 148 |
+
hailuo-clone/
|
| 149 |
+
βββ backend_local.py # Local backend server
|
| 150 |
+
βββ index_local.html # Web interface for local backend
|
| 151 |
+
βββ requirements_local.txt # Python dependencies
|
| 152 |
+
βββ README_LOCAL.md # This file
|
| 153 |
+
βββ generated_videos/ # Output directory (auto-created)
|
| 154 |
+
```
|
| 155 |
+
|
| 156 |
+
## π Comparison with Cloud Backends
|
| 157 |
+
|
| 158 |
+
| Feature | Local (backend_local.py) | Cloud (backend_enhanced.py) |
|
| 159 |
+
|---------|-------------------------|----------------------------|
|
| 160 |
+
| Setup | Complex (install PyTorch, download model) | Simple (just API keys) |
|
| 161 |
+
| Cost | Free (one-time setup) | Pay per generation |
|
| 162 |
+
| Speed | 30-120s (GPU) or 5-10min (CPU) | 30-60s |
|
| 163 |
+
| Privacy | 100% private | Data sent to cloud |
|
| 164 |
+
| Quality | Good (2B model) | Excellent (5B+ models) |
|
| 165 |
+
| Internet | Only for first download | Required for every generation |
|
| 166 |
+
|
| 167 |
+
## π οΈ Advanced Configuration
|
| 168 |
+
|
| 169 |
+
### Change Model
|
| 170 |
+
|
| 171 |
+
Edit `backend_local.py` line 54-56 to use a different model:
|
| 172 |
+
|
| 173 |
+
```python
|
| 174 |
+
# For better quality (requires 16GB+ VRAM)
|
| 175 |
+
pipeline = CogVideoXPipeline.from_pretrained(
|
| 176 |
+
"THUDM/CogVideoX-5b",
|
| 177 |
+
torch_dtype=torch.float16
|
| 178 |
+
)
|
| 179 |
+
```
|
| 180 |
+
|
| 181 |
+
### Adjust Generation Parameters
|
| 182 |
+
|
| 183 |
+
Edit `backend_local.py` lines 126-132:
|
| 184 |
+
|
| 185 |
+
```python
|
| 186 |
+
num_frames = 49 # More frames = longer video
|
| 187 |
+
guidance_scale = 6.0 # Higher = more prompt adherence
|
| 188 |
+
num_inference_steps = 50 # More steps = better quality (slower)
|
| 189 |
+
```
|
| 190 |
+
|
| 191 |
+
### Pre-load Model on Startup
|
| 192 |
+
|
| 193 |
+
Uncomment lines 232-233 in `backend_local.py`:
|
| 194 |
+
|
| 195 |
+
```python
|
| 196 |
+
logger.info("Pre-loading model...")
|
| 197 |
+
initialize_model()
|
| 198 |
+
```
|
| 199 |
+
|
| 200 |
+
## π Resources
|
| 201 |
+
|
| 202 |
+
- [CogVideoX GitHub](https://github.com/THUDM/CogVideo)
|
| 203 |
+
- [Diffusers Documentation](https://huggingface.co/docs/diffusers)
|
| 204 |
+
- [PyTorch Installation](https://pytorch.org/get-started/locally/)
|
| 205 |
+
|
| 206 |
+
## π€ Support
|
| 207 |
+
|
| 208 |
+
If you encounter issues:
|
| 209 |
+
|
| 210 |
+
1. Check the console logs in the terminal
|
| 211 |
+
2. Check browser console (F12) for errors
|
| 212 |
+
3. Ensure all dependencies are installed correctly
|
| 213 |
+
4. Verify GPU drivers are up to date (for GPU mode)
|
| 214 |
+
|
| 215 |
+
## π License
|
| 216 |
+
|
| 217 |
+
This project uses CogVideoX-2B which is licensed under Apache 2.0.
|
| 218 |
+
|
| 219 |
+
---
|
| 220 |
+
|
| 221 |
+
**Happy Video Generation! π¬β¨**
|
REPLICATE_SETUP.md
ADDED
|
@@ -0,0 +1,211 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π Using Replicate API for Real Hailuo (MiniMax) Video Generation
|
| 2 |
+
|
| 3 |
+
If Hugging Face Spaces are not working, you can use **Replicate API** which has the actual **Hailuo video-01 model** by MiniMax!
|
| 4 |
+
|
| 5 |
+
## π Why Replicate?
|
| 6 |
+
|
| 7 |
+
- β
**Real Hailuo Model**: The actual MiniMax video-01 model
|
| 8 |
+
- β
**Reliable**: Always available, no sleeping spaces
|
| 9 |
+
- β
**Fast**: Optimized infrastructure
|
| 10 |
+
- β
**Free Tier**: $5 free credits to start
|
| 11 |
+
|
| 12 |
+
## π Setup Instructions
|
| 13 |
+
|
| 14 |
+
### 1. Get Replicate API Token
|
| 15 |
+
|
| 16 |
+
1. Go to https://replicate.com
|
| 17 |
+
2. Sign up for a free account
|
| 18 |
+
3. Go to https://replicate.com/account/api-tokens
|
| 19 |
+
4. Copy your API token
|
| 20 |
+
|
| 21 |
+
### 2. Install Replicate Python Client
|
| 22 |
+
|
| 23 |
+
```bash
|
| 24 |
+
pip install replicate
|
| 25 |
+
```
|
| 26 |
+
|
| 27 |
+
### 3. Create Replicate Backend
|
| 28 |
+
|
| 29 |
+
Create a new file `backend_replicate.py`:
|
| 30 |
+
|
| 31 |
+
```python
|
| 32 |
+
from flask import Flask, request, jsonify
|
| 33 |
+
from flask_cors import CORS
|
| 34 |
+
import replicate
|
| 35 |
+
import os
|
| 36 |
+
from dotenv import load_dotenv
|
| 37 |
+
from datetime import datetime
|
| 38 |
+
import logging
|
| 39 |
+
|
| 40 |
+
load_dotenv()
|
| 41 |
+
|
| 42 |
+
logging.basicConfig(level=logging.INFO)
|
| 43 |
+
logger = logging.getLogger(__name__)
|
| 44 |
+
|
| 45 |
+
app = Flask(__name__)
|
| 46 |
+
CORS(app)
|
| 47 |
+
|
| 48 |
+
# Set your Replicate API token
|
| 49 |
+
REPLICATE_API_TOKEN = os.getenv('REPLICATE_API_TOKEN')
|
| 50 |
+
if REPLICATE_API_TOKEN:
|
| 51 |
+
os.environ['REPLICATE_API_TOKEN'] = REPLICATE_API_TOKEN
|
| 52 |
+
|
| 53 |
+
@app.route('/health', methods=['GET'])
|
| 54 |
+
def health():
|
| 55 |
+
return jsonify({'status': 'healthy', 'service': 'Replicate API'})
|
| 56 |
+
|
| 57 |
+
@app.route('/generate-video', methods=['POST'])
|
| 58 |
+
def generate_video():
|
| 59 |
+
try:
|
| 60 |
+
data = request.json
|
| 61 |
+
prompt = data.get('prompt', '')
|
| 62 |
+
|
| 63 |
+
if not prompt:
|
| 64 |
+
return jsonify({'error': 'Prompt is required'}), 400
|
| 65 |
+
|
| 66 |
+
logger.info(f"Generating video with Hailuo: {prompt[:100]}")
|
| 67 |
+
|
| 68 |
+
# Use the real Hailuo (MiniMax) model on Replicate
|
| 69 |
+
output = replicate.run(
|
| 70 |
+
"minimax/video-01",
|
| 71 |
+
input={
|
| 72 |
+
"prompt": prompt,
|
| 73 |
+
"prompt_optimizer": True
|
| 74 |
+
}
|
| 75 |
+
)
|
| 76 |
+
|
| 77 |
+
# Output is a video URL
|
| 78 |
+
video_url = output if isinstance(output, str) else output[0]
|
| 79 |
+
|
| 80 |
+
logger.info(f"Video generated: {video_url}")
|
| 81 |
+
|
| 82 |
+
return jsonify({
|
| 83 |
+
'video_url': video_url,
|
| 84 |
+
'prompt': prompt,
|
| 85 |
+
'model': 'hailuo-minimax',
|
| 86 |
+
'model_name': 'Hailuo Video-01 (MiniMax)',
|
| 87 |
+
'timestamp': datetime.now().isoformat()
|
| 88 |
+
})
|
| 89 |
+
|
| 90 |
+
except Exception as e:
|
| 91 |
+
logger.error(f"Error: {str(e)}")
|
| 92 |
+
return jsonify({'error': str(e)}), 500
|
| 93 |
+
|
| 94 |
+
if __name__ == '__main__':
|
| 95 |
+
if not REPLICATE_API_TOKEN:
|
| 96 |
+
print("β οΈ Warning: REPLICATE_API_TOKEN not set in .env file")
|
| 97 |
+
print("Get your token from: https://replicate.com/account/api-tokens")
|
| 98 |
+
|
| 99 |
+
app.run(host='0.0.0.0', port=5000, debug=False)
|
| 100 |
+
```
|
| 101 |
+
|
| 102 |
+
### 4. Update .env File
|
| 103 |
+
|
| 104 |
+
Add your Replicate API token to `.env`:
|
| 105 |
+
|
| 106 |
+
```env
|
| 107 |
+
REPLICATE_API_TOKEN=r8_your_token_here
|
| 108 |
+
```
|
| 109 |
+
|
| 110 |
+
### 5. Run the Replicate Backend
|
| 111 |
+
|
| 112 |
+
```bash
|
| 113 |
+
python3 backend_replicate.py
|
| 114 |
+
```
|
| 115 |
+
|
| 116 |
+
### 6. Use with Your Frontend
|
| 117 |
+
|
| 118 |
+
The frontend (`index_enhanced.html`) will work with this backend without any changes!
|
| 119 |
+
|
| 120 |
+
## π¬ Available Models on Replicate
|
| 121 |
+
|
| 122 |
+
### Text-to-Video Models:
|
| 123 |
+
|
| 124 |
+
1. **Hailuo Video-01 (MiniMax)** - The real Hailuo model!
|
| 125 |
+
```python
|
| 126 |
+
replicate.run("minimax/video-01", input={"prompt": "..."})
|
| 127 |
+
```
|
| 128 |
+
|
| 129 |
+
2. **CogVideoX**
|
| 130 |
+
```python
|
| 131 |
+
replicate.run("lucataco/cogvideox-5b", input={"prompt": "..."})
|
| 132 |
+
```
|
| 133 |
+
|
| 134 |
+
3. **Kling v1.6**
|
| 135 |
+
```python
|
| 136 |
+
replicate.run("fofr/kling-v1.6-standard", input={"prompt": "..."})
|
| 137 |
+
```
|
| 138 |
+
|
| 139 |
+
### Image-to-Video Models:
|
| 140 |
+
|
| 141 |
+
1. **Stable Video Diffusion**
|
| 142 |
+
```python
|
| 143 |
+
replicate.run("stability-ai/stable-video-diffusion", input={"image": "..."})
|
| 144 |
+
```
|
| 145 |
+
|
| 146 |
+
2. **Hailuo Image-to-Video**
|
| 147 |
+
```python
|
| 148 |
+
replicate.run("minimax/video-01", input={"image": "...", "prompt": "..."})
|
| 149 |
+
```
|
| 150 |
+
|
| 151 |
+
## π° Pricing
|
| 152 |
+
|
| 153 |
+
- **Free Tier**: $5 free credits (enough for ~50 videos)
|
| 154 |
+
- **Pay-as-you-go**: ~$0.10 per video generation
|
| 155 |
+
- Much cheaper than running your own GPU!
|
| 156 |
+
|
| 157 |
+
## π§ Advanced: Multi-Model Support
|
| 158 |
+
|
| 159 |
+
Create `backend_replicate_multi.py` with multiple models:
|
| 160 |
+
|
| 161 |
+
```python
|
| 162 |
+
MODELS = {
|
| 163 |
+
"hailuo": "minimax/video-01",
|
| 164 |
+
"cogvideox": "lucataco/cogvideox-5b",
|
| 165 |
+
"kling": "fofr/kling-v1.6-standard",
|
| 166 |
+
"svd": "stability-ai/stable-video-diffusion"
|
| 167 |
+
}
|
| 168 |
+
|
| 169 |
+
@app.route('/generate-video', methods=['POST'])
|
| 170 |
+
def generate_video():
|
| 171 |
+
data = request.json
|
| 172 |
+
model_id = data.get('model', 'hailuo')
|
| 173 |
+
prompt = data.get('prompt', '')
|
| 174 |
+
|
| 175 |
+
model_name = MODELS.get(model_id, MODELS['hailuo'])
|
| 176 |
+
|
| 177 |
+
output = replicate.run(model_name, input={"prompt": prompt})
|
| 178 |
+
# ... rest of the code
|
| 179 |
+
```
|
| 180 |
+
|
| 181 |
+
## β
Advantages of Replicate
|
| 182 |
+
|
| 183 |
+
1. **Real Hailuo Model**: The actual MiniMax video-01
|
| 184 |
+
2. **No Space Sleeping**: Always available
|
| 185 |
+
3. **Better Performance**: Optimized infrastructure
|
| 186 |
+
4. **Predictable Costs**: Pay per generation
|
| 187 |
+
5. **Multiple Models**: Access to many models
|
| 188 |
+
6. **Better Support**: Professional API support
|
| 189 |
+
|
| 190 |
+
## π Resources
|
| 191 |
+
|
| 192 |
+
- **Replicate Docs**: https://replicate.com/docs
|
| 193 |
+
- **Hailuo Model**: https://replicate.com/minimax/video-01
|
| 194 |
+
- **Python Client**: https://github.com/replicate/replicate-python
|
| 195 |
+
- **Pricing**: https://replicate.com/pricing
|
| 196 |
+
|
| 197 |
+
## π― Recommendation
|
| 198 |
+
|
| 199 |
+
For **production use** or if Hugging Face Spaces keep failing:
|
| 200 |
+
- β
Use Replicate API
|
| 201 |
+
- β
Get the real Hailuo model
|
| 202 |
+
- β
Reliable and fast
|
| 203 |
+
- β
Only pay for what you use
|
| 204 |
+
|
| 205 |
+
For **testing/development**:
|
| 206 |
+
- Use Demo Mode in the current app
|
| 207 |
+
- Or use Hugging Face Spaces (free but may be slow/unavailable)
|
| 208 |
+
|
| 209 |
+
---
|
| 210 |
+
|
| 211 |
+
**Ready to use the real Hailuo model? Set up Replicate now!** π
|
SETUP.md
ADDED
|
@@ -0,0 +1,130 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π Complete Setup Guide
|
| 2 |
+
|
| 3 |
+
## Prerequisites
|
| 4 |
+
|
| 5 |
+
- Python 3.9 or higher
|
| 6 |
+
- Git
|
| 7 |
+
- 10GB free disk space (for local models)
|
| 8 |
+
|
| 9 |
+
## Step-by-Step Setup
|
| 10 |
+
|
| 11 |
+
### 1. Clone the Repository
|
| 12 |
+
|
| 13 |
+
```bash
|
| 14 |
+
git clone <your-repo-url>
|
| 15 |
+
cd hailuo-clone
|
| 16 |
+
```
|
| 17 |
+
|
| 18 |
+
### 2. Create Virtual Environment
|
| 19 |
+
|
| 20 |
+
```bash
|
| 21 |
+
python3 -m venv venv
|
| 22 |
+
source venv/bin/activate # On Windows: venv\Scripts\activate
|
| 23 |
+
```
|
| 24 |
+
|
| 25 |
+
### 3. Choose Your Backend
|
| 26 |
+
|
| 27 |
+
#### Option A: Replicate API (Recommended)
|
| 28 |
+
|
| 29 |
+
```bash
|
| 30 |
+
# Install dependencies
|
| 31 |
+
pip install -r requirements.txt
|
| 32 |
+
pip install replicate
|
| 33 |
+
|
| 34 |
+
# Get API token from https://replicate.com/account/api-tokens
|
| 35 |
+
# Create .env file
|
| 36 |
+
echo "REPLICATE_API_TOKEN=your_token_here" > .env
|
| 37 |
+
|
| 38 |
+
# Run backend
|
| 39 |
+
python backend_replicate.py
|
| 40 |
+
|
| 41 |
+
# Open index.html in browser
|
| 42 |
+
```
|
| 43 |
+
|
| 44 |
+
#### Option B: Local Generation
|
| 45 |
+
|
| 46 |
+
```bash
|
| 47 |
+
# Install PyTorch (GPU version)
|
| 48 |
+
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
|
| 49 |
+
|
| 50 |
+
# Or CPU version
|
| 51 |
+
pip install torch torchvision torchaudio
|
| 52 |
+
|
| 53 |
+
# Install other dependencies
|
| 54 |
+
pip install -r requirements_local.txt
|
| 55 |
+
|
| 56 |
+
# Run backend
|
| 57 |
+
python backend_local.py
|
| 58 |
+
|
| 59 |
+
# Open index_local.html in browser
|
| 60 |
+
```
|
| 61 |
+
|
| 62 |
+
#### Option C: Hugging Face Spaces
|
| 63 |
+
|
| 64 |
+
```bash
|
| 65 |
+
# Install dependencies
|
| 66 |
+
pip install -r requirements.txt
|
| 67 |
+
|
| 68 |
+
# Run backend
|
| 69 |
+
python backend_enhanced.py
|
| 70 |
+
|
| 71 |
+
# Open index_enhanced.html in browser
|
| 72 |
+
```
|
| 73 |
+
|
| 74 |
+
### 4. Test the Setup
|
| 75 |
+
|
| 76 |
+
1. Open the appropriate HTML file in your browser
|
| 77 |
+
2. Enter a test prompt: "A cat playing with a ball of yarn"
|
| 78 |
+
3. Click "Generate Video"
|
| 79 |
+
4. Wait for the video to generate
|
| 80 |
+
|
| 81 |
+
## Quick Commands
|
| 82 |
+
|
| 83 |
+
### Start Replicate Backend
|
| 84 |
+
```bash
|
| 85 |
+
source venv/bin/activate
|
| 86 |
+
python backend_replicate.py
|
| 87 |
+
```
|
| 88 |
+
|
| 89 |
+
### Start Local Backend
|
| 90 |
+
```bash
|
| 91 |
+
source venv/bin/activate
|
| 92 |
+
python backend_local.py
|
| 93 |
+
```
|
| 94 |
+
|
| 95 |
+
### Start Enhanced Backend
|
| 96 |
+
```bash
|
| 97 |
+
source venv/bin/activate
|
| 98 |
+
python backend_enhanced.py
|
| 99 |
+
```
|
| 100 |
+
|
| 101 |
+
## Troubleshooting
|
| 102 |
+
|
| 103 |
+
### Port Already in Use
|
| 104 |
+
```bash
|
| 105 |
+
# Find process using port 5000
|
| 106 |
+
lsof -i :5000
|
| 107 |
+
|
| 108 |
+
# Kill the process
|
| 109 |
+
kill -9 <PID>
|
| 110 |
+
```
|
| 111 |
+
|
| 112 |
+
### Missing Dependencies
|
| 113 |
+
```bash
|
| 114 |
+
# Reinstall all dependencies
|
| 115 |
+
pip install --upgrade -r requirements.txt
|
| 116 |
+
```
|
| 117 |
+
|
| 118 |
+
### API Token Issues
|
| 119 |
+
```bash
|
| 120 |
+
# Verify .env file exists
|
| 121 |
+
cat .env
|
| 122 |
+
|
| 123 |
+
# Should show: REPLICATE_API_TOKEN=your_token_here
|
| 124 |
+
```
|
| 125 |
+
|
| 126 |
+
## Next Steps
|
| 127 |
+
|
| 128 |
+
- Read [README_GITHUB.md](README_GITHUB.md) for full documentation
|
| 129 |
+
- Check [SOLUTION_GUIDE.md](SOLUTION_GUIDE.md) for troubleshooting
|
| 130 |
+
- See [models_config.py](models_config.py) to customize models
|
SOLUTION_GUIDE.md
ADDED
|
@@ -0,0 +1,171 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π― Complete Solution Guide - Video Generation
|
| 2 |
+
|
| 3 |
+
## Problem: Model Providers Unreachable
|
| 4 |
+
|
| 5 |
+
Hugging Face Spaces are often sleeping, overloaded, or have changed APIs. Here are your **3 working options**:
|
| 6 |
+
|
| 7 |
+
---
|
| 8 |
+
|
| 9 |
+
## β
Option 1: Replicate API (RECOMMENDED - Most Reliable)
|
| 10 |
+
|
| 11 |
+
### Why Replicate?
|
| 12 |
+
- β
**Most reliable** - Professional API service
|
| 13 |
+
- β
**Real Hailuo model** available (minimax/video-01)
|
| 14 |
+
- β
**Fast** - 30-60 seconds per video
|
| 15 |
+
- β
**High quality** - Access to best models
|
| 16 |
+
- β
**Pay as you go** - Only pay for what you use (~$0.05-0.10 per video)
|
| 17 |
+
|
| 18 |
+
### Setup (5 minutes):
|
| 19 |
+
|
| 20 |
+
1. **Sign up for Replicate** (Free account):
|
| 21 |
+
- Go to: https://replicate.com
|
| 22 |
+
- Sign up with GitHub or email
|
| 23 |
+
|
| 24 |
+
2. **Get API Token**:
|
| 25 |
+
- Visit: https://replicate.com/account/api-tokens
|
| 26 |
+
- Click "Create token"
|
| 27 |
+
- Copy your token
|
| 28 |
+
|
| 29 |
+
3. **Add token to .env file**:
|
| 30 |
+
```bash
|
| 31 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 32 |
+
echo "REPLICATE_API_TOKEN=your_token_here" > .env
|
| 33 |
+
```
|
| 34 |
+
|
| 35 |
+
4. **Install Replicate**:
|
| 36 |
+
```bash
|
| 37 |
+
/Users/sravyalu/VideoAI/.venv/bin/pip install replicate
|
| 38 |
+
```
|
| 39 |
+
|
| 40 |
+
5. **Run the backend**:
|
| 41 |
+
```bash
|
| 42 |
+
/Users/sravyalu/VideoAI/.venv/bin/python backend_replicate.py
|
| 43 |
+
```
|
| 44 |
+
|
| 45 |
+
6. **Open the UI**:
|
| 46 |
+
- Open `index.html` or `index_enhanced.html`
|
| 47 |
+
- Generate videos instantly!
|
| 48 |
+
|
| 49 |
+
### Available Models on Replicate:
|
| 50 |
+
- **Hailuo Video-01** (minimax/video-01) - The REAL Hailuo model! π₯
|
| 51 |
+
- **CogVideoX-5B** (lucataco/cogvideox-5b)
|
| 52 |
+
- **Stable Video Diffusion**
|
| 53 |
+
- Many more...
|
| 54 |
+
|
| 55 |
+
### Cost:
|
| 56 |
+
- ~$0.05-0.10 per video (very affordable)
|
| 57 |
+
- Free credits for new accounts
|
| 58 |
+
- Only pay for successful generations
|
| 59 |
+
|
| 60 |
+
---
|
| 61 |
+
|
| 62 |
+
## β
Option 2: Local Generation (Free but Slow)
|
| 63 |
+
|
| 64 |
+
### When to Use:
|
| 65 |
+
- You have a powerful GPU (RTX 3060+)
|
| 66 |
+
- You want 100% free (no API costs)
|
| 67 |
+
- You need complete privacy
|
| 68 |
+
|
| 69 |
+
### Setup:
|
| 70 |
+
```bash
|
| 71 |
+
# Already set up! Just run:
|
| 72 |
+
/Users/sravyalu/VideoAI/.venv/bin/python backend_local.py
|
| 73 |
+
|
| 74 |
+
# Open index_local.html
|
| 75 |
+
open index_local.html
|
| 76 |
+
```
|
| 77 |
+
|
| 78 |
+
### Pros:
|
| 79 |
+
- β
Completely free
|
| 80 |
+
- β
No API keys needed
|
| 81 |
+
- β
Works offline
|
| 82 |
+
- β
Private
|
| 83 |
+
|
| 84 |
+
### Cons:
|
| 85 |
+
- β Very slow on CPU (5-10 minutes per video)
|
| 86 |
+
- β Requires 5GB download
|
| 87 |
+
- β Limited to one model
|
| 88 |
+
|
| 89 |
+
---
|
| 90 |
+
|
| 91 |
+
## β
Option 3: Alternative Free APIs
|
| 92 |
+
|
| 93 |
+
### RunPod, Banana, or Modal:
|
| 94 |
+
These are alternatives to Replicate with similar pricing/features.
|
| 95 |
+
|
| 96 |
+
### Stability AI:
|
| 97 |
+
If you want Stable Video Diffusion specifically.
|
| 98 |
+
|
| 99 |
+
---
|
| 100 |
+
|
| 101 |
+
## π― My Recommendation
|
| 102 |
+
|
| 103 |
+
**Use Replicate API** because:
|
| 104 |
+
|
| 105 |
+
1. **It actually works** - No "model unreachable" errors
|
| 106 |
+
2. **Real Hailuo model** - The actual minimax/video-01 model
|
| 107 |
+
3. **Very affordable** - ~$0.05-0.10 per video
|
| 108 |
+
4. **Fast** - 30-60 seconds
|
| 109 |
+
5. **Reliable** - Professional service with 99.9% uptime
|
| 110 |
+
|
| 111 |
+
### Quick Start with Replicate:
|
| 112 |
+
|
| 113 |
+
```bash
|
| 114 |
+
# 1. Install replicate
|
| 115 |
+
/Users/sravyalu/VideoAI/.venv/bin/pip install replicate
|
| 116 |
+
|
| 117 |
+
# 2. Add your token to .env
|
| 118 |
+
echo "REPLICATE_API_TOKEN=r8_your_token_here" > .env
|
| 119 |
+
|
| 120 |
+
# 3. Run backend
|
| 121 |
+
/Users/sravyalu/VideoAI/.venv/bin/python backend_replicate.py
|
| 122 |
+
|
| 123 |
+
# 4. Open UI
|
| 124 |
+
open index.html
|
| 125 |
+
```
|
| 126 |
+
|
| 127 |
+
That's it! Generate videos in seconds! π
|
| 128 |
+
|
| 129 |
+
---
|
| 130 |
+
|
| 131 |
+
## π Comparison Table
|
| 132 |
+
|
| 133 |
+
| Feature | Replicate | HF Spaces | Local |
|
| 134 |
+
|---------|-----------|-----------|-------|
|
| 135 |
+
| **Reliability** | βββββ | ββ | βββββ |
|
| 136 |
+
| **Speed** | 30-60s | 30-60s (when working) | 5-10min (CPU) |
|
| 137 |
+
| **Setup Time** | 5 min | Instant | 30 min |
|
| 138 |
+
| **Cost** | $0.05-0.10/video | Free | Free |
|
| 139 |
+
| **Quality** | Excellent | Good-Excellent | Good |
|
| 140 |
+
| **Hailuo Model** | β
Yes | β No | β No |
|
| 141 |
+
| **Uptime** | 99.9% | ~50% | 100% |
|
| 142 |
+
|
| 143 |
+
---
|
| 144 |
+
|
| 145 |
+
## π Troubleshooting
|
| 146 |
+
|
| 147 |
+
### "Model provider unreachable"
|
| 148 |
+
β Use Replicate API instead of Hugging Face Spaces
|
| 149 |
+
|
| 150 |
+
### "No API token"
|
| 151 |
+
β Sign up at replicate.com and add token to .env
|
| 152 |
+
|
| 153 |
+
### "Too slow"
|
| 154 |
+
β Don't use local generation on CPU, use Replicate
|
| 155 |
+
|
| 156 |
+
### "Too expensive"
|
| 157 |
+
β Use local generation with GPU, or wait for HF Spaces to work
|
| 158 |
+
|
| 159 |
+
---
|
| 160 |
+
|
| 161 |
+
## π Next Steps
|
| 162 |
+
|
| 163 |
+
1. **Sign up for Replicate** (5 minutes)
|
| 164 |
+
2. **Get your API token**
|
| 165 |
+
3. **Add to .env file**
|
| 166 |
+
4. **Run backend_replicate.py**
|
| 167 |
+
5. **Generate amazing videos!** π¬β¨
|
| 168 |
+
|
| 169 |
+
---
|
| 170 |
+
|
| 171 |
+
**The Replicate solution is the most reliable and gives you access to the REAL Hailuo model!**
|
VERCEL_DEPLOY_GUIDE.md
ADDED
|
@@ -0,0 +1,286 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# π Vercel Deployment Guide
|
| 2 |
+
|
| 3 |
+
## β
Your Project is Ready for Vercel!
|
| 4 |
+
|
| 5 |
+
All files have been created and pushed to GitHub. You now have:
|
| 6 |
+
- β
Serverless API functions in `/api`
|
| 7 |
+
- β
Frontend with relative API calls
|
| 8 |
+
- β
Support for 5 AI models (including 10s videos!)
|
| 9 |
+
- β
package.json with dependencies
|
| 10 |
+
|
| 11 |
+
---
|
| 12 |
+
|
| 13 |
+
## π Quick Deploy (5 minutes)
|
| 14 |
+
|
| 15 |
+
### Method 1: Vercel Dashboard (Easiest)
|
| 16 |
+
|
| 17 |
+
1. **Go to Vercel**
|
| 18 |
+
- Visit: https://vercel.com
|
| 19 |
+
- Sign in with GitHub
|
| 20 |
+
|
| 21 |
+
2. **Import Project**
|
| 22 |
+
- Click "Add New" β "Project"
|
| 23 |
+
- Select "Import Git Repository"
|
| 24 |
+
- Choose: `LakshmiSravya123/VideoAI`
|
| 25 |
+
|
| 26 |
+
3. **Configure Project**
|
| 27 |
+
- **Root Directory:** `hailuo-clone` β οΈ IMPORTANT!
|
| 28 |
+
- **Framework Preset:** None
|
| 29 |
+
- **Build Command:** (leave empty)
|
| 30 |
+
- **Output Directory:** (leave empty)
|
| 31 |
+
|
| 32 |
+
4. **Add Environment Variable**
|
| 33 |
+
- Click "Environment Variables"
|
| 34 |
+
- Key: `REPLICATE_API_TOKEN`
|
| 35 |
+
- Value: `r8_YOUR_TOKEN_HERE` (from https://replicate.com/account/api-tokens)
|
| 36 |
+
- Apply to: Production, Preview, Development
|
| 37 |
+
|
| 38 |
+
5. **Deploy**
|
| 39 |
+
- Click "Deploy"
|
| 40 |
+
- Wait 1-2 minutes
|
| 41 |
+
- You'll get a URL like: `https://video-ai-xyz.vercel.app`
|
| 42 |
+
|
| 43 |
+
6. **Test**
|
| 44 |
+
- Visit: `https://your-url.vercel.app/index_enhanced.html`
|
| 45 |
+
- Try generating a video!
|
| 46 |
+
|
| 47 |
+
---
|
| 48 |
+
|
| 49 |
+
### Method 2: Vercel CLI (Advanced)
|
| 50 |
+
|
| 51 |
+
```bash
|
| 52 |
+
# Install Vercel CLI
|
| 53 |
+
npm install -g vercel
|
| 54 |
+
|
| 55 |
+
# Navigate to project
|
| 56 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 57 |
+
|
| 58 |
+
# Deploy
|
| 59 |
+
vercel
|
| 60 |
+
|
| 61 |
+
# Follow prompts:
|
| 62 |
+
# - Link to existing project or create new
|
| 63 |
+
# - Confirm settings
|
| 64 |
+
# - Add REPLICATE_API_TOKEN when prompted
|
| 65 |
+
|
| 66 |
+
# Deploy to production
|
| 67 |
+
vercel --prod
|
| 68 |
+
```
|
| 69 |
+
|
| 70 |
+
---
|
| 71 |
+
|
| 72 |
+
## π¬ Available Models After Deployment
|
| 73 |
+
|
| 74 |
+
Your deployed app will have:
|
| 75 |
+
|
| 76 |
+
1. **Runway Gen-3** - 10 seconds β (longest!)
|
| 77 |
+
2. **Hailuo Video-01** - 6 seconds (best quality/price)
|
| 78 |
+
3. **CogVideoX-5B** - 6 seconds (good balance)
|
| 79 |
+
4. **HunyuanVideo** - 5+ seconds (SOTA)
|
| 80 |
+
5. **Luma Dream Machine** - 5 seconds (cinematic)
|
| 81 |
+
|
| 82 |
+
---
|
| 83 |
+
|
| 84 |
+
## π§ Post-Deployment Setup
|
| 85 |
+
|
| 86 |
+
### 1. Set Custom Domain (Optional)
|
| 87 |
+
- Vercel Dashboard β Your Project β Settings β Domains
|
| 88 |
+
- Add: `video.yourdomain.com`
|
| 89 |
+
- Update DNS as instructed
|
| 90 |
+
|
| 91 |
+
### 2. Make index_enhanced.html the Homepage
|
| 92 |
+
Option A: Rename file
|
| 93 |
+
```bash
|
| 94 |
+
cd /Users/sravyalu/VideoAI/hailuo-clone
|
| 95 |
+
mv index_enhanced.html index.html
|
| 96 |
+
git add -A && git commit -m "Make enhanced UI the homepage" && git push
|
| 97 |
+
```
|
| 98 |
+
|
| 99 |
+
Option B: Add vercel.json
|
| 100 |
+
```json
|
| 101 |
+
{
|
| 102 |
+
"rewrites": [
|
| 103 |
+
{ "source": "/", "destination": "/index_enhanced.html" }
|
| 104 |
+
]
|
| 105 |
+
}
|
| 106 |
+
```
|
| 107 |
+
|
| 108 |
+
### 3. Enable Analytics (Optional)
|
| 109 |
+
- Vercel Dashboard β Your Project β Analytics
|
| 110 |
+
- Enable Web Analytics
|
| 111 |
+
- Track usage and performance
|
| 112 |
+
|
| 113 |
+
---
|
| 114 |
+
|
| 115 |
+
## π API Endpoints
|
| 116 |
+
|
| 117 |
+
Your deployed app will have:
|
| 118 |
+
|
| 119 |
+
- `GET /api/health` - Check API status
|
| 120 |
+
- `GET /api/models` - List available models
|
| 121 |
+
- `POST /api/generate-video` - Generate videos
|
| 122 |
+
|
| 123 |
+
### Example API Call:
|
| 124 |
+
```bash
|
| 125 |
+
curl -X POST https://your-url.vercel.app/api/generate-video \
|
| 126 |
+
-H "Content-Type: application/json" \
|
| 127 |
+
-d '{
|
| 128 |
+
"prompt": "A golden retriever running through flowers at sunset",
|
| 129 |
+
"model": "runway"
|
| 130 |
+
}'
|
| 131 |
+
```
|
| 132 |
+
|
| 133 |
+
---
|
| 134 |
+
|
| 135 |
+
## π° Cost Estimation
|
| 136 |
+
|
| 137 |
+
### Vercel Costs:
|
| 138 |
+
- **Hobby Plan:** FREE
|
| 139 |
+
- 100GB bandwidth/month
|
| 140 |
+
- Unlimited requests
|
| 141 |
+
- Serverless functions included
|
| 142 |
+
|
| 143 |
+
### Replicate Costs (per video):
|
| 144 |
+
- **Runway Gen-3 (10s):** ~$0.20
|
| 145 |
+
- **Hailuo (6s):** ~$0.08
|
| 146 |
+
- **CogVideoX (6s):** ~$0.05
|
| 147 |
+
- **HunyuanVideo (5s):** ~$0.10
|
| 148 |
+
- **Luma (5s):** ~$0.10
|
| 149 |
+
|
| 150 |
+
**Example Monthly Cost:**
|
| 151 |
+
- 100 videos with Hailuo: ~$8
|
| 152 |
+
- 100 videos with Runway: ~$20
|
| 153 |
+
- Vercel hosting: FREE
|
| 154 |
+
|
| 155 |
+
---
|
| 156 |
+
|
| 157 |
+
## π Troubleshooting
|
| 158 |
+
|
| 159 |
+
### Issue: 404 on /api/generate-video
|
| 160 |
+
**Solution:**
|
| 161 |
+
- Ensure Root Directory is set to `hailuo-clone`
|
| 162 |
+
- Check that `package.json` exists in that directory
|
| 163 |
+
- Redeploy
|
| 164 |
+
|
| 165 |
+
### Issue: 500 "Missing REPLICATE_API_TOKEN"
|
| 166 |
+
**Solution:**
|
| 167 |
+
- Add environment variable in Vercel Dashboard
|
| 168 |
+
- Settings β Environment Variables
|
| 169 |
+
- Redeploy after adding
|
| 170 |
+
|
| 171 |
+
### Issue: Functions timeout
|
| 172 |
+
**Solution:**
|
| 173 |
+
- Video generation can take 30-120 seconds
|
| 174 |
+
- Vercel Hobby: 10s timeout (may fail for long videos)
|
| 175 |
+
- Upgrade to Pro for 60s timeout
|
| 176 |
+
- Or use webhook/polling pattern
|
| 177 |
+
|
| 178 |
+
### Issue: CORS errors
|
| 179 |
+
**Solution:**
|
| 180 |
+
- Should not happen (same-origin)
|
| 181 |
+
- If using custom domain, ensure it's properly configured
|
| 182 |
+
|
| 183 |
+
### Issue: Slow first request
|
| 184 |
+
**Solution:**
|
| 185 |
+
- Cold start is normal (2-5 seconds)
|
| 186 |
+
- Subsequent requests are faster
|
| 187 |
+
- Consider keeping functions warm with cron job
|
| 188 |
+
|
| 189 |
+
---
|
| 190 |
+
|
| 191 |
+
## π Security Best Practices
|
| 192 |
+
|
| 193 |
+
### β
Already Implemented:
|
| 194 |
+
- API token in environment variables (not in code)
|
| 195 |
+
- .gitignore excludes .env files
|
| 196 |
+
- Serverless functions are isolated
|
| 197 |
+
|
| 198 |
+
### π― Recommended:
|
| 199 |
+
1. **Rate Limiting:** Add rate limiting to prevent abuse
|
| 200 |
+
2. **Authentication:** Add user auth for production
|
| 201 |
+
3. **Input Validation:** Already implemented in API
|
| 202 |
+
4. **Monitoring:** Enable Vercel Analytics
|
| 203 |
+
|
| 204 |
+
---
|
| 205 |
+
|
| 206 |
+
## π Scaling
|
| 207 |
+
|
| 208 |
+
### Current Setup:
|
| 209 |
+
- β
Serverless (auto-scales)
|
| 210 |
+
- β
No server management
|
| 211 |
+
- β
Pay per use
|
| 212 |
+
|
| 213 |
+
### For High Traffic:
|
| 214 |
+
1. **Add Caching:** Cache model metadata
|
| 215 |
+
2. **Add Queue:** Use queue for video generation
|
| 216 |
+
3. **Add Database:** Store generation history
|
| 217 |
+
4. **Add CDN:** Serve videos via CDN
|
| 218 |
+
|
| 219 |
+
---
|
| 220 |
+
|
| 221 |
+
## π― Next Steps After Deployment
|
| 222 |
+
|
| 223 |
+
1. **Test All Models**
|
| 224 |
+
- Try each model (runway, hailuo, cogvideox, etc.)
|
| 225 |
+
- Test different prompts
|
| 226 |
+
- Verify video quality
|
| 227 |
+
|
| 228 |
+
2. **Share Your App**
|
| 229 |
+
- Share the Vercel URL
|
| 230 |
+
- Add to your portfolio
|
| 231 |
+
- Share on social media
|
| 232 |
+
|
| 233 |
+
3. **Monitor Usage**
|
| 234 |
+
- Check Vercel Analytics
|
| 235 |
+
- Monitor Replicate costs
|
| 236 |
+
- Track popular models
|
| 237 |
+
|
| 238 |
+
4. **Iterate**
|
| 239 |
+
- Add more features
|
| 240 |
+
- Improve UI
|
| 241 |
+
- Add user accounts
|
| 242 |
+
|
| 243 |
+
---
|
| 244 |
+
|
| 245 |
+
## π Deployment Checklist
|
| 246 |
+
|
| 247 |
+
Before deploying:
|
| 248 |
+
- [x] API functions created (`/api/*.js`)
|
| 249 |
+
- [x] package.json with dependencies
|
| 250 |
+
- [x] Frontend updated to use `/api` endpoints
|
| 251 |
+
- [x] .gitignore excludes sensitive files
|
| 252 |
+
- [x] All changes pushed to GitHub
|
| 253 |
+
- [ ] Replicate API token ready
|
| 254 |
+
- [ ] Vercel account created
|
| 255 |
+
- [ ] Project imported to Vercel
|
| 256 |
+
- [ ] Environment variable added
|
| 257 |
+
- [ ] Deployment successful
|
| 258 |
+
- [ ] Test video generation works
|
| 259 |
+
|
| 260 |
+
---
|
| 261 |
+
|
| 262 |
+
## π You're Ready!
|
| 263 |
+
|
| 264 |
+
Your project is fully prepared for Vercel deployment with:
|
| 265 |
+
- β
5 AI models (up to 10s videos)
|
| 266 |
+
- β
Beautiful animated UI
|
| 267 |
+
- β
Serverless architecture
|
| 268 |
+
- β
Complete documentation
|
| 269 |
+
- β
GitHub repository
|
| 270 |
+
|
| 271 |
+
**Just deploy and start generating videos! πβ¨**
|
| 272 |
+
|
| 273 |
+
---
|
| 274 |
+
|
| 275 |
+
## π Support
|
| 276 |
+
|
| 277 |
+
If you encounter issues:
|
| 278 |
+
1. Check this guide's troubleshooting section
|
| 279 |
+
2. Review Vercel logs in Dashboard
|
| 280 |
+
3. Check GitHub Issues
|
| 281 |
+
4. Vercel Discord: https://vercel.com/discord
|
| 282 |
+
|
| 283 |
+
**Your deployment URL will be:**
|
| 284 |
+
`https://video-ai-[random].vercel.app`
|
| 285 |
+
|
| 286 |
+
(You can customize this in Vercel settings)
|
VideoAI_Free_Colab.ipynb
ADDED
|
@@ -0,0 +1,340 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"cells": [
|
| 3 |
+
{
|
| 4 |
+
"cell_type": "markdown",
|
| 5 |
+
"metadata": {
|
| 6 |
+
"id": "header"
|
| 7 |
+
},
|
| 8 |
+
"source": [
|
| 9 |
+
"# π¬ VideoAI - Free Video Generation\n",
|
| 10 |
+
"\n",
|
| 11 |
+
"## 100% Free AI Video Generation using Google Colab!\n",
|
| 12 |
+
"\n",
|
| 13 |
+
"**Features:**\n",
|
| 14 |
+
"- β
Completely Free\n",
|
| 15 |
+
"- β
No Credit Card Required\n",
|
| 16 |
+
"- β
High Quality Videos\n",
|
| 17 |
+
"- β
Multiple Models\n",
|
| 18 |
+
"\n",
|
| 19 |
+
"**Instructions:**\n",
|
| 20 |
+
"1. Click Runtime β Run all\n",
|
| 21 |
+
"2. Wait 2-3 minutes for setup\n",
|
| 22 |
+
"3. Enter your prompt in the last cell\n",
|
| 23 |
+
"4. Generate and download your video!\n",
|
| 24 |
+
"\n",
|
| 25 |
+
"---"
|
| 26 |
+
]
|
| 27 |
+
},
|
| 28 |
+
{
|
| 29 |
+
"cell_type": "code",
|
| 30 |
+
"execution_count": 1,
|
| 31 |
+
"metadata": {
|
| 32 |
+
"id": "install"
|
| 33 |
+
},
|
| 34 |
+
"outputs": [
|
| 35 |
+
{
|
| 36 |
+
"name": "stdout",
|
| 37 |
+
"output_type": "stream",
|
| 38 |
+
"text": [
|
| 39 |
+
"π¦ Installing dependencies...\n",
|
| 40 |
+
"zsh:1: no matches found: imageio[ffmpeg]\n",
|
| 41 |
+
"β
Dependencies installed!\n"
|
| 42 |
+
]
|
| 43 |
+
}
|
| 44 |
+
],
|
| 45 |
+
"source": [
|
| 46 |
+
"# Step 1: Install Dependencies\n",
|
| 47 |
+
"print(\"π¦ Installing dependencies...\")\n",
|
| 48 |
+
"!pip install -q diffusers transformers accelerate imageio[ffmpeg] flask flask-cors pyngrok\n",
|
| 49 |
+
"print(\"β
Dependencies installed!\")"
|
| 50 |
+
]
|
| 51 |
+
},
|
| 52 |
+
{
|
| 53 |
+
"cell_type": "code",
|
| 54 |
+
"execution_count": 2,
|
| 55 |
+
"metadata": {
|
| 56 |
+
"id": "imports"
|
| 57 |
+
},
|
| 58 |
+
"outputs": [
|
| 59 |
+
{
|
| 60 |
+
"name": "stderr",
|
| 61 |
+
"output_type": "stream",
|
| 62 |
+
"text": [
|
| 63 |
+
"/Users/sravyalu/VideoAI/.venv/lib/python3.13/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
|
| 64 |
+
" from .autonotebook import tqdm as notebook_tqdm\n"
|
| 65 |
+
]
|
| 66 |
+
},
|
| 67 |
+
{
|
| 68 |
+
"name": "stdout",
|
| 69 |
+
"output_type": "stream",
|
| 70 |
+
"text": [
|
| 71 |
+
"β
Libraries imported!\n",
|
| 72 |
+
"π₯ GPU Available: False\n"
|
| 73 |
+
]
|
| 74 |
+
}
|
| 75 |
+
],
|
| 76 |
+
"source": [
|
| 77 |
+
"# Step 2: Import Libraries\n",
|
| 78 |
+
"import torch\n",
|
| 79 |
+
"from diffusers import CogVideoXPipeline\n",
|
| 80 |
+
"from diffusers.utils import export_to_video\n",
|
| 81 |
+
"import imageio\n",
|
| 82 |
+
"from IPython.display import Video, display\n",
|
| 83 |
+
"import os\n",
|
| 84 |
+
"\n",
|
| 85 |
+
"print(\"β
Libraries imported!\")\n",
|
| 86 |
+
"print(f\"π₯ GPU Available: {torch.cuda.is_available()}\")\n",
|
| 87 |
+
"if torch.cuda.is_available():\n",
|
| 88 |
+
" print(f\" GPU Name: {torch.cuda.get_device_name(0)}\")"
|
| 89 |
+
]
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"cell_type": "code",
|
| 93 |
+
"execution_count": 3,
|
| 94 |
+
"metadata": {
|
| 95 |
+
"id": "load_model"
|
| 96 |
+
},
|
| 97 |
+
"outputs": [
|
| 98 |
+
{
|
| 99 |
+
"name": "stdout",
|
| 100 |
+
"output_type": "stream",
|
| 101 |
+
"text": [
|
| 102 |
+
"π€ Loading CogVideoX-2B model...\n",
|
| 103 |
+
"β³ This may take 2-3 minutes...\n"
|
| 104 |
+
]
|
| 105 |
+
},
|
| 106 |
+
{
|
| 107 |
+
"name": "stderr",
|
| 108 |
+
"output_type": "stream",
|
| 109 |
+
"text": [
|
| 110 |
+
"Fetching 14 files: 100%|ββββββββββ| 14/14 [02:32<00:00, 10.88s/it]\n",
|
| 111 |
+
"Loading pipeline components...: 0%| | 0/5 [00:00<?, ?it/s]`torch_dtype` is deprecated! Use `dtype` instead!\n",
|
| 112 |
+
"Loading checkpoint shards: 100%|ββββββββββ| 2/2 [00:03<00:00, 1.89s/it]\n",
|
| 113 |
+
"Loading pipeline components...: 80%|ββββββββ | 4/5 [00:06<00:01, 1.53s/it]\n"
|
| 114 |
+
]
|
| 115 |
+
},
|
| 116 |
+
{
|
| 117 |
+
"ename": "ValueError",
|
| 118 |
+
"evalue": "The component <class 'transformers.models.t5.tokenization_t5._LazyModule.__getattr__.<locals>.Placeholder'> of <class 'diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_pretrained', 'from_pretrained'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained'], 'ProcessorMixin': ['save_pretrained', 'from_pretrained'], 'ImageProcessingMixin': ['save_pretrained', 'from_pretrained'], 'ORTModule': ['save_pretrained', 'from_pretrained']}.",
|
| 119 |
+
"output_type": "error",
|
| 120 |
+
"traceback": [
|
| 121 |
+
"\u001b[31m---------------------------------------------------------------------------\u001b[39m",
|
| 122 |
+
"\u001b[31mValueError\u001b[39m Traceback (most recent call last)",
|
| 123 |
+
"\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[3]\u001b[39m\u001b[32m, line 5\u001b[39m\n\u001b[32m 2\u001b[39m \u001b[38;5;28mprint\u001b[39m(\u001b[33m\"\u001b[39m\u001b[33mπ€ Loading CogVideoX-2B model...\u001b[39m\u001b[33m\"\u001b[39m)\n\u001b[32m 3\u001b[39m \u001b[38;5;28mprint\u001b[39m(\u001b[33m\"\u001b[39m\u001b[33mβ³ This may take 2-3 minutes...\u001b[39m\u001b[33m\"\u001b[39m)\n\u001b[32m----> \u001b[39m\u001b[32m5\u001b[39m pipe = \u001b[43mCogVideoXPipeline\u001b[49m\u001b[43m.\u001b[49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\n\u001b[32m 6\u001b[39m \u001b[43m \u001b[49m\u001b[33;43m\"\u001b[39;49m\u001b[33;43mTHUDM/CogVideoX-2b\u001b[39;49m\u001b[33;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[32m 7\u001b[39m \u001b[43m \u001b[49m\u001b[43mtorch_dtype\u001b[49m\u001b[43m=\u001b[49m\u001b[43mtorch\u001b[49m\u001b[43m.\u001b[49m\u001b[43mfloat16\u001b[49m\n\u001b[32m 8\u001b[39m \u001b[43m)\u001b[49m\n\u001b[32m 10\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m torch.cuda.is_available():\n\u001b[32m 11\u001b[39m pipe.to(\u001b[33m\"\u001b[39m\u001b[33mcuda\u001b[39m\u001b[33m\"\u001b[39m)\n",
|
| 124 |
+
"\u001b[36mFile \u001b[39m\u001b[32m~/VideoAI/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py:114\u001b[39m, in \u001b[36mvalidate_hf_hub_args.<locals>._inner_fn\u001b[39m\u001b[34m(*args, **kwargs)\u001b[39m\n\u001b[32m 111\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m check_use_auth_token:\n\u001b[32m 112\u001b[39m kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.\u001b[34m__name__\u001b[39m, has_token=has_token, kwargs=kwargs)\n\u001b[32m--> \u001b[39m\u001b[32m114\u001b[39m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfn\u001b[49m\u001b[43m(\u001b[49m\u001b[43m*\u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43m*\u001b[49m\u001b[43m*\u001b[49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
|
| 125 |
+
"\u001b[36mFile \u001b[39m\u001b[32m~/VideoAI/.venv/lib/python3.13/site-packages/diffusers/pipelines/pipeline_utils.py:1025\u001b[39m, in \u001b[36mDiffusionPipeline.from_pretrained\u001b[39m\u001b[34m(cls, pretrained_model_name_or_path, **kwargs)\u001b[39m\n\u001b[32m 1018\u001b[39m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[32m 1019\u001b[39m \u001b[38;5;66;03m# load sub model\u001b[39;00m\n\u001b[32m 1020\u001b[39m sub_model_dtype = (\n\u001b[32m 1021\u001b[39m torch_dtype.get(name, torch_dtype.get(\u001b[33m\"\u001b[39m\u001b[33mdefault\u001b[39m\u001b[33m\"\u001b[39m, torch.float32))\n\u001b[32m 1022\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(torch_dtype, \u001b[38;5;28mdict\u001b[39m)\n\u001b[32m 1023\u001b[39m \u001b[38;5;28;01melse\u001b[39;00m torch_dtype\n\u001b[32m 1024\u001b[39m )\n\u001b[32m-> \u001b[39m\u001b[32m1025\u001b[39m loaded_sub_model = \u001b[43mload_sub_model\u001b[49m\u001b[43m(\u001b[49m\n\u001b[32m 1026\u001b[39m \u001b[43m \u001b[49m\u001b[43mlibrary_name\u001b[49m\u001b[43m=\u001b[49m\u001b[43mlibrary_name\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1027\u001b[39m \u001b[43m \u001b[49m\u001b[43mclass_name\u001b[49m\u001b[43m=\u001b[49m\u001b[43mclass_name\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1028\u001b[39m \u001b[43m \u001b[49m\u001b[43mimportable_classes\u001b[49m\u001b[43m=\u001b[49m\u001b[43mimportable_classes\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1029\u001b[39m \u001b[43m \u001b[49m\u001b[43mpipelines\u001b[49m\u001b[43m=\u001b[49m\u001b[43mpipelines\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1030\u001b[39m \u001b[43m \u001b[49m\u001b[43mis_pipeline_module\u001b[49m\u001b[43m=\u001b[49m\u001b[43mis_pipeline_module\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1031\u001b[39m \u001b[43m \u001b[49m\u001b[43mpipeline_class\u001b[49m\u001b[43m=\u001b[49m\u001b[43mpipeline_class\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1032\u001b[39m \u001b[43m \u001b[49m\u001b[43mtorch_dtype\u001b[49m\u001b[43m=\u001b[49m\u001b[43msub_model_dtype\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1033\u001b[39m \u001b[43m \u001b[49m\u001b[43mprovider\u001b[49m\u001b[43m=\u001b[49m\u001b[43mprovider\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1034\u001b[39m \u001b[43m \u001b[49m\u001b[43msess_options\u001b[49m\u001b[43m=\u001b[49m\u001b[43msess_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1035\u001b[39m \u001b[43m \u001b[49m\u001b[43mdevice_map\u001b[49m\u001b[43m=\u001b[49m\u001b[43mcurrent_device_map\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1036\u001b[39m \u001b[43m \u001b[49m\u001b[43mmax_memory\u001b[49m\u001b[43m=\u001b[49m\u001b[43mmax_memory\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1037\u001b[39m \u001b[43m \u001b[49m\u001b[43moffload_folder\u001b[49m\u001b[43m=\u001b[49m\u001b[43moffload_folder\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1038\u001b[39m \u001b[43m \u001b[49m\u001b[43moffload_state_dict\u001b[49m\u001b[43m=\u001b[49m\u001b[43moffload_state_dict\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1039\u001b[39m \u001b[43m \u001b[49m\u001b[43mmodel_variants\u001b[49m\u001b[43m=\u001b[49m\u001b[43mmodel_variants\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1040\u001b[39m \u001b[43m \u001b[49m\u001b[43mname\u001b[49m\u001b[43m=\u001b[49m\u001b[43mname\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1041\u001b[39m \u001b[43m \u001b[49m\u001b[43mfrom_flax\u001b[49m\u001b[43m=\u001b[49m\u001b[43mfrom_flax\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1042\u001b[39m \u001b[43m \u001b[49m\u001b[43mvariant\u001b[49m\u001b[43m=\u001b[49m\u001b[43mvariant\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1043\u001b[39m \u001b[43m \u001b[49m\u001b[43mlow_cpu_mem_usage\u001b[49m\u001b[43m=\u001b[49m\u001b[43mlow_cpu_mem_usage\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1044\u001b[39m \u001b[43m \u001b[49m\u001b[43mcached_folder\u001b[49m\u001b[43m=\u001b[49m\u001b[43mcached_folder\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1045\u001b[39m \u001b[43m \u001b[49m\u001b[43muse_safetensors\u001b[49m\u001b[43m=\u001b[49m\u001b[43muse_safetensors\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1046\u001b[39m \u001b[43m \u001b[49m\u001b[43mdduf_entries\u001b[49m\u001b[43m=\u001b[49m\u001b[43mdduf_entries\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1047\u001b[39m \u001b[43m \u001b[49m\u001b[43mprovider_options\u001b[49m\u001b[43m=\u001b[49m\u001b[43mprovider_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1048\u001b[39m \u001b[43m \u001b[49m\u001b[43mquantization_config\u001b[49m\u001b[43m=\u001b[49m\u001b[43mquantization_config\u001b[49m\u001b[43m,\u001b[49m\n\u001b[32m 1049\u001b[39m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[32m 1050\u001b[39m logger.info(\n\u001b[32m 1051\u001b[39m \u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33mLoaded \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m as \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mclass_name\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m from `\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mname\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m` subfolder of \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m.\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m 1052\u001b[39m )\n\u001b[32m 1054\u001b[39m init_kwargs[name] = loaded_sub_model \u001b[38;5;66;03m# UNet(...), # DiffusionSchedule(...)\u001b[39;00m\n",
|
| 126 |
+
"\u001b[36mFile \u001b[39m\u001b[32m~/VideoAI/.venv/lib/python3.13/site-packages/diffusers/pipelines/pipeline_loading_utils.py:770\u001b[39m, in \u001b[36mload_sub_model\u001b[39m\u001b[34m(library_name, class_name, importable_classes, pipelines, is_pipeline_module, pipeline_class, torch_dtype, provider, sess_options, device_map, max_memory, offload_folder, offload_state_dict, model_variants, name, from_flax, variant, low_cpu_mem_usage, cached_folder, use_safetensors, dduf_entries, provider_options, quantization_config)\u001b[39m\n\u001b[32m 766\u001b[39m \u001b[38;5;28;01mif\u001b[39;00m is_dummy_path \u001b[38;5;129;01mand\u001b[39;00m \u001b[33m\"\u001b[39m\u001b[33mdummy\u001b[39m\u001b[33m\"\u001b[39m \u001b[38;5;129;01min\u001b[39;00m none_module:\n\u001b[32m 767\u001b[39m \u001b[38;5;66;03m# call class_obj for nice error message of missing requirements\u001b[39;00m\n\u001b[32m 768\u001b[39m class_obj()\n\u001b[32m--> \u001b[39m\u001b[32m770\u001b[39m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[32m 771\u001b[39m \u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33mThe component \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mclass_obj\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m of \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpipeline_class\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m cannot be loaded as it does not seem to have\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m 772\u001b[39m \u001b[33mf\u001b[39m\u001b[33m\"\u001b[39m\u001b[33m any of the loading methods defined in \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mALL_IMPORTABLE_CLASSES\u001b[38;5;132;01m}\u001b[39;00m\u001b[33m.\u001b[39m\u001b[33m\"\u001b[39m\n\u001b[32m 773\u001b[39m )\n\u001b[32m 775\u001b[39m load_method = _get_load_method(class_obj, load_method_name, is_dduf=dduf_entries \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[32m 777\u001b[39m \u001b[38;5;66;03m# add kwargs to loading method\u001b[39;00m\n",
|
| 127 |
+
"\u001b[31mValueError\u001b[39m: The component <class 'transformers.models.t5.tokenization_t5._LazyModule.__getattr__.<locals>.Placeholder'> of <class 'diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_pretrained', 'from_pretrained'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained'], 'ProcessorMixin': ['save_pretrained', 'from_pretrained'], 'ImageProcessingMixin': ['save_pretrained', 'from_pretrained'], 'ORTModule': ['save_pretrained', 'from_pretrained']}."
|
| 128 |
+
]
|
| 129 |
+
}
|
| 130 |
+
],
|
| 131 |
+
"source": [
|
| 132 |
+
"# Step 3: Load Model (CogVideoX-2B - Smaller, faster)\n",
|
| 133 |
+
"print(\"π€ Loading CogVideoX-2B model...\")\n",
|
| 134 |
+
"print(\"β³ This may take 2-3 minutes...\")\n",
|
| 135 |
+
"\n",
|
| 136 |
+
"pipe = CogVideoXPipeline.from_pretrained(\n",
|
| 137 |
+
" \"THUDM/CogVideoX-2b\",\n",
|
| 138 |
+
" torch_dtype=torch.float16\n",
|
| 139 |
+
")\n",
|
| 140 |
+
"\n",
|
| 141 |
+
"if torch.cuda.is_available():\n",
|
| 142 |
+
" pipe.to(\"cuda\")\n",
|
| 143 |
+
" print(\"β
Model loaded on GPU!\")\n",
|
| 144 |
+
"else:\n",
|
| 145 |
+
" print(\"β οΈ Running on CPU (will be slower)\")\n",
|
| 146 |
+
"\n",
|
| 147 |
+
"print(\"π¬ Ready to generate videos!\")"
|
| 148 |
+
]
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"cell_type": "code",
|
| 152 |
+
"execution_count": null,
|
| 153 |
+
"metadata": {
|
| 154 |
+
"id": "generate_function"
|
| 155 |
+
},
|
| 156 |
+
"outputs": [],
|
| 157 |
+
"source": [
|
| 158 |
+
"# Step 4: Generate Video Function\n",
|
| 159 |
+
"def generate_video(prompt, output_path=\"output.mp4\", num_frames=49):\n",
|
| 160 |
+
" \"\"\"\n",
|
| 161 |
+
" Generate a video from a text prompt\n",
|
| 162 |
+
" \n",
|
| 163 |
+
" Args:\n",
|
| 164 |
+
" prompt: Text description of the video\n",
|
| 165 |
+
" output_path: Where to save the video\n",
|
| 166 |
+
" num_frames: Number of frames (default 49 = ~6 seconds)\n",
|
| 167 |
+
" \"\"\"\n",
|
| 168 |
+
" print(f\"π¨ Generating video for: {prompt}\")\n",
|
| 169 |
+
" print(f\"β³ This will take 30-60 seconds...\")\n",
|
| 170 |
+
" \n",
|
| 171 |
+
" # Generate\n",
|
| 172 |
+
" video_frames = pipe(\n",
|
| 173 |
+
" prompt=prompt,\n",
|
| 174 |
+
" num_frames=num_frames,\n",
|
| 175 |
+
" guidance_scale=6.0,\n",
|
| 176 |
+
" num_inference_steps=50\n",
|
| 177 |
+
" ).frames[0]\n",
|
| 178 |
+
" \n",
|
| 179 |
+
" # Save\n",
|
| 180 |
+
" export_to_video(video_frames, output_path, fps=8)\n",
|
| 181 |
+
" \n",
|
| 182 |
+
" print(f\"β
Video saved to: {output_path}\")\n",
|
| 183 |
+
" \n",
|
| 184 |
+
" # Display\n",
|
| 185 |
+
" display(Video(output_path, embed=True, width=512))\n",
|
| 186 |
+
" \n",
|
| 187 |
+
" return output_path\n",
|
| 188 |
+
"\n",
|
| 189 |
+
"print(\"β
Generation function ready!\")"
|
| 190 |
+
]
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"cell_type": "markdown",
|
| 194 |
+
"metadata": {
|
| 195 |
+
"id": "generate_header"
|
| 196 |
+
},
|
| 197 |
+
"source": [
|
| 198 |
+
"---\n",
|
| 199 |
+
"## π¬ Generate Your Video!\n",
|
| 200 |
+
"\n",
|
| 201 |
+
"**Edit the prompt below and run the cell:**"
|
| 202 |
+
]
|
| 203 |
+
},
|
| 204 |
+
{
|
| 205 |
+
"cell_type": "code",
|
| 206 |
+
"execution_count": null,
|
| 207 |
+
"metadata": {
|
| 208 |
+
"id": "generate"
|
| 209 |
+
},
|
| 210 |
+
"outputs": [],
|
| 211 |
+
"source": [
|
| 212 |
+
"# π¬ EDIT THIS PROMPT:\n",
|
| 213 |
+
"prompt = \"A golden retriever running through a field of sunflowers at sunset\"\n",
|
| 214 |
+
"\n",
|
| 215 |
+
"# Generate!\n",
|
| 216 |
+
"video_path = generate_video(prompt)\n",
|
| 217 |
+
"\n",
|
| 218 |
+
"# Download link\n",
|
| 219 |
+
"from google.colab import files\n",
|
| 220 |
+
"print(\"\\nπ₯ Click below to download your video:\")\n",
|
| 221 |
+
"files.download(video_path)"
|
| 222 |
+
]
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"cell_type": "markdown",
|
| 226 |
+
"metadata": {
|
| 227 |
+
"id": "examples"
|
| 228 |
+
},
|
| 229 |
+
"source": [
|
| 230 |
+
"---\n",
|
| 231 |
+
"## π‘ Example Prompts\n",
|
| 232 |
+
"\n",
|
| 233 |
+
"Try these prompts:\n",
|
| 234 |
+
"\n",
|
| 235 |
+
"```python\n",
|
| 236 |
+
"# Nature\n",
|
| 237 |
+
"\"A majestic waterfall cascading down rocks in a lush rainforest\"\n",
|
| 238 |
+
"\n",
|
| 239 |
+
"# Animals\n",
|
| 240 |
+
"\"A cat playing with a ball of yarn on a cozy carpet\"\n",
|
| 241 |
+
"\n",
|
| 242 |
+
"# Urban\n",
|
| 243 |
+
"\"City street at night with cars and neon lights\"\n",
|
| 244 |
+
"\n",
|
| 245 |
+
"# Fantasy\n",
|
| 246 |
+
"\"A dragon flying over a medieval castle at dawn\"\n",
|
| 247 |
+
"\n",
|
| 248 |
+
"# Action\n",
|
| 249 |
+
"\"A sports car drifting around a corner on a race track\"\n",
|
| 250 |
+
"```\n",
|
| 251 |
+
"\n",
|
| 252 |
+
"**Just copy a prompt above and paste it in the previous cell!**"
|
| 253 |
+
]
|
| 254 |
+
},
|
| 255 |
+
{
|
| 256 |
+
"cell_type": "markdown",
|
| 257 |
+
"metadata": {
|
| 258 |
+
"id": "api_header"
|
| 259 |
+
},
|
| 260 |
+
"source": [
|
| 261 |
+
"---\n",
|
| 262 |
+
"## π Optional: Create API Server (Advanced)\n",
|
| 263 |
+
"\n",
|
| 264 |
+
"Run this to create a temporary API endpoint you can connect to from your app!"
|
| 265 |
+
]
|
| 266 |
+
},
|
| 267 |
+
{
|
| 268 |
+
"cell_type": "code",
|
| 269 |
+
"execution_count": null,
|
| 270 |
+
"metadata": {
|
| 271 |
+
"id": "api_server"
|
| 272 |
+
},
|
| 273 |
+
"outputs": [],
|
| 274 |
+
"source": [
|
| 275 |
+
"# Optional: Create Flask API + ngrok tunnel\n",
|
| 276 |
+
"from flask import Flask, request, jsonify, send_file\n",
|
| 277 |
+
"from flask_cors import CORS\n",
|
| 278 |
+
"from pyngrok import ngrok\n",
|
| 279 |
+
"import threading\n",
|
| 280 |
+
"\n",
|
| 281 |
+
"app = Flask(__name__)\n",
|
| 282 |
+
"CORS(app)\n",
|
| 283 |
+
"\n",
|
| 284 |
+
"@app.route('/generate-video', methods=['POST'])\n",
|
| 285 |
+
"def api_generate():\n",
|
| 286 |
+
" data = request.json\n",
|
| 287 |
+
" prompt = data.get('prompt', '')\n",
|
| 288 |
+
" \n",
|
| 289 |
+
" if not prompt:\n",
|
| 290 |
+
" return jsonify({'error': 'Prompt required'}), 400\n",
|
| 291 |
+
" \n",
|
| 292 |
+
" output_path = generate_video(prompt)\n",
|
| 293 |
+
" \n",
|
| 294 |
+
" return jsonify({\n",
|
| 295 |
+
" 'video_url': f'/download/{output_path}',\n",
|
| 296 |
+
" 'prompt': prompt\n",
|
| 297 |
+
" })\n",
|
| 298 |
+
"\n",
|
| 299 |
+
"@app.route('/download/<path:filename>')\n",
|
| 300 |
+
"def download(filename):\n",
|
| 301 |
+
" return send_file(filename, mimetype='video/mp4')\n",
|
| 302 |
+
"\n",
|
| 303 |
+
"# Start ngrok tunnel\n",
|
| 304 |
+
"public_url = ngrok.connect(5000)\n",
|
| 305 |
+
"print(f\"\\nπ Your API is live at: {public_url}\")\n",
|
| 306 |
+
"print(f\"\\nπ Use this URL in your frontend!\")\n",
|
| 307 |
+
"print(f\" Replace http://localhost:5000 with: {public_url}\")\n",
|
| 308 |
+
"\n",
|
| 309 |
+
"# Run Flask\n",
|
| 310 |
+
"threading.Thread(target=lambda: app.run(port=5000, debug=False)).start()"
|
| 311 |
+
]
|
| 312 |
+
}
|
| 313 |
+
],
|
| 314 |
+
"metadata": {
|
| 315 |
+
"accelerator": "GPU",
|
| 316 |
+
"colab": {
|
| 317 |
+
"gpuType": "T4",
|
| 318 |
+
"provenance": []
|
| 319 |
+
},
|
| 320 |
+
"kernelspec": {
|
| 321 |
+
"display_name": ".venv",
|
| 322 |
+
"language": "python",
|
| 323 |
+
"name": "python3"
|
| 324 |
+
},
|
| 325 |
+
"language_info": {
|
| 326 |
+
"codemirror_mode": {
|
| 327 |
+
"name": "ipython",
|
| 328 |
+
"version": 3
|
| 329 |
+
},
|
| 330 |
+
"file_extension": ".py",
|
| 331 |
+
"mimetype": "text/x-python",
|
| 332 |
+
"name": "python",
|
| 333 |
+
"nbconvert_exporter": "python",
|
| 334 |
+
"pygments_lexer": "ipython3",
|
| 335 |
+
"version": "3.13.6"
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
"nbformat": 4,
|
| 339 |
+
"nbformat_minor": 0
|
| 340 |
+
}
|
app.log
ADDED
|
@@ -0,0 +1,326 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
2025-10-02 17:53:41,654 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 2 |
+
2025-10-02 17:53:41,654 - __main__ - INFO - Available models: cogvideox-5b, ltx-video, stable-video-diffusion, zeroscope, animatediff
|
| 3 |
+
2025-10-02 17:53:41,654 - __main__ - INFO - Default model: zeroscope
|
| 4 |
+
2025-10-02 17:53:43,606 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 5 |
+
* Running on all addresses (0.0.0.0)
|
| 6 |
+
* Running on http://127.0.0.1:5000
|
| 7 |
+
* Running on http://192.168.4.190:5000
|
| 8 |
+
2025-10-02 17:53:43,606 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 9 |
+
2025-10-02 17:54:09,795 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:54:09] "GET /health HTTP/1.1" 200 -
|
| 10 |
+
2025-10-02 17:54:18,642 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:54:18] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 11 |
+
2025-10-02 17:54:18,643 - __main__ - INFO - Generating video with zeroscope
|
| 12 |
+
2025-10-02 17:54:18,643 - __main__ - INFO - Base prompt: Ocean waves crashing on a beach at sunset...
|
| 13 |
+
2025-10-02 17:54:18,643 - __main__ - INFO - Enhanced prompt: Ocean waves crashing on a beach at sunset...
|
| 14 |
+
2025-10-02 17:54:18,643 - __main__ - INFO - Initializing client for zeroscope: https://cerspense-zeroscope-v2-xl.hf.space/
|
| 15 |
+
2025-10-02 17:54:19,329 - __main__ - ERROR - Failed to initialize client for zeroscope: Could not fetch config for https://cerspense-zeroscope-v2-xl.hf.space/
|
| 16 |
+
2025-10-02 17:54:19,330 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:54:19] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 17 |
+
2025-10-02 17:55:14,687 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 18 |
+
2025-10-02 17:55:14,688 - __main__ - INFO - Available models: cogvideox-5b, ltx-video, stable-video-diffusion, zeroscope, animatediff
|
| 19 |
+
2025-10-02 17:55:14,688 - __main__ - INFO - Default model: zeroscope
|
| 20 |
+
2025-10-02 17:55:14,692 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 21 |
+
* Running on all addresses (0.0.0.0)
|
| 22 |
+
* Running on http://127.0.0.1:5000
|
| 23 |
+
* Running on http://192.168.4.190:5000
|
| 24 |
+
2025-10-02 17:55:14,692 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 25 |
+
2025-10-02 17:56:54,359 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 26 |
+
2025-10-02 17:56:54,359 - __main__ - INFO - Available models: cogvideox-5b, ltx-video, stable-video-diffusion, zeroscope, animatediff
|
| 27 |
+
2025-10-02 17:56:54,359 - __main__ - INFO - Default model: zeroscope
|
| 28 |
+
2025-10-02 17:56:54,364 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 29 |
+
* Running on all addresses (0.0.0.0)
|
| 30 |
+
* Running on http://127.0.0.1:5000
|
| 31 |
+
* Running on http://192.168.4.190:5000
|
| 32 |
+
2025-10-02 17:56:54,364 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 33 |
+
2025-10-02 17:57:07,937 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:57:07] "GET /health HTTP/1.1" 200 -
|
| 34 |
+
2025-10-02 17:57:12,851 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:57:12] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 35 |
+
2025-10-02 17:57:12,852 - __main__ - INFO - Generating video with zeroscope
|
| 36 |
+
2025-10-02 17:57:12,853 - __main__ - INFO - Base prompt: Ocean waves crashing on a beach at sunset...
|
| 37 |
+
2025-10-02 17:57:12,853 - __main__ - INFO - Enhanced prompt: Ocean waves crashing on a beach at sunset...
|
| 38 |
+
2025-10-02 17:57:12,853 - __main__ - INFO - Initializing client for zeroscope: cerspense/zeroscope_v2_XL
|
| 39 |
+
2025-10-02 17:57:13,046 - __main__ - ERROR - Failed to initialize client for zeroscope: 401 Client Error. (Request ID: Root=1-68df1f69-3bd50d43109a1f7137c8e78d;afa9698e-cf83-48e0-b777-22b733b9764a)
|
| 40 |
+
|
| 41 |
+
Repository Not Found for url: https://huggingface.co/api/spaces/cerspense/zeroscope_v2_XL.
|
| 42 |
+
Please make sure you specified the correct `repo_id` and `repo_type`.
|
| 43 |
+
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
|
| 44 |
+
Invalid username or password.
|
| 45 |
+
2025-10-02 17:57:13,047 - __main__ - ERROR - This might be because:
|
| 46 |
+
2025-10-02 17:57:13,047 - __main__ - ERROR - 1. The Hugging Face Space is not available or sleeping
|
| 47 |
+
2025-10-02 17:57:13,047 - __main__ - ERROR - 2. The Space URL has changed
|
| 48 |
+
2025-10-02 17:57:13,047 - __main__ - ERROR - 3. The Space requires authentication
|
| 49 |
+
2025-10-02 17:57:13,047 - __main__ - ERROR - 4. Network connectivity issues
|
| 50 |
+
2025-10-02 17:57:13,047 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:57:13] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 51 |
+
2025-10-02 17:57:16,639 - __main__ - INFO - Generating video with zeroscope
|
| 52 |
+
2025-10-02 17:57:16,639 - __main__ - INFO - Base prompt: City street with cars at night...
|
| 53 |
+
2025-10-02 17:57:16,639 - __main__ - INFO - Enhanced prompt: City street with cars at night...
|
| 54 |
+
2025-10-02 17:57:16,639 - __main__ - INFO - Initializing client for zeroscope: cerspense/zeroscope_v2_XL
|
| 55 |
+
2025-10-02 17:57:16,737 - __main__ - ERROR - Failed to initialize client for zeroscope: 401 Client Error. (Request ID: Root=1-68df1f6c-70bfc27d0c291ca3578a98f2;bbdb219a-c7c5-4b21-82d1-d1784f274ffc)
|
| 56 |
+
|
| 57 |
+
Repository Not Found for url: https://huggingface.co/api/spaces/cerspense/zeroscope_v2_XL.
|
| 58 |
+
Please make sure you specified the correct `repo_id` and `repo_type`.
|
| 59 |
+
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
|
| 60 |
+
Invalid username or password.
|
| 61 |
+
2025-10-02 17:57:16,737 - __main__ - ERROR - This might be because:
|
| 62 |
+
2025-10-02 17:57:16,737 - __main__ - ERROR - 1. The Hugging Face Space is not available or sleeping
|
| 63 |
+
2025-10-02 17:57:16,737 - __main__ - ERROR - 2. The Space URL has changed
|
| 64 |
+
2025-10-02 17:57:16,737 - __main__ - ERROR - 3. The Space requires authentication
|
| 65 |
+
2025-10-02 17:57:16,737 - __main__ - ERROR - 4. Network connectivity issues
|
| 66 |
+
2025-10-02 17:57:16,738 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:57:16] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 67 |
+
2025-10-02 17:57:19,218 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:57:19] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 68 |
+
2025-10-02 17:57:19,220 - __main__ - INFO - Generating video with zeroscope
|
| 69 |
+
2025-10-02 17:57:19,220 - __main__ - INFO - Base prompt: A bird flying through clouds...
|
| 70 |
+
2025-10-02 17:57:19,220 - __main__ - INFO - Enhanced prompt: A bird flying through clouds...
|
| 71 |
+
2025-10-02 17:57:19,220 - __main__ - INFO - Initializing client for zeroscope: cerspense/zeroscope_v2_XL
|
| 72 |
+
2025-10-02 17:57:19,319 - __main__ - ERROR - Failed to initialize client for zeroscope: 401 Client Error. (Request ID: Root=1-68df1f6f-0a6d6cdb58a979b61e8d69ae;362721b8-5562-4ab4-be2a-367a03fd3d10)
|
| 73 |
+
|
| 74 |
+
Repository Not Found for url: https://huggingface.co/api/spaces/cerspense/zeroscope_v2_XL.
|
| 75 |
+
Please make sure you specified the correct `repo_id` and `repo_type`.
|
| 76 |
+
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
|
| 77 |
+
Invalid username or password.
|
| 78 |
+
2025-10-02 17:57:19,319 - __main__ - ERROR - This might be because:
|
| 79 |
+
2025-10-02 17:57:19,319 - __main__ - ERROR - 1. The Hugging Face Space is not available or sleeping
|
| 80 |
+
2025-10-02 17:57:19,319 - __main__ - ERROR - 2. The Space URL has changed
|
| 81 |
+
2025-10-02 17:57:19,319 - __main__ - ERROR - 3. The Space requires authentication
|
| 82 |
+
2025-10-02 17:57:19,319 - __main__ - ERROR - 4. Network connectivity issues
|
| 83 |
+
2025-10-02 17:57:19,319 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:57:19] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 84 |
+
2025-10-02 17:58:07,524 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 85 |
+
2025-10-02 17:58:07,524 - __main__ - INFO - Available models: cogvideox-5b, ltx-video, stable-video-diffusion, zeroscope, demo, animatediff
|
| 86 |
+
2025-10-02 17:58:07,524 - __main__ - INFO - Default model: zeroscope
|
| 87 |
+
2025-10-02 17:58:07,530 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 88 |
+
* Running on all addresses (0.0.0.0)
|
| 89 |
+
* Running on http://127.0.0.1:5000
|
| 90 |
+
* Running on http://192.168.4.190:5000
|
| 91 |
+
2025-10-02 17:58:07,530 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 92 |
+
2025-10-02 17:59:05,672 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:05] "GET /health HTTP/1.1" 200 -
|
| 93 |
+
2025-10-02 17:59:06,434 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:06] "GET /health HTTP/1.1" 200 -
|
| 94 |
+
2025-10-02 17:59:06,720 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:06] "GET /health HTTP/1.1" 200 -
|
| 95 |
+
2025-10-02 17:59:06,873 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:06] "GET /health HTTP/1.1" 200 -
|
| 96 |
+
2025-10-02 17:59:07,034 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:07] "GET /health HTTP/1.1" 200 -
|
| 97 |
+
2025-10-02 17:59:07,216 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:07] "GET /health HTTP/1.1" 200 -
|
| 98 |
+
2025-10-02 17:59:12,392 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:12] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 99 |
+
2025-10-02 17:59:12,394 - __main__ - INFO - Generating video with zeroscope
|
| 100 |
+
2025-10-02 17:59:12,394 - __main__ - INFO - Base prompt: A bird flying through clouds...
|
| 101 |
+
2025-10-02 17:59:12,394 - __main__ - INFO - Enhanced prompt: A bird flying through clouds...
|
| 102 |
+
2025-10-02 17:59:12,394 - __main__ - INFO - Initializing client for zeroscope: cerspense/zeroscope_v2_XL
|
| 103 |
+
2025-10-02 17:59:12,576 - __main__ - ERROR - Failed to initialize client for zeroscope: 401 Client Error. (Request ID: Root=1-68df1fe0-4f816ee535fd2a9923f62d4d;d7ddf5c0-8093-41a3-8733-94ec61e5d72f)
|
| 104 |
+
|
| 105 |
+
Repository Not Found for url: https://huggingface.co/api/spaces/cerspense/zeroscope_v2_XL.
|
| 106 |
+
Please make sure you specified the correct `repo_id` and `repo_type`.
|
| 107 |
+
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
|
| 108 |
+
Invalid username or password.
|
| 109 |
+
2025-10-02 17:59:12,576 - __main__ - ERROR - This might be because:
|
| 110 |
+
2025-10-02 17:59:12,576 - __main__ - ERROR - 1. The Hugging Face Space is not available or sleeping
|
| 111 |
+
2025-10-02 17:59:12,576 - __main__ - ERROR - 2. The Space URL has changed
|
| 112 |
+
2025-10-02 17:59:12,576 - __main__ - ERROR - 3. The Space requires authentication
|
| 113 |
+
2025-10-02 17:59:12,576 - __main__ - ERROR - 4. Network connectivity issues
|
| 114 |
+
2025-10-02 17:59:12,577 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:12] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 115 |
+
2025-10-02 17:59:22,467 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:22] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 116 |
+
2025-10-02 17:59:22,469 - __main__ - INFO - Generating video with zeroscope
|
| 117 |
+
2025-10-02 17:59:22,469 - __main__ - INFO - Base prompt: Ocean waves crashing on a beach at sunset...
|
| 118 |
+
2025-10-02 17:59:22,469 - __main__ - INFO - Enhanced prompt: Ocean waves crashing on a beach at sunset...
|
| 119 |
+
2025-10-02 17:59:22,469 - __main__ - INFO - Initializing client for zeroscope: cerspense/zeroscope_v2_XL
|
| 120 |
+
2025-10-02 17:59:22,558 - __main__ - ERROR - Failed to initialize client for zeroscope: 401 Client Error. (Request ID: Root=1-68df1fea-6ffeda02318801ef2d739ba4;ae684b4c-cc89-4681-88c3-8091a04c818b)
|
| 121 |
+
|
| 122 |
+
Repository Not Found for url: https://huggingface.co/api/spaces/cerspense/zeroscope_v2_XL.
|
| 123 |
+
Please make sure you specified the correct `repo_id` and `repo_type`.
|
| 124 |
+
If you are trying to access a private or gated repo, make sure you are authenticated. For more details, see https://huggingface.co/docs/huggingface_hub/authentication
|
| 125 |
+
Invalid username or password.
|
| 126 |
+
2025-10-02 17:59:22,559 - __main__ - ERROR - This might be because:
|
| 127 |
+
2025-10-02 17:59:22,559 - __main__ - ERROR - 1. The Hugging Face Space is not available or sleeping
|
| 128 |
+
2025-10-02 17:59:22,559 - __main__ - ERROR - 2. The Space URL has changed
|
| 129 |
+
2025-10-02 17:59:22,559 - __main__ - ERROR - 3. The Space requires authentication
|
| 130 |
+
2025-10-02 17:59:22,559 - __main__ - ERROR - 4. Network connectivity issues
|
| 131 |
+
2025-10-02 17:59:22,559 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 17:59:22] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 132 |
+
2025-10-02 18:02:37,798 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 133 |
+
2025-10-02 18:02:37,798 - __main__ - INFO - Available models: cogvideox-5b, ltx-video, stable-video-diffusion, zeroscope, demo, animatediff
|
| 134 |
+
2025-10-02 18:02:37,798 - __main__ - INFO - Default model: zeroscope
|
| 135 |
+
2025-10-02 18:02:37,803 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 136 |
+
* Running on all addresses (0.0.0.0)
|
| 137 |
+
* Running on http://127.0.0.1:5000
|
| 138 |
+
* Running on http://192.168.4.190:5000
|
| 139 |
+
2025-10-02 18:02:37,803 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 140 |
+
2025-10-02 18:03:16,181 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:03:16] "GET /health HTTP/1.1" 200 -
|
| 141 |
+
2025-10-02 18:03:22,804 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:03:22] "OPTIONS /test-video HTTP/1.1" 200 -
|
| 142 |
+
2025-10-02 18:03:22,806 - __main__ - INFO - Test mode: Simulating video generation for: photorealistic, 4k, high detail, A golden retriever running through a field of flowers [Tracking sho
|
| 143 |
+
2025-10-02 18:03:22,807 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:03:22] "POST /test-video HTTP/1.1" 200 -
|
| 144 |
+
2025-10-02 18:03:57,727 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:03:57] "OPTIONS /test-video HTTP/1.1" 200 -
|
| 145 |
+
2025-10-02 18:03:57,730 - __main__ - INFO - Test mode: Simulating video generation for: cinematic, movie scene, A golden retriever running through a field of flowers [Dolly in], fog, misty
|
| 146 |
+
2025-10-02 18:03:57,730 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:03:57] "POST /test-video HTTP/1.1" 200 -
|
| 147 |
+
2025-10-02 18:06:45,377 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 148 |
+
2025-10-02 18:06:45,377 - __main__ - INFO - Available models: cogvideox-5b, cogvideox-2b, hunyuan-video, stable-video-diffusion, demo, animatediff
|
| 149 |
+
2025-10-02 18:06:45,377 - __main__ - INFO - Default model: zeroscope
|
| 150 |
+
2025-10-02 18:06:45,382 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 151 |
+
* Running on all addresses (0.0.0.0)
|
| 152 |
+
* Running on http://127.0.0.1:5000
|
| 153 |
+
* Running on http://192.168.4.190:5000
|
| 154 |
+
2025-10-02 18:06:45,382 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 155 |
+
2025-10-02 18:07:13,526 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:13] "GET /health HTTP/1.1" 200 -
|
| 156 |
+
2025-10-02 18:07:14,400 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:14] "GET /health HTTP/1.1" 200 -
|
| 157 |
+
2025-10-02 18:07:14,899 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:14] "GET /health HTTP/1.1" 200 -
|
| 158 |
+
2025-10-02 18:07:15,075 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:15] "GET /health HTTP/1.1" 200 -
|
| 159 |
+
2025-10-02 18:07:15,279 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:15] "GET /health HTTP/1.1" 200 -
|
| 160 |
+
2025-10-02 18:07:15,469 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:15] "GET /health HTTP/1.1" 200 -
|
| 161 |
+
2025-10-02 18:07:15,624 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:15] "GET /health HTTP/1.1" 200 -
|
| 162 |
+
2025-10-02 18:07:44,158 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:44] "GET /health HTTP/1.1" 200 -
|
| 163 |
+
2025-10-02 18:07:44,783 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:44] "GET /health HTTP/1.1" 200 -
|
| 164 |
+
2025-10-02 18:07:44,933 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:44] "GET /health HTTP/1.1" 200 -
|
| 165 |
+
2025-10-02 18:07:45,110 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:45] "GET /health HTTP/1.1" 200 -
|
| 166 |
+
2025-10-02 18:07:47,306 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:47] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 167 |
+
2025-10-02 18:07:47,307 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:07:47] "[31m[1mPOST /generate-video HTTP/1.1[0m" 400 -
|
| 168 |
+
2025-10-02 18:08:24,475 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 169 |
+
2025-10-02 18:08:24,475 - __main__ - INFO - Available models: cogvideox-5b, cogvideox-2b, hunyuan-video, stable-video-diffusion, demo
|
| 170 |
+
2025-10-02 18:08:24,475 - __main__ - INFO - Default model: cogvideox-5b
|
| 171 |
+
2025-10-02 18:08:24,480 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 172 |
+
* Running on all addresses (0.0.0.0)
|
| 173 |
+
* Running on http://127.0.0.1:5000
|
| 174 |
+
* Running on http://192.168.4.190:5000
|
| 175 |
+
2025-10-02 18:08:24,480 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 176 |
+
2025-10-02 18:09:21,217 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 177 |
+
2025-10-02 18:09:21,217 - __main__ - INFO - Available models: cogvideox-5b, cogvideox-2b, hunyuan-video, stable-video-diffusion, demo
|
| 178 |
+
2025-10-02 18:09:21,217 - __main__ - INFO - Default model: cogvideox-5b
|
| 179 |
+
2025-10-02 18:09:21,223 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 180 |
+
* Running on all addresses (0.0.0.0)
|
| 181 |
+
* Running on http://127.0.0.1:5000
|
| 182 |
+
* Running on http://192.168.4.190:5000
|
| 183 |
+
2025-10-02 18:09:21,223 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 184 |
+
2025-10-02 18:09:37,237 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:37] "GET /health HTTP/1.1" 200 -
|
| 185 |
+
2025-10-02 18:09:37,862 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:37] "GET /health HTTP/1.1" 200 -
|
| 186 |
+
2025-10-02 18:09:38,020 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:38] "GET /health HTTP/1.1" 200 -
|
| 187 |
+
2025-10-02 18:09:38,205 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:38] "GET /health HTTP/1.1" 200 -
|
| 188 |
+
2025-10-02 18:09:38,378 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:38] "GET /health HTTP/1.1" 200 -
|
| 189 |
+
2025-10-02 18:09:42,105 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:42] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 190 |
+
2025-10-02 18:09:42,106 - __main__ - ERROR - Unexpected error in generate_video: 'zeroscope'
|
| 191 |
+
Traceback (most recent call last):
|
| 192 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/backend_enhanced.py", line 172, in generate_video
|
| 193 |
+
model_info = get_model_info(model_id)
|
| 194 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/models_config.py", line 152, in get_model_info
|
| 195 |
+
return VIDEO_MODELS.get(model_id, VIDEO_MODELS["zeroscope"])
|
| 196 |
+
~~~~~~~~~~~~^^^^^^^^^^^^^
|
| 197 |
+
KeyError: 'zeroscope'
|
| 198 |
+
2025-10-02 18:09:42,109 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:42] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 199 |
+
2025-10-02 18:09:46,270 - __main__ - ERROR - Unexpected error in generate_video: 'zeroscope'
|
| 200 |
+
Traceback (most recent call last):
|
| 201 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/backend_enhanced.py", line 172, in generate_video
|
| 202 |
+
model_info = get_model_info(model_id)
|
| 203 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/models_config.py", line 152, in get_model_info
|
| 204 |
+
return VIDEO_MODELS.get(model_id, VIDEO_MODELS["zeroscope"])
|
| 205 |
+
~~~~~~~~~~~~^^^^^^^^^^^^^
|
| 206 |
+
KeyError: 'zeroscope'
|
| 207 |
+
2025-10-02 18:09:46,272 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:46] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 208 |
+
2025-10-02 18:09:47,092 - __main__ - ERROR - Unexpected error in generate_video: 'zeroscope'
|
| 209 |
+
Traceback (most recent call last):
|
| 210 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/backend_enhanced.py", line 172, in generate_video
|
| 211 |
+
model_info = get_model_info(model_id)
|
| 212 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/models_config.py", line 152, in get_model_info
|
| 213 |
+
return VIDEO_MODELS.get(model_id, VIDEO_MODELS["zeroscope"])
|
| 214 |
+
~~~~~~~~~~~~^^^^^^^^^^^^^
|
| 215 |
+
KeyError: 'zeroscope'
|
| 216 |
+
2025-10-02 18:09:47,093 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:47] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 217 |
+
2025-10-02 18:09:47,841 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:47] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 218 |
+
2025-10-02 18:09:47,843 - __main__ - ERROR - Unexpected error in generate_video: 'zeroscope'
|
| 219 |
+
Traceback (most recent call last):
|
| 220 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/backend_enhanced.py", line 172, in generate_video
|
| 221 |
+
model_info = get_model_info(model_id)
|
| 222 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/models_config.py", line 152, in get_model_info
|
| 223 |
+
return VIDEO_MODELS.get(model_id, VIDEO_MODELS["zeroscope"])
|
| 224 |
+
~~~~~~~~~~~~^^^^^^^^^^^^^
|
| 225 |
+
KeyError: 'zeroscope'
|
| 226 |
+
2025-10-02 18:09:47,843 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:47] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 227 |
+
2025-10-02 18:09:50,014 - __main__ - ERROR - Unexpected error in generate_video: 'zeroscope'
|
| 228 |
+
Traceback (most recent call last):
|
| 229 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/backend_enhanced.py", line 172, in generate_video
|
| 230 |
+
model_info = get_model_info(model_id)
|
| 231 |
+
File "/Users/sravyalu/VideoAI/hailuo-clone/models_config.py", line 152, in get_model_info
|
| 232 |
+
return VIDEO_MODELS.get(model_id, VIDEO_MODELS["zeroscope"])
|
| 233 |
+
~~~~~~~~~~~~^^^^^^^^^^^^^
|
| 234 |
+
KeyError: 'zeroscope'
|
| 235 |
+
2025-10-02 18:09:50,015 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:09:50] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 236 |
+
2025-10-02 18:10:38,401 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 237 |
+
2025-10-02 18:10:38,401 - __main__ - INFO - Available models: cogvideox-5b, cogvideox-2b, hunyuan-video, stable-video-diffusion, demo
|
| 238 |
+
2025-10-02 18:10:38,403 - __main__ - INFO - Default model: cogvideox-5b
|
| 239 |
+
2025-10-02 18:10:38,408 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 240 |
+
* Running on all addresses (0.0.0.0)
|
| 241 |
+
* Running on http://127.0.0.1:5000
|
| 242 |
+
* Running on http://192.168.4.190:5000
|
| 243 |
+
2025-10-02 18:10:38,408 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 244 |
+
2025-10-02 18:11:03,775 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:11:03] "GET /health HTTP/1.1" 200 -
|
| 245 |
+
2025-10-02 18:11:04,402 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:11:04] "GET /health HTTP/1.1" 200 -
|
| 246 |
+
2025-10-02 18:11:04,603 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:11:04] "GET /health HTTP/1.1" 200 -
|
| 247 |
+
2025-10-02 18:11:04,786 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:11:04] "GET /health HTTP/1.1" 200 -
|
| 248 |
+
2025-10-02 18:11:04,947 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:11:04] "GET /health HTTP/1.1" 200 -
|
| 249 |
+
2025-10-02 18:12:23,373 - werkzeug - INFO - 192.168.4.190 - - [02/Oct/2025 18:12:23] "[33mGET / HTTP/1.1[0m" 404 -
|
| 250 |
+
2025-10-02 18:12:23,397 - werkzeug - INFO - 192.168.4.190 - - [02/Oct/2025 18:12:23] "[33mGET /favicon.ico HTTP/1.1[0m" 404 -
|
| 251 |
+
2025-10-02 18:12:33,931 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:12:33] "[33mGET / HTTP/1.1[0m" 404 -
|
| 252 |
+
2025-10-02 18:12:33,953 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:12:33] "[33mGET /favicon.ico HTTP/1.1[0m" 404 -
|
| 253 |
+
2025-10-02 18:13:04,841 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:13:04] "GET /models HTTP/1.1" 200 -
|
| 254 |
+
2025-10-02 18:13:04,847 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:13:04] "GET /health HTTP/1.1" 200 -
|
| 255 |
+
2025-10-02 18:13:34,569 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:13:34] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 256 |
+
2025-10-02 18:13:34,572 - __main__ - INFO - Generating video with cogvideox-5b
|
| 257 |
+
2025-10-02 18:13:34,572 - __main__ - INFO - Base prompt: A sports car drifting around a corner on a race track...
|
| 258 |
+
2025-10-02 18:13:34,572 - __main__ - INFO - Enhanced prompt: A sports car drifting around a corner on a race track...
|
| 259 |
+
2025-10-02 18:13:34,572 - __main__ - INFO - Initializing client for cogvideox-5b: zai-org/CogVideoX-5B-Space
|
| 260 |
+
2025-10-02 18:13:35,962 - __main__ - ERROR - Failed to initialize client for cogvideox-5b: Expecting value: line 1 column 1 (char 0)
|
| 261 |
+
2025-10-02 18:13:35,962 - __main__ - ERROR - This might be because:
|
| 262 |
+
2025-10-02 18:13:35,963 - __main__ - ERROR - 1. The Hugging Face Space is not available or sleeping
|
| 263 |
+
2025-10-02 18:13:35,963 - __main__ - ERROR - 2. The Space URL has changed
|
| 264 |
+
2025-10-02 18:13:35,963 - __main__ - ERROR - 3. The Space requires authentication
|
| 265 |
+
2025-10-02 18:13:35,963 - __main__ - ERROR - 4. Network connectivity issues
|
| 266 |
+
2025-10-02 18:13:35,965 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:13:35] "[35m[1mPOST /generate-video HTTP/1.1[0m" 503 -
|
| 267 |
+
2025-10-02 18:43:58,867 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 268 |
+
2025-10-02 18:43:58,868 - __main__ - INFO - Available models: cogvideox-5b, cogvideox-2b, hunyuan-video, stable-video-diffusion, demo
|
| 269 |
+
2025-10-02 18:43:58,868 - __main__ - INFO - Default model: cogvideox-5b
|
| 270 |
+
2025-10-02 18:43:58,885 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 271 |
+
* Running on all addresses (0.0.0.0)
|
| 272 |
+
* Running on http://127.0.0.1:5000
|
| 273 |
+
* Running on http://192.168.4.190:5000
|
| 274 |
+
2025-10-02 18:43:58,885 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 275 |
+
2025-10-02 18:44:14,573 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:14] "GET /models HTTP/1.1" 200 -
|
| 276 |
+
2025-10-02 18:44:14,578 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:14] "GET /health HTTP/1.1" 200 -
|
| 277 |
+
2025-10-02 18:44:28,544 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:28] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 278 |
+
2025-10-02 18:44:28,546 - __main__ - INFO - Generating video with cogvideox-5b
|
| 279 |
+
2025-10-02 18:44:28,546 - __main__ - INFO - Base prompt: A sports car drifting around a corner on a race track...
|
| 280 |
+
2025-10-02 18:44:28,546 - __main__ - INFO - Enhanced prompt: A sports car drifting around a corner on a race track...
|
| 281 |
+
2025-10-02 18:44:28,546 - __main__ - INFO - Initializing client for cogvideox-5b: zai-org/CogVideoX-5B-Space
|
| 282 |
+
2025-10-02 18:44:29,510 - httpx - INFO - HTTP Request: GET https://zai-org-cogvideox-5b-space.hf.space/config "HTTP/1.1 200 OK"
|
| 283 |
+
2025-10-02 18:44:29,940 - httpx - INFO - HTTP Request: GET https://zai-org-cogvideox-5b-space.hf.space/gradio_api/info?serialize=False "HTTP/1.1 200 OK"
|
| 284 |
+
2025-10-02 18:44:30,013 - __main__ - INFO - Successfully connected to cogvideox-5b
|
| 285 |
+
2025-10-02 18:44:30,013 - __main__ - INFO - Calling CogVideoX with prompt: A sports car drifting around a corner on a race track
|
| 286 |
+
2025-10-02 18:44:30,014 - __main__ - ERROR - Model API call failed: No value provided for required argument: image_input
|
| 287 |
+
2025-10-02 18:44:30,014 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:30] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 288 |
+
2025-10-02 18:44:30,375 - httpx - INFO - HTTP Request: GET https://zai-org-cogvideox-5b-space.hf.space/gradio_api/heartbeat/ce2a480d-3b6d-40ba-b8c9-80accf29c38e "HTTP/1.1 200 OK"
|
| 289 |
+
2025-10-02 18:44:45,785 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:45] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 290 |
+
2025-10-02 18:44:45,787 - __main__ - INFO - Generating video with cogvideox-5b
|
| 291 |
+
2025-10-02 18:44:45,787 - __main__ - INFO - Base prompt: A sports car drifting around a corner on a race track...
|
| 292 |
+
2025-10-02 18:44:45,787 - __main__ - INFO - Enhanced prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, high contrast...
|
| 293 |
+
2025-10-02 18:44:45,787 - __main__ - INFO - Calling CogVideoX with prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, h
|
| 294 |
+
2025-10-02 18:44:45,788 - __main__ - ERROR - Model API call failed: No value provided for required argument: image_input
|
| 295 |
+
2025-10-02 18:44:45,788 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:45] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 296 |
+
2025-10-02 18:44:46,871 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:46] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 297 |
+
2025-10-02 18:44:46,872 - __main__ - INFO - Generating video with cogvideox-5b
|
| 298 |
+
2025-10-02 18:44:46,872 - __main__ - INFO - Base prompt: A sports car drifting around a corner on a race track...
|
| 299 |
+
2025-10-02 18:44:46,872 - __main__ - INFO - Enhanced prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, high contrast...
|
| 300 |
+
2025-10-02 18:44:46,872 - __main__ - INFO - Calling CogVideoX with prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, h
|
| 301 |
+
2025-10-02 18:44:46,872 - __main__ - ERROR - Model API call failed: No value provided for required argument: image_input
|
| 302 |
+
2025-10-02 18:44:46,873 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:46] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 303 |
+
2025-10-02 18:44:47,663 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:47] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 304 |
+
2025-10-02 18:44:47,665 - __main__ - INFO - Generating video with cogvideox-5b
|
| 305 |
+
2025-10-02 18:44:47,665 - __main__ - INFO - Base prompt: A sports car drifting around a corner on a race track...
|
| 306 |
+
2025-10-02 18:44:47,665 - __main__ - INFO - Enhanced prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, high contrast...
|
| 307 |
+
2025-10-02 18:44:47,665 - __main__ - INFO - Calling CogVideoX with prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, h
|
| 308 |
+
2025-10-02 18:44:47,665 - __main__ - ERROR - Model API call failed: No value provided for required argument: image_input
|
| 309 |
+
2025-10-02 18:44:47,665 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:47] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 310 |
+
2025-10-02 18:44:52,935 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:52] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 311 |
+
2025-10-02 18:44:52,937 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:44:52] "[31m[1mPOST /generate-video HTTP/1.1[0m" 400 -
|
| 312 |
+
2025-10-02 18:45:00,842 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:45:00] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 313 |
+
2025-10-02 18:45:00,843 - __main__ - INFO - Generating video with cogvideox-5b
|
| 314 |
+
2025-10-02 18:45:00,843 - __main__ - INFO - Base prompt: A sports car drifting around a corner on a race track...
|
| 315 |
+
2025-10-02 18:45:00,843 - __main__ - INFO - Enhanced prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, high contrast...
|
| 316 |
+
2025-10-02 18:45:00,843 - __main__ - INFO - Calling CogVideoX with prompt: cartoon style, animated, A sports car drifting around a corner on a race track, dramatic lighting, h
|
| 317 |
+
2025-10-02 18:45:00,844 - __main__ - ERROR - Model API call failed: No value provided for required argument: image_input
|
| 318 |
+
2025-10-02 18:45:00,844 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:45:00] "[35m[1mPOST /generate-video HTTP/1.1[0m" 500 -
|
| 319 |
+
2025-10-02 18:46:50,523 - __main__ - INFO - Starting Enhanced Flask server on port 5000 (debug=False)
|
| 320 |
+
2025-10-02 18:46:50,523 - __main__ - INFO - Available models: cogvideox-5b, cogvideox-2b, hunyuan-video, stable-video-diffusion, demo
|
| 321 |
+
2025-10-02 18:46:50,523 - __main__ - INFO - Default model: cogvideox-5b
|
| 322 |
+
2025-10-02 18:46:50,528 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 323 |
+
* Running on all addresses (0.0.0.0)
|
| 324 |
+
* Running on http://127.0.0.1:5000
|
| 325 |
+
* Running on http://192.168.4.190:5000
|
| 326 |
+
2025-10-02 18:46:50,528 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
app_local.log
ADDED
|
@@ -0,0 +1,30 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
2025-10-02 18:37:57,423 - __main__ - INFO - ============================================================
|
| 2 |
+
2025-10-02 18:37:57,426 - __main__ - INFO - π Starting Local Video Generation Backend
|
| 3 |
+
2025-10-02 18:37:57,426 - __main__ - INFO - ============================================================
|
| 4 |
+
2025-10-02 18:37:57,426 - __main__ - INFO - Device: cpu
|
| 5 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - GPU Available: False
|
| 6 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - β οΈ No GPU detected - will run on CPU (slower)
|
| 7 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - π‘ For faster generation, use a computer with NVIDIA GPU
|
| 8 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - ============================================================
|
| 9 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - π Model will be downloaded on first request (~5GB)
|
| 10 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - π First generation will take longer (model loading)
|
| 11 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - π Subsequent generations will be faster
|
| 12 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - ============================================================
|
| 13 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - π Starting server on http://localhost:5000
|
| 14 |
+
2025-10-02 18:37:57,427 - __main__ - INFO - ============================================================
|
| 15 |
+
2025-10-02 18:37:57,435 - werkzeug - INFO - [31m[1mWARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.[0m
|
| 16 |
+
* Running on all addresses (0.0.0.0)
|
| 17 |
+
* Running on http://127.0.0.1:5000
|
| 18 |
+
* Running on http://192.168.4.190:5000
|
| 19 |
+
2025-10-02 18:37:57,435 - werkzeug - INFO - [33mPress CTRL+C to quit[0m
|
| 20 |
+
2025-10-02 18:38:15,980 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:38:15] "GET /health HTTP/1.1" 200 -
|
| 21 |
+
2025-10-02 18:38:19,373 - __main__ - INFO - π€ Loading CogVideoX-2B model...
|
| 22 |
+
2025-10-02 18:38:19,373 - __main__ - INFO - β³ This may take 2-5 minutes on first run...
|
| 23 |
+
2025-10-02 18:38:56,298 - __main__ - INFO - β οΈ Running on CPU (will be slower)
|
| 24 |
+
2025-10-02 18:38:56,301 - __main__ - INFO - π‘ For faster generation, use a computer with NVIDIA GPU
|
| 25 |
+
2025-10-02 18:38:56,301 - __main__ - INFO - π¬ Model ready to generate videos!
|
| 26 |
+
2025-10-02 18:38:56,302 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:38:56] "POST /initialize HTTP/1.1" 200 -
|
| 27 |
+
2025-10-02 18:38:56,344 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:38:56] "GET /health HTTP/1.1" 200 -
|
| 28 |
+
2025-10-02 18:38:59,287 - werkzeug - INFO - 127.0.0.1 - - [02/Oct/2025 18:38:59] "OPTIONS /generate-video HTTP/1.1" 200 -
|
| 29 |
+
2025-10-02 18:38:59,745 - __main__ - INFO - π¨ Generating video for: City street with cars at night, neon lights
|
| 30 |
+
2025-10-02 18:39:00,291 - __main__ - INFO - β³ This will take 30-120 seconds depending on your hardware...
|
backend.py
ADDED
|
@@ -0,0 +1,149 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
from flask import Flask, request, jsonify
|
| 2 |
+
from flask_cors import CORS
|
| 3 |
+
from gradio_client import Client
|
| 4 |
+
import os
|
| 5 |
+
import logging
|
| 6 |
+
from dotenv import load_dotenv
|
| 7 |
+
from datetime import datetime
|
| 8 |
+
|
| 9 |
+
# Load environment variables
|
| 10 |
+
load_dotenv()
|
| 11 |
+
|
| 12 |
+
# Configure logging
|
| 13 |
+
logging.basicConfig(
|
| 14 |
+
level=logging.INFO,
|
| 15 |
+
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
|
| 16 |
+
handlers=[
|
| 17 |
+
logging.FileHandler('app.log'),
|
| 18 |
+
logging.StreamHandler()
|
| 19 |
+
]
|
| 20 |
+
)
|
| 21 |
+
logger = logging.getLogger(__name__)
|
| 22 |
+
|
| 23 |
+
app = Flask(__name__)
|
| 24 |
+
CORS(app) # Enable CORS for frontend requests
|
| 25 |
+
|
| 26 |
+
# Configuration from environment variables
|
| 27 |
+
HF_SPACE_URL = os.getenv('HF_SPACE_URL', 'https://cerspense-zeroscope-v2-xl.hf.space/')
|
| 28 |
+
FLASK_PORT = int(os.getenv('FLASK_PORT', 5000))
|
| 29 |
+
FLASK_DEBUG = os.getenv('FLASK_DEBUG', 'False').lower() == 'true'
|
| 30 |
+
|
| 31 |
+
# Constants
|
| 32 |
+
MAX_PROMPT_LENGTH = 500
|
| 33 |
+
MIN_PROMPT_LENGTH = 3
|
| 34 |
+
|
| 35 |
+
# Initialize client with error handling
|
| 36 |
+
try:
|
| 37 |
+
client = Client(HF_SPACE_URL)
|
| 38 |
+
logger.info(f"Successfully connected to Hugging Face Space: {HF_SPACE_URL}")
|
| 39 |
+
except Exception as e:
|
| 40 |
+
logger.error(f"Failed to initialize Gradio client: {str(e)}")
|
| 41 |
+
client = None
|
| 42 |
+
|
| 43 |
+
def validate_prompt(prompt):
|
| 44 |
+
"""Validate the input prompt."""
|
| 45 |
+
if not prompt or not isinstance(prompt, str):
|
| 46 |
+
return False, "Prompt must be a non-empty string"
|
| 47 |
+
|
| 48 |
+
prompt = prompt.strip()
|
| 49 |
+
|
| 50 |
+
if len(prompt) < MIN_PROMPT_LENGTH:
|
| 51 |
+
return False, f"Prompt must be at least {MIN_PROMPT_LENGTH} characters long"
|
| 52 |
+
|
| 53 |
+
if len(prompt) > MAX_PROMPT_LENGTH:
|
| 54 |
+
return False, f"Prompt must not exceed {MAX_PROMPT_LENGTH} characters"
|
| 55 |
+
|
| 56 |
+
return True, prompt
|
| 57 |
+
|
| 58 |
+
@app.route('/health', methods=['GET'])
|
| 59 |
+
def health_check():
|
| 60 |
+
"""Health check endpoint."""
|
| 61 |
+
return jsonify({
|
| 62 |
+
'status': 'healthy',
|
| 63 |
+
'timestamp': datetime.now().isoformat(),
|
| 64 |
+
'client_initialized': client is not None
|
| 65 |
+
})
|
| 66 |
+
|
| 67 |
+
@app.route('/generate-video', methods=['POST'])
|
| 68 |
+
def generate_video():
|
| 69 |
+
"""Generate video from text prompt."""
|
| 70 |
+
try:
|
| 71 |
+
# Check if client is initialized
|
| 72 |
+
if client is None:
|
| 73 |
+
logger.error("Gradio client not initialized")
|
| 74 |
+
return jsonify({'error': 'Service unavailable. Please check server configuration.'}), 503
|
| 75 |
+
|
| 76 |
+
# Validate request data
|
| 77 |
+
if not request.json:
|
| 78 |
+
return jsonify({'error': 'Request must be JSON'}), 400
|
| 79 |
+
|
| 80 |
+
data = request.json
|
| 81 |
+
prompt = data.get('prompt', '').strip()
|
| 82 |
+
|
| 83 |
+
# Validate prompt
|
| 84 |
+
is_valid, result = validate_prompt(prompt)
|
| 85 |
+
if not is_valid:
|
| 86 |
+
logger.warning(f"Invalid prompt: {result}")
|
| 87 |
+
return jsonify({'error': result}), 400
|
| 88 |
+
|
| 89 |
+
prompt = result
|
| 90 |
+
logger.info(f"Generating video for prompt: {prompt[:50]}...")
|
| 91 |
+
|
| 92 |
+
# Call the HF Space API with timeout handling
|
| 93 |
+
result = client.predict(
|
| 94 |
+
prompt, # Text prompt
|
| 95 |
+
8, # Number of frames (short video)
|
| 96 |
+
512, # Width
|
| 97 |
+
320, # Height
|
| 98 |
+
api_name="/predict"
|
| 99 |
+
)
|
| 100 |
+
|
| 101 |
+
# Extract video path/URL from result
|
| 102 |
+
video_path = result[0] if isinstance(result, list) else result
|
| 103 |
+
|
| 104 |
+
if not video_path:
|
| 105 |
+
logger.error("No video path returned from API")
|
| 106 |
+
return jsonify({'error': 'Failed to generate video. No output received.'}), 500
|
| 107 |
+
|
| 108 |
+
logger.info(f"Video generated successfully: {video_path}")
|
| 109 |
+
return jsonify({
|
| 110 |
+
'video_url': video_path,
|
| 111 |
+
'prompt': prompt,
|
| 112 |
+
'timestamp': datetime.now().isoformat()
|
| 113 |
+
})
|
| 114 |
+
|
| 115 |
+
except ValueError as e:
|
| 116 |
+
logger.error(f"Validation error: {str(e)}")
|
| 117 |
+
return jsonify({'error': f'Invalid input: {str(e)}'}), 400
|
| 118 |
+
|
| 119 |
+
except ConnectionError as e:
|
| 120 |
+
logger.error(f"Connection error: {str(e)}")
|
| 121 |
+
return jsonify({'error': 'Failed to connect to video generation service. Please try again later.'}), 503
|
| 122 |
+
|
| 123 |
+
except TimeoutError as e:
|
| 124 |
+
logger.error(f"Timeout error: {str(e)}")
|
| 125 |
+
return jsonify({'error': 'Request timed out. The service may be busy. Please try again.'}), 504
|
| 126 |
+
|
| 127 |
+
except Exception as e:
|
| 128 |
+
logger.error(f"Unexpected error in generate_video: {str(e)}", exc_info=True)
|
| 129 |
+
return jsonify({'error': 'An unexpected error occurred. Please try again later.'}), 500
|
| 130 |
+
|
| 131 |
+
@app.errorhandler(404)
|
| 132 |
+
def not_found(e):
|
| 133 |
+
"""Handle 404 errors."""
|
| 134 |
+
return jsonify({'error': 'Endpoint not found'}), 404
|
| 135 |
+
|
| 136 |
+
@app.errorhandler(405)
|
| 137 |
+
def method_not_allowed(e):
|
| 138 |
+
"""Handle 405 errors."""
|
| 139 |
+
return jsonify({'error': 'Method not allowed'}), 405
|
| 140 |
+
|
| 141 |
+
@app.errorhandler(500)
|
| 142 |
+
def internal_error(e):
|
| 143 |
+
"""Handle 500 errors."""
|
| 144 |
+
logger.error(f"Internal server error: {str(e)}")
|
| 145 |
+
return jsonify({'error': 'Internal server error'}), 500
|
| 146 |
+
|
| 147 |
+
if __name__ == '__main__':
|
| 148 |
+
logger.info(f"Starting Flask server on port {FLASK_PORT} (debug={FLASK_DEBUG})")
|
| 149 |
+
app.run(host='0.0.0.0', port=FLASK_PORT, debug=FLASK_DEBUG)
|
backend_enhanced.py
ADDED
|
@@ -0,0 +1,368 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
from flask import Flask, request, jsonify, send_file
|
| 2 |
+
from flask_cors import CORS
|
| 3 |
+
from gradio_client import Client
|
| 4 |
+
import os
|
| 5 |
+
import logging
|
| 6 |
+
from dotenv import load_dotenv
|
| 7 |
+
from datetime import datetime
|
| 8 |
+
import base64
|
| 9 |
+
from io import BytesIO
|
| 10 |
+
from PIL import Image
|
| 11 |
+
import tempfile
|
| 12 |
+
|
| 13 |
+
from models_config import (
|
| 14 |
+
VIDEO_MODELS,
|
| 15 |
+
CAMERA_MOVEMENTS,
|
| 16 |
+
VISUAL_EFFECTS,
|
| 17 |
+
VIDEO_STYLES,
|
| 18 |
+
EXAMPLE_PROMPTS,
|
| 19 |
+
get_model_info,
|
| 20 |
+
get_available_models,
|
| 21 |
+
build_enhanced_prompt
|
| 22 |
+
)
|
| 23 |
+
|
| 24 |
+
# Load environment variables
|
| 25 |
+
load_dotenv()
|
| 26 |
+
|
| 27 |
+
# Configure logging
|
| 28 |
+
logging.basicConfig(
|
| 29 |
+
level=logging.INFO,
|
| 30 |
+
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
|
| 31 |
+
handlers=[
|
| 32 |
+
logging.FileHandler('app.log'),
|
| 33 |
+
logging.StreamHandler()
|
| 34 |
+
]
|
| 35 |
+
)
|
| 36 |
+
logger = logging.getLogger(__name__)
|
| 37 |
+
|
| 38 |
+
app = Flask(__name__)
|
| 39 |
+
CORS(app)
|
| 40 |
+
|
| 41 |
+
# Configuration
|
| 42 |
+
FLASK_PORT = int(os.getenv('FLASK_PORT', 5000))
|
| 43 |
+
FLASK_DEBUG = os.getenv('FLASK_DEBUG', 'False').lower() == 'true'
|
| 44 |
+
DEFAULT_MODEL = os.getenv('DEFAULT_MODEL', 'cogvideox-5b')
|
| 45 |
+
|
| 46 |
+
# Constants
|
| 47 |
+
MAX_PROMPT_LENGTH = 1000
|
| 48 |
+
MIN_PROMPT_LENGTH = 3
|
| 49 |
+
|
| 50 |
+
# Model clients cache
|
| 51 |
+
model_clients = {}
|
| 52 |
+
|
| 53 |
+
def get_or_create_client(model_id):
|
| 54 |
+
"""Get or create a Gradio client for the specified model"""
|
| 55 |
+
if model_id not in model_clients:
|
| 56 |
+
try:
|
| 57 |
+
model_info = get_model_info(model_id)
|
| 58 |
+
space_url = model_info['space_url']
|
| 59 |
+
logger.info(f"Initializing client for {model_id}: {space_url}")
|
| 60 |
+
|
| 61 |
+
# Try to connect with timeout
|
| 62 |
+
model_clients[model_id] = Client(space_url, verbose=False)
|
| 63 |
+
logger.info(f"Successfully connected to {model_id}")
|
| 64 |
+
except Exception as e:
|
| 65 |
+
logger.error(f"Failed to initialize client for {model_id}: {str(e)}")
|
| 66 |
+
logger.error(f"This might be because:")
|
| 67 |
+
logger.error(f" 1. The Hugging Face Space is not available or sleeping")
|
| 68 |
+
logger.error(f" 2. The Space URL has changed")
|
| 69 |
+
logger.error(f" 3. The Space requires authentication")
|
| 70 |
+
logger.error(f" 4. Network connectivity issues")
|
| 71 |
+
return None
|
| 72 |
+
return model_clients.get(model_id)
|
| 73 |
+
|
| 74 |
+
def validate_prompt(prompt):
|
| 75 |
+
"""Validate the input prompt"""
|
| 76 |
+
if not prompt or not isinstance(prompt, str):
|
| 77 |
+
return False, "Prompt must be a non-empty string"
|
| 78 |
+
|
| 79 |
+
prompt = prompt.strip()
|
| 80 |
+
|
| 81 |
+
if len(prompt) < MIN_PROMPT_LENGTH:
|
| 82 |
+
return False, f"Prompt must be at least {MIN_PROMPT_LENGTH} characters long"
|
| 83 |
+
|
| 84 |
+
if len(prompt) > MAX_PROMPT_LENGTH:
|
| 85 |
+
return False, f"Prompt must not exceed {MAX_PROMPT_LENGTH} characters"
|
| 86 |
+
|
| 87 |
+
return True, prompt
|
| 88 |
+
|
| 89 |
+
def decode_base64_image(base64_string):
|
| 90 |
+
"""Decode base64 image string to PIL Image"""
|
| 91 |
+
try:
|
| 92 |
+
# Remove data URL prefix if present
|
| 93 |
+
if ',' in base64_string:
|
| 94 |
+
base64_string = base64_string.split(',')[1]
|
| 95 |
+
|
| 96 |
+
image_data = base64.b64decode(base64_string)
|
| 97 |
+
image = Image.open(BytesIO(image_data))
|
| 98 |
+
return image
|
| 99 |
+
except Exception as e:
|
| 100 |
+
logger.error(f"Failed to decode image: {str(e)}")
|
| 101 |
+
return None
|
| 102 |
+
|
| 103 |
+
@app.route('/health', methods=['GET'])
|
| 104 |
+
def health_check():
|
| 105 |
+
"""Health check endpoint"""
|
| 106 |
+
return jsonify({
|
| 107 |
+
'status': 'healthy',
|
| 108 |
+
'timestamp': datetime.now().isoformat(),
|
| 109 |
+
'available_models': list(VIDEO_MODELS.keys()),
|
| 110 |
+
'default_model': DEFAULT_MODEL
|
| 111 |
+
})
|
| 112 |
+
|
| 113 |
+
@app.route('/models', methods=['GET'])
|
| 114 |
+
def list_models():
|
| 115 |
+
"""List all available video generation models"""
|
| 116 |
+
return jsonify({
|
| 117 |
+
'models': get_available_models(),
|
| 118 |
+
'camera_movements': CAMERA_MOVEMENTS,
|
| 119 |
+
'visual_effects': VISUAL_EFFECTS,
|
| 120 |
+
'video_styles': VIDEO_STYLES,
|
| 121 |
+
'example_prompts': EXAMPLE_PROMPTS
|
| 122 |
+
})
|
| 123 |
+
|
| 124 |
+
@app.route('/test-video', methods=['POST'])
|
| 125 |
+
def test_video():
|
| 126 |
+
"""Test endpoint that returns a sample video URL for UI testing"""
|
| 127 |
+
data = request.json
|
| 128 |
+
prompt = data.get('prompt', 'Test prompt')
|
| 129 |
+
|
| 130 |
+
logger.info(f"Test mode: Simulating video generation for: {prompt[:100]}")
|
| 131 |
+
|
| 132 |
+
# Return a sample video URL (Big Buck Bunny - open source test video)
|
| 133 |
+
return jsonify({
|
| 134 |
+
'video_url': 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4',
|
| 135 |
+
'prompt': prompt,
|
| 136 |
+
'enhanced_prompt': prompt,
|
| 137 |
+
'model': 'test-mode',
|
| 138 |
+
'model_name': 'Test Mode (Demo Video)',
|
| 139 |
+
'timestamp': datetime.now().isoformat(),
|
| 140 |
+
'note': 'This is a demo video. Connect to Hugging Face Spaces for real generation.'
|
| 141 |
+
})
|
| 142 |
+
|
| 143 |
+
@app.route('/generate-video', methods=['POST'])
|
| 144 |
+
def generate_video():
|
| 145 |
+
"""Generate video from text prompt with advanced options"""
|
| 146 |
+
try:
|
| 147 |
+
# Validate request data
|
| 148 |
+
if not request.json:
|
| 149 |
+
return jsonify({'error': 'Request must be JSON'}), 400
|
| 150 |
+
|
| 151 |
+
data = request.json
|
| 152 |
+
base_prompt = data.get('prompt', '').strip()
|
| 153 |
+
model_id = data.get('model', DEFAULT_MODEL)
|
| 154 |
+
|
| 155 |
+
# Advanced options (Hailuo-inspired)
|
| 156 |
+
camera_movement = data.get('camera_movement', '')
|
| 157 |
+
visual_effect = data.get('visual_effect', '')
|
| 158 |
+
style = data.get('style', '')
|
| 159 |
+
|
| 160 |
+
# Validate prompt
|
| 161 |
+
is_valid, result = validate_prompt(base_prompt)
|
| 162 |
+
if not is_valid:
|
| 163 |
+
logger.warning(f"Invalid prompt: {result}")
|
| 164 |
+
return jsonify({'error': result}), 400
|
| 165 |
+
|
| 166 |
+
base_prompt = result
|
| 167 |
+
|
| 168 |
+
# Validate model
|
| 169 |
+
if model_id not in VIDEO_MODELS:
|
| 170 |
+
return jsonify({'error': f'Invalid model: {model_id}'}), 400
|
| 171 |
+
|
| 172 |
+
model_info = get_model_info(model_id)
|
| 173 |
+
|
| 174 |
+
# Check if model supports text-to-video
|
| 175 |
+
if model_info['type'] != 'text-to-video':
|
| 176 |
+
return jsonify({'error': f'Model {model_id} does not support text-to-video generation'}), 400
|
| 177 |
+
|
| 178 |
+
# Build enhanced prompt with camera movements and effects
|
| 179 |
+
enhanced_prompt = build_enhanced_prompt(base_prompt, camera_movement, visual_effect, style)
|
| 180 |
+
|
| 181 |
+
logger.info(f"Generating video with {model_id}")
|
| 182 |
+
logger.info(f"Base prompt: {base_prompt[:100]}...")
|
| 183 |
+
logger.info(f"Enhanced prompt: {enhanced_prompt[:150]}...")
|
| 184 |
+
|
| 185 |
+
# Handle demo mode specially
|
| 186 |
+
if model_id == 'demo':
|
| 187 |
+
logger.info("Demo mode activated - returning sample video")
|
| 188 |
+
return jsonify({
|
| 189 |
+
'video_url': 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4',
|
| 190 |
+
'prompt': base_prompt,
|
| 191 |
+
'enhanced_prompt': enhanced_prompt,
|
| 192 |
+
'model': model_id,
|
| 193 |
+
'model_name': model_info['name'],
|
| 194 |
+
'timestamp': datetime.now().isoformat(),
|
| 195 |
+
'note': 'Demo mode: This is a sample video. Select a real model for AI generation.'
|
| 196 |
+
})
|
| 197 |
+
|
| 198 |
+
# Get or create client
|
| 199 |
+
client = get_or_create_client(model_id)
|
| 200 |
+
if client is None:
|
| 201 |
+
return jsonify({'error': 'Failed to connect to video generation service. Try using "Demo Mode" model to test the UI.'}), 503
|
| 202 |
+
|
| 203 |
+
# Generate video based on model type
|
| 204 |
+
try:
|
| 205 |
+
if model_id in ['cogvideox-5b', 'cogvideox-2b']:
|
| 206 |
+
# CogVideoX models - prompt with seed and other params
|
| 207 |
+
logger.info(f"Calling CogVideoX {model_id} with prompt: {enhanced_prompt[:100]}")
|
| 208 |
+
result = client.predict(
|
| 209 |
+
prompt=enhanced_prompt,
|
| 210 |
+
seed=0, # Random seed
|
| 211 |
+
api_name=model_info['api_name']
|
| 212 |
+
)
|
| 213 |
+
elif model_id == 'hunyuan-video':
|
| 214 |
+
# HunyuanVideo model
|
| 215 |
+
logger.info(f"Calling HunyuanVideo with prompt: {enhanced_prompt[:100]}")
|
| 216 |
+
result = client.predict(
|
| 217 |
+
enhanced_prompt,
|
| 218 |
+
api_name=model_info['api_name']
|
| 219 |
+
)
|
| 220 |
+
else:
|
| 221 |
+
# Generic approach for other models
|
| 222 |
+
logger.info(f"Calling {model_id} with generic approach")
|
| 223 |
+
result = client.predict(
|
| 224 |
+
enhanced_prompt,
|
| 225 |
+
api_name=model_info['api_name']
|
| 226 |
+
)
|
| 227 |
+
except Exception as e:
|
| 228 |
+
logger.error(f"Model API call failed: {str(e)}")
|
| 229 |
+
logger.error(f"This usually means:")
|
| 230 |
+
logger.error(f" 1. The Hugging Face Space is sleeping or unavailable")
|
| 231 |
+
logger.error(f" 2. The API has changed")
|
| 232 |
+
logger.error(f" 3. Try using 'Demo Mode' to test the UI")
|
| 233 |
+
return jsonify({'error': f'Video generation failed: {str(e)}. Try Demo Mode or a different model.'}), 500
|
| 234 |
+
|
| 235 |
+
# Extract video path/URL from result
|
| 236 |
+
video_path = result[0] if isinstance(result, list) else result
|
| 237 |
+
|
| 238 |
+
if not video_path:
|
| 239 |
+
logger.error("No video path returned from API")
|
| 240 |
+
return jsonify({'error': 'Failed to generate video. No output received.'}), 500
|
| 241 |
+
|
| 242 |
+
logger.info(f"Video generated successfully: {video_path}")
|
| 243 |
+
return jsonify({
|
| 244 |
+
'video_url': video_path,
|
| 245 |
+
'prompt': base_prompt,
|
| 246 |
+
'enhanced_prompt': enhanced_prompt,
|
| 247 |
+
'model': model_id,
|
| 248 |
+
'model_name': model_info['name'],
|
| 249 |
+
'timestamp': datetime.now().isoformat()
|
| 250 |
+
})
|
| 251 |
+
|
| 252 |
+
except ValueError as e:
|
| 253 |
+
logger.error(f"Validation error: {str(e)}")
|
| 254 |
+
return jsonify({'error': f'Invalid input: {str(e)}'}), 400
|
| 255 |
+
|
| 256 |
+
except ConnectionError as e:
|
| 257 |
+
logger.error(f"Connection error: {str(e)}")
|
| 258 |
+
return jsonify({'error': 'Failed to connect to video generation service. Please try again later.'}), 503
|
| 259 |
+
|
| 260 |
+
except TimeoutError as e:
|
| 261 |
+
logger.error(f"Timeout error: {str(e)}")
|
| 262 |
+
return jsonify({'error': 'Request timed out. The service may be busy. Please try again.'}), 504
|
| 263 |
+
|
| 264 |
+
except Exception as e:
|
| 265 |
+
logger.error(f"Unexpected error in generate_video: {str(e)}", exc_info=True)
|
| 266 |
+
return jsonify({'error': 'An unexpected error occurred. Please try again later.'}), 500
|
| 267 |
+
|
| 268 |
+
@app.route('/generate-video-from-image', methods=['POST'])
|
| 269 |
+
def generate_video_from_image():
|
| 270 |
+
"""Generate video from image with text prompt (Image-to-Video)"""
|
| 271 |
+
try:
|
| 272 |
+
# Validate request data
|
| 273 |
+
if not request.json:
|
| 274 |
+
return jsonify({'error': 'Request must be JSON'}), 400
|
| 275 |
+
|
| 276 |
+
data = request.json
|
| 277 |
+
prompt = data.get('prompt', '').strip()
|
| 278 |
+
image_data = data.get('image', '')
|
| 279 |
+
model_id = data.get('model', 'stable-video-diffusion')
|
| 280 |
+
|
| 281 |
+
# Validate model supports image-to-video
|
| 282 |
+
model_info = get_model_info(model_id)
|
| 283 |
+
if model_info['type'] != 'image-to-video':
|
| 284 |
+
return jsonify({'error': f'Model {model_id} does not support image-to-video generation'}), 400
|
| 285 |
+
|
| 286 |
+
# Decode image
|
| 287 |
+
image = decode_base64_image(image_data)
|
| 288 |
+
if image is None:
|
| 289 |
+
return jsonify({'error': 'Invalid image data'}), 400
|
| 290 |
+
|
| 291 |
+
# Save image to temporary file
|
| 292 |
+
with tempfile.NamedTemporaryFile(delete=False, suffix='.png') as tmp_file:
|
| 293 |
+
image.save(tmp_file.name)
|
| 294 |
+
temp_image_path = tmp_file.name
|
| 295 |
+
|
| 296 |
+
logger.info(f"Generating video from image with {model_id}")
|
| 297 |
+
logger.info(f"Prompt: {prompt[:100]}...")
|
| 298 |
+
|
| 299 |
+
# Get or create client
|
| 300 |
+
client = get_or_create_client(model_id)
|
| 301 |
+
if client is None:
|
| 302 |
+
os.unlink(temp_image_path)
|
| 303 |
+
return jsonify({'error': 'Failed to connect to video generation service'}), 503
|
| 304 |
+
|
| 305 |
+
# Generate video
|
| 306 |
+
try:
|
| 307 |
+
if model_id == 'stable-video-diffusion':
|
| 308 |
+
result = client.predict(
|
| 309 |
+
temp_image_path,
|
| 310 |
+
api_name=model_info['api_name']
|
| 311 |
+
)
|
| 312 |
+
elif model_id == 'animatediff':
|
| 313 |
+
result = client.predict(
|
| 314 |
+
temp_image_path,
|
| 315 |
+
prompt,
|
| 316 |
+
api_name=model_info['api_name']
|
| 317 |
+
)
|
| 318 |
+
else:
|
| 319 |
+
result = client.predict(
|
| 320 |
+
temp_image_path,
|
| 321 |
+
prompt,
|
| 322 |
+
api_name=model_info['api_name']
|
| 323 |
+
)
|
| 324 |
+
finally:
|
| 325 |
+
# Clean up temp file
|
| 326 |
+
if os.path.exists(temp_image_path):
|
| 327 |
+
os.unlink(temp_image_path)
|
| 328 |
+
|
| 329 |
+
# Extract video path
|
| 330 |
+
video_path = result[0] if isinstance(result, list) else result
|
| 331 |
+
|
| 332 |
+
if not video_path:
|
| 333 |
+
return jsonify({'error': 'Failed to generate video from image'}), 500
|
| 334 |
+
|
| 335 |
+
logger.info(f"Video generated from image successfully")
|
| 336 |
+
return jsonify({
|
| 337 |
+
'video_url': video_path,
|
| 338 |
+
'prompt': prompt,
|
| 339 |
+
'model': model_id,
|
| 340 |
+
'model_name': model_info['name'],
|
| 341 |
+
'timestamp': datetime.now().isoformat()
|
| 342 |
+
})
|
| 343 |
+
|
| 344 |
+
except Exception as e:
|
| 345 |
+
logger.error(f"Error in generate_video_from_image: {str(e)}", exc_info=True)
|
| 346 |
+
return jsonify({'error': f'An error occurred: {str(e)}'}), 500
|
| 347 |
+
|
| 348 |
+
@app.errorhandler(404)
|
| 349 |
+
def not_found(e):
|
| 350 |
+
"""Handle 404 errors"""
|
| 351 |
+
return jsonify({'error': 'Endpoint not found'}), 404
|
| 352 |
+
|
| 353 |
+
@app.errorhandler(405)
|
| 354 |
+
def method_not_allowed(e):
|
| 355 |
+
"""Handle 405 errors"""
|
| 356 |
+
return jsonify({'error': 'Method not allowed'}), 405
|
| 357 |
+
|
| 358 |
+
@app.errorhandler(500)
|
| 359 |
+
def internal_error(e):
|
| 360 |
+
"""Handle 500 errors"""
|
| 361 |
+
logger.error(f"Internal server error: {str(e)}")
|
| 362 |
+
return jsonify({'error': 'Internal server error'}), 500
|
| 363 |
+
|
| 364 |
+
if __name__ == '__main__':
|
| 365 |
+
logger.info(f"Starting Enhanced Flask server on port {FLASK_PORT} (debug={FLASK_DEBUG})")
|
| 366 |
+
logger.info(f"Available models: {', '.join(VIDEO_MODELS.keys())}")
|
| 367 |
+
logger.info(f"Default model: {DEFAULT_MODEL}")
|
| 368 |
+
app.run(host='0.0.0.0', port=FLASK_PORT, debug=FLASK_DEBUG)
|
backend_local.py
ADDED
|
@@ -0,0 +1,239 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
Local Video Generation Backend
|
| 3 |
+
Uses diffusers library to run models locally on your computer
|
| 4 |
+
Based on the VideoAI_Free_Colab.ipynb notebook
|
| 5 |
+
"""
|
| 6 |
+
|
| 7 |
+
from flask import Flask, request, jsonify, send_file
|
| 8 |
+
from flask_cors import CORS
|
| 9 |
+
import torch
|
| 10 |
+
from diffusers import CogVideoXPipeline
|
| 11 |
+
from diffusers.utils import export_to_video
|
| 12 |
+
import os
|
| 13 |
+
import logging
|
| 14 |
+
from datetime import datetime
|
| 15 |
+
import tempfile
|
| 16 |
+
import uuid
|
| 17 |
+
|
| 18 |
+
# Configure logging
|
| 19 |
+
logging.basicConfig(
|
| 20 |
+
level=logging.INFO,
|
| 21 |
+
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
|
| 22 |
+
handlers=[
|
| 23 |
+
logging.FileHandler('app_local.log'),
|
| 24 |
+
logging.StreamHandler()
|
| 25 |
+
]
|
| 26 |
+
)
|
| 27 |
+
logger = logging.getLogger(__name__)
|
| 28 |
+
|
| 29 |
+
app = Flask(__name__)
|
| 30 |
+
CORS(app)
|
| 31 |
+
|
| 32 |
+
# Configuration
|
| 33 |
+
FLASK_PORT = int(os.getenv('FLASK_PORT', 5000))
|
| 34 |
+
OUTPUT_DIR = "generated_videos"
|
| 35 |
+
os.makedirs(OUTPUT_DIR, exist_ok=True)
|
| 36 |
+
|
| 37 |
+
# Global pipeline variable
|
| 38 |
+
pipeline = None
|
| 39 |
+
device = "cuda" if torch.cuda.is_available() else "cpu"
|
| 40 |
+
|
| 41 |
+
def initialize_model():
|
| 42 |
+
"""Initialize the CogVideoX model"""
|
| 43 |
+
global pipeline
|
| 44 |
+
|
| 45 |
+
if pipeline is not None:
|
| 46 |
+
logger.info("Model already loaded")
|
| 47 |
+
return True
|
| 48 |
+
|
| 49 |
+
try:
|
| 50 |
+
logger.info("π€ Loading CogVideoX-2B model...")
|
| 51 |
+
logger.info("β³ This may take 2-5 minutes on first run...")
|
| 52 |
+
|
| 53 |
+
# Use CogVideoX-2B (smaller, faster)
|
| 54 |
+
pipeline = CogVideoXPipeline.from_pretrained(
|
| 55 |
+
"THUDM/CogVideoX-2b",
|
| 56 |
+
torch_dtype=torch.float16 if device == "cuda" else torch.float32,
|
| 57 |
+
use_safetensors=True
|
| 58 |
+
)
|
| 59 |
+
|
| 60 |
+
if device == "cuda":
|
| 61 |
+
pipeline.to("cuda")
|
| 62 |
+
logger.info("β
Model loaded on GPU!")
|
| 63 |
+
else:
|
| 64 |
+
logger.info("β οΈ Running on CPU (will be slower)")
|
| 65 |
+
logger.info("π‘ For faster generation, use a computer with NVIDIA GPU")
|
| 66 |
+
|
| 67 |
+
logger.info("π¬ Model ready to generate videos!")
|
| 68 |
+
return True
|
| 69 |
+
|
| 70 |
+
except Exception as e:
|
| 71 |
+
logger.error(f"Failed to load model: {str(e)}")
|
| 72 |
+
return False
|
| 73 |
+
|
| 74 |
+
@app.route('/health', methods=['GET'])
|
| 75 |
+
def health():
|
| 76 |
+
"""Health check endpoint"""
|
| 77 |
+
model_loaded = pipeline is not None
|
| 78 |
+
return jsonify({
|
| 79 |
+
'status': 'healthy',
|
| 80 |
+
'model_loaded': model_loaded,
|
| 81 |
+
'device': device,
|
| 82 |
+
'gpu_available': torch.cuda.is_available(),
|
| 83 |
+
'timestamp': datetime.now().isoformat()
|
| 84 |
+
})
|
| 85 |
+
|
| 86 |
+
@app.route('/models', methods=['GET'])
|
| 87 |
+
def list_models():
|
| 88 |
+
"""List available models"""
|
| 89 |
+
return jsonify({
|
| 90 |
+
'models': {
|
| 91 |
+
'cogvideox-2b-local': {
|
| 92 |
+
'name': 'CogVideoX-2B (Local)',
|
| 93 |
+
'description': 'Running locally on your computer',
|
| 94 |
+
'type': 'text-to-video'
|
| 95 |
+
}
|
| 96 |
+
},
|
| 97 |
+
'device': device,
|
| 98 |
+
'gpu_available': torch.cuda.is_available()
|
| 99 |
+
})
|
| 100 |
+
|
| 101 |
+
@app.route('/generate-video', methods=['POST'])
|
| 102 |
+
def generate_video():
|
| 103 |
+
"""Generate video from text prompt"""
|
| 104 |
+
try:
|
| 105 |
+
# Initialize model if not already loaded
|
| 106 |
+
if pipeline is None:
|
| 107 |
+
logger.info("Model not loaded, initializing...")
|
| 108 |
+
if not initialize_model():
|
| 109 |
+
return jsonify({
|
| 110 |
+
'error': 'Failed to load model. Check logs for details.'
|
| 111 |
+
}), 500
|
| 112 |
+
|
| 113 |
+
# Get request data
|
| 114 |
+
data = request.json
|
| 115 |
+
prompt = data.get('prompt', '').strip()
|
| 116 |
+
|
| 117 |
+
if not prompt:
|
| 118 |
+
return jsonify({'error': 'Prompt is required'}), 400
|
| 119 |
+
|
| 120 |
+
if len(prompt) < 3:
|
| 121 |
+
return jsonify({'error': 'Prompt must be at least 3 characters'}), 400
|
| 122 |
+
|
| 123 |
+
logger.info(f"π¨ Generating video for: {prompt[:100]}")
|
| 124 |
+
logger.info(f"β³ This will take 30-120 seconds depending on your hardware...")
|
| 125 |
+
|
| 126 |
+
# Generate video
|
| 127 |
+
num_frames = 49 # ~6 seconds at 8 fps
|
| 128 |
+
|
| 129 |
+
video_frames = pipeline(
|
| 130 |
+
prompt=prompt,
|
| 131 |
+
num_frames=num_frames,
|
| 132 |
+
guidance_scale=6.0,
|
| 133 |
+
num_inference_steps=50
|
| 134 |
+
).frames[0]
|
| 135 |
+
|
| 136 |
+
# Save video
|
| 137 |
+
video_id = str(uuid.uuid4())
|
| 138 |
+
output_path = os.path.join(OUTPUT_DIR, f"{video_id}.mp4")
|
| 139 |
+
export_to_video(video_frames, output_path, fps=8)
|
| 140 |
+
|
| 141 |
+
logger.info(f"β
Video generated successfully: {output_path}")
|
| 142 |
+
|
| 143 |
+
# Return video URL
|
| 144 |
+
video_url = f"/download/{video_id}.mp4"
|
| 145 |
+
|
| 146 |
+
return jsonify({
|
| 147 |
+
'video_url': video_url,
|
| 148 |
+
'prompt': prompt,
|
| 149 |
+
'model': 'cogvideox-2b-local',
|
| 150 |
+
'model_name': 'CogVideoX-2B (Local)',
|
| 151 |
+
'device': device,
|
| 152 |
+
'num_frames': num_frames,
|
| 153 |
+
'timestamp': datetime.now().isoformat()
|
| 154 |
+
})
|
| 155 |
+
|
| 156 |
+
except torch.cuda.OutOfMemoryError:
|
| 157 |
+
logger.error("GPU out of memory!")
|
| 158 |
+
return jsonify({
|
| 159 |
+
'error': 'GPU out of memory. Try closing other applications or use a shorter prompt.'
|
| 160 |
+
}), 500
|
| 161 |
+
|
| 162 |
+
except Exception as e:
|
| 163 |
+
logger.error(f"Error generating video: {str(e)}", exc_info=True)
|
| 164 |
+
return jsonify({
|
| 165 |
+
'error': f'Video generation failed: {str(e)}'
|
| 166 |
+
}), 500
|
| 167 |
+
|
| 168 |
+
@app.route('/download/<filename>', methods=['GET'])
|
| 169 |
+
def download_video(filename):
|
| 170 |
+
"""Download generated video"""
|
| 171 |
+
try:
|
| 172 |
+
file_path = os.path.join(OUTPUT_DIR, filename)
|
| 173 |
+
|
| 174 |
+
if not os.path.exists(file_path):
|
| 175 |
+
return jsonify({'error': 'Video not found'}), 404
|
| 176 |
+
|
| 177 |
+
return send_file(
|
| 178 |
+
file_path,
|
| 179 |
+
mimetype='video/mp4',
|
| 180 |
+
as_attachment=False,
|
| 181 |
+
download_name=filename
|
| 182 |
+
)
|
| 183 |
+
|
| 184 |
+
except Exception as e:
|
| 185 |
+
logger.error(f"Error serving video: {str(e)}")
|
| 186 |
+
return jsonify({'error': 'Failed to serve video'}), 500
|
| 187 |
+
|
| 188 |
+
@app.route('/initialize', methods=['POST'])
|
| 189 |
+
def initialize():
|
| 190 |
+
"""Manually initialize the model"""
|
| 191 |
+
if initialize_model():
|
| 192 |
+
return jsonify({
|
| 193 |
+
'status': 'success',
|
| 194 |
+
'message': 'Model loaded successfully',
|
| 195 |
+
'device': device
|
| 196 |
+
})
|
| 197 |
+
else:
|
| 198 |
+
return jsonify({
|
| 199 |
+
'status': 'error',
|
| 200 |
+
'message': 'Failed to load model'
|
| 201 |
+
}), 500
|
| 202 |
+
|
| 203 |
+
@app.errorhandler(404)
|
| 204 |
+
def not_found(e):
|
| 205 |
+
return jsonify({'error': 'Endpoint not found'}), 404
|
| 206 |
+
|
| 207 |
+
@app.errorhandler(500)
|
| 208 |
+
def internal_error(e):
|
| 209 |
+
logger.error(f"Internal server error: {str(e)}")
|
| 210 |
+
return jsonify({'error': 'Internal server error'}), 500
|
| 211 |
+
|
| 212 |
+
if __name__ == '__main__':
|
| 213 |
+
logger.info("=" * 60)
|
| 214 |
+
logger.info("π Starting Local Video Generation Backend")
|
| 215 |
+
logger.info("=" * 60)
|
| 216 |
+
logger.info(f"Device: {device}")
|
| 217 |
+
logger.info(f"GPU Available: {torch.cuda.is_available()}")
|
| 218 |
+
|
| 219 |
+
if torch.cuda.is_available():
|
| 220 |
+
logger.info(f"GPU: {torch.cuda.get_device_name(0)}")
|
| 221 |
+
logger.info(f"GPU Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.2f} GB")
|
| 222 |
+
else:
|
| 223 |
+
logger.info("β οΈ No GPU detected - will run on CPU (slower)")
|
| 224 |
+
logger.info("π‘ For faster generation, use a computer with NVIDIA GPU")
|
| 225 |
+
|
| 226 |
+
logger.info("=" * 60)
|
| 227 |
+
logger.info("π Model will be downloaded on first request (~5GB)")
|
| 228 |
+
logger.info("π First generation will take longer (model loading)")
|
| 229 |
+
logger.info("π Subsequent generations will be faster")
|
| 230 |
+
logger.info("=" * 60)
|
| 231 |
+
|
| 232 |
+
# Optionally pre-load model (uncomment to load on startup)
|
| 233 |
+
# logger.info("Pre-loading model...")
|
| 234 |
+
# initialize_model()
|
| 235 |
+
|
| 236 |
+
logger.info(f"π Starting server on http://localhost:{FLASK_PORT}")
|
| 237 |
+
logger.info("=" * 60)
|
| 238 |
+
|
| 239 |
+
app.run(host='0.0.0.0', port=FLASK_PORT, debug=False)
|
backend_replicate.py
ADDED
|
@@ -0,0 +1,136 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
from flask import Flask, request, jsonify
|
| 2 |
+
from flask_cors import CORS
|
| 3 |
+
import replicate
|
| 4 |
+
import os
|
| 5 |
+
from dotenv import load_dotenv
|
| 6 |
+
from datetime import datetime
|
| 7 |
+
import logging
|
| 8 |
+
|
| 9 |
+
load_dotenv()
|
| 10 |
+
|
| 11 |
+
logging.basicConfig(
|
| 12 |
+
level=logging.INFO,
|
| 13 |
+
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
| 14 |
+
)
|
| 15 |
+
logger = logging.getLogger(__name__)
|
| 16 |
+
|
| 17 |
+
app = Flask(__name__)
|
| 18 |
+
CORS(app)
|
| 19 |
+
|
| 20 |
+
# Set your Replicate API token
|
| 21 |
+
REPLICATE_API_TOKEN = os.getenv('REPLICATE_API_TOKEN')
|
| 22 |
+
if REPLICATE_API_TOKEN:
|
| 23 |
+
os.environ['REPLICATE_API_TOKEN'] = REPLICATE_API_TOKEN
|
| 24 |
+
|
| 25 |
+
@app.route('/health', methods=['GET'])
|
| 26 |
+
def health():
|
| 27 |
+
has_token = bool(REPLICATE_API_TOKEN)
|
| 28 |
+
return jsonify({
|
| 29 |
+
'status': 'healthy',
|
| 30 |
+
'service': 'Replicate API',
|
| 31 |
+
'token_configured': has_token
|
| 32 |
+
})
|
| 33 |
+
|
| 34 |
+
@app.route('/models', methods=['GET'])
|
| 35 |
+
def list_models():
|
| 36 |
+
"""Return available models"""
|
| 37 |
+
return jsonify({
|
| 38 |
+
'models': {
|
| 39 |
+
'hailuo': {
|
| 40 |
+
'name': 'Hailuo Video-01 (MiniMax) - 6s',
|
| 41 |
+
'description': 'Real Hailuo model - High quality, 6 seconds',
|
| 42 |
+
'type': 'text-to-video',
|
| 43 |
+
'duration': '6s'
|
| 44 |
+
},
|
| 45 |
+
'cogvideox': {
|
| 46 |
+
'name': 'CogVideoX-5B - 6s',
|
| 47 |
+
'description': 'High quality text-to-video, 6 seconds',
|
| 48 |
+
'type': 'text-to-video',
|
| 49 |
+
'duration': '6s'
|
| 50 |
+
},
|
| 51 |
+
'hunyuan': {
|
| 52 |
+
'name': 'HunyuanVideo (Tencent) - 5s+',
|
| 53 |
+
'description': 'State-of-the-art by Tencent, 5+ seconds',
|
| 54 |
+
'type': 'text-to-video',
|
| 55 |
+
'duration': '5s+'
|
| 56 |
+
},
|
| 57 |
+
'luma': {
|
| 58 |
+
'name': 'Luma Dream Machine - 5s',
|
| 59 |
+
'description': 'Cinematic quality, 5 seconds',
|
| 60 |
+
'type': 'text-to-video',
|
| 61 |
+
'duration': '5s'
|
| 62 |
+
},
|
| 63 |
+
'runway': {
|
| 64 |
+
'name': 'Runway Gen-3 - 10s β',
|
| 65 |
+
'description': 'Professional quality, up to 10 seconds (longer!)',
|
| 66 |
+
'type': 'text-to-video',
|
| 67 |
+
'duration': '10s'
|
| 68 |
+
}
|
| 69 |
+
}
|
| 70 |
+
})
|
| 71 |
+
|
| 72 |
+
@app.route('/generate-video', methods=['POST'])
|
| 73 |
+
def generate_video():
|
| 74 |
+
try:
|
| 75 |
+
if not REPLICATE_API_TOKEN:
|
| 76 |
+
return jsonify({
|
| 77 |
+
'error': 'Replicate API token not configured. Add REPLICATE_API_TOKEN to .env file'
|
| 78 |
+
}), 500
|
| 79 |
+
|
| 80 |
+
data = request.json
|
| 81 |
+
prompt = data.get('prompt', '')
|
| 82 |
+
model_id = data.get('model', 'hailuo')
|
| 83 |
+
|
| 84 |
+
if not prompt:
|
| 85 |
+
return jsonify({'error': 'Prompt is required'}), 400
|
| 86 |
+
|
| 87 |
+
logger.info(f"Generating video with {model_id}: {prompt[:100]}")
|
| 88 |
+
|
| 89 |
+
# Select model
|
| 90 |
+
model_map = {
|
| 91 |
+
'hailuo': "minimax/video-01", # Real Hailuo model! (6s)
|
| 92 |
+
'cogvideox': "lucataco/cogvideox-5b", # CogVideoX-5B (6s)
|
| 93 |
+
'hunyuan': "tencent/hunyuan-video", # HunyuanVideo (5s+)
|
| 94 |
+
'luma': "fofr/dream-machine", # Luma Dream Machine (5s)
|
| 95 |
+
'runway': "stability-ai/stable-video-diffusion-img2vid-xt", # Runway (10s)
|
| 96 |
+
}
|
| 97 |
+
model_name = model_map.get(model_id, model_map['hailuo'])
|
| 98 |
+
|
| 99 |
+
# Generate video
|
| 100 |
+
output = replicate.run(
|
| 101 |
+
model_name,
|
| 102 |
+
input={"prompt": prompt}
|
| 103 |
+
)
|
| 104 |
+
|
| 105 |
+
# Output is a video URL
|
| 106 |
+
video_url = output if isinstance(output, str) else (output[0] if isinstance(output, list) else str(output))
|
| 107 |
+
|
| 108 |
+
logger.info(f"Video generated successfully: {video_url}")
|
| 109 |
+
|
| 110 |
+
return jsonify({
|
| 111 |
+
'video_url': video_url,
|
| 112 |
+
'prompt': prompt,
|
| 113 |
+
'model': model_id,
|
| 114 |
+
'model_name': 'Hailuo Video-01 (MiniMax)' if model_id == 'hailuo' else 'CogVideoX-5B',
|
| 115 |
+
'timestamp': datetime.now().isoformat(),
|
| 116 |
+
'service': 'Replicate API'
|
| 117 |
+
})
|
| 118 |
+
|
| 119 |
+
except Exception as e:
|
| 120 |
+
logger.error(f"Error: {str(e)}")
|
| 121 |
+
return jsonify({'error': f'Video generation failed: {str(e)}'}), 500
|
| 122 |
+
|
| 123 |
+
if __name__ == '__main__':
|
| 124 |
+
if not REPLICATE_API_TOKEN:
|
| 125 |
+
print("=" * 60)
|
| 126 |
+
print("β οΈ WARNING: REPLICATE_API_TOKEN not set!")
|
| 127 |
+
print("=" * 60)
|
| 128 |
+
print("1. Sign up at: https://replicate.com")
|
| 129 |
+
print("2. Get token from: https://replicate.com/account/api-tokens")
|
| 130 |
+
print("3. Add to .env file: REPLICATE_API_TOKEN=your_token_here")
|
| 131 |
+
print("=" * 60)
|
| 132 |
+
else:
|
| 133 |
+
print("β
Replicate API token configured!")
|
| 134 |
+
|
| 135 |
+
logger.info("Starting Replicate backend on port 5000")
|
| 136 |
+
app.run(host='0.0.0.0', port=5000, debug=False)
|
backend_simple.py
ADDED
|
@@ -0,0 +1,108 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
Simple backend with multiple fallback options
|
| 3 |
+
Uses less congested models and provides demo mode
|
| 4 |
+
"""
|
| 5 |
+
|
| 6 |
+
from flask import Flask, request, jsonify
|
| 7 |
+
from flask_cors import CORS
|
| 8 |
+
import os
|
| 9 |
+
import logging
|
| 10 |
+
from datetime import datetime
|
| 11 |
+
|
| 12 |
+
logging.basicConfig(
|
| 13 |
+
level=logging.INFO,
|
| 14 |
+
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
| 15 |
+
)
|
| 16 |
+
logger = logging.getLogger(__name__)
|
| 17 |
+
|
| 18 |
+
app = Flask(__name__)
|
| 19 |
+
CORS(app)
|
| 20 |
+
|
| 21 |
+
# Sample videos for demo mode
|
| 22 |
+
DEMO_VIDEOS = [
|
| 23 |
+
"https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4",
|
| 24 |
+
"https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4",
|
| 25 |
+
"https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4",
|
| 26 |
+
"https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerEscapes.mp4",
|
| 27 |
+
]
|
| 28 |
+
|
| 29 |
+
@app.route('/health', methods=['GET'])
|
| 30 |
+
def health():
|
| 31 |
+
return jsonify({
|
| 32 |
+
'status': 'healthy',
|
| 33 |
+
'mode': 'demo',
|
| 34 |
+
'message': 'Demo mode - returns sample videos for testing'
|
| 35 |
+
})
|
| 36 |
+
|
| 37 |
+
@app.route('/models', methods=['GET'])
|
| 38 |
+
def list_models():
|
| 39 |
+
return jsonify({
|
| 40 |
+
'models': {
|
| 41 |
+
'demo': {
|
| 42 |
+
'name': 'Demo Mode (Instant)',
|
| 43 |
+
'description': 'Returns sample videos instantly for testing UI',
|
| 44 |
+
'type': 'text-to-video'
|
| 45 |
+
},
|
| 46 |
+
'local': {
|
| 47 |
+
'name': 'Local Generation (Recommended)',
|
| 48 |
+
'description': 'Run CogVideoX locally on your computer',
|
| 49 |
+
'type': 'text-to-video'
|
| 50 |
+
}
|
| 51 |
+
}
|
| 52 |
+
})
|
| 53 |
+
|
| 54 |
+
@app.route('/generate-video', methods=['POST'])
|
| 55 |
+
def generate_video():
|
| 56 |
+
try:
|
| 57 |
+
data = request.json
|
| 58 |
+
prompt = data.get('prompt', '').strip()
|
| 59 |
+
|
| 60 |
+
if not prompt:
|
| 61 |
+
return jsonify({'error': 'Prompt is required'}), 400
|
| 62 |
+
|
| 63 |
+
logger.info(f"Demo mode: Returning sample video for prompt: {prompt[:100]}")
|
| 64 |
+
|
| 65 |
+
# Return a sample video
|
| 66 |
+
import random
|
| 67 |
+
video_url = random.choice(DEMO_VIDEOS)
|
| 68 |
+
|
| 69 |
+
return jsonify({
|
| 70 |
+
'video_url': video_url,
|
| 71 |
+
'prompt': prompt,
|
| 72 |
+
'model': 'demo',
|
| 73 |
+
'model_name': 'Demo Mode (Sample Video)',
|
| 74 |
+
'timestamp': datetime.now().isoformat(),
|
| 75 |
+
'note': 'β οΈ This is a demo video. All online AI services are currently overloaded. Recommendation: Use local generation (backend_local.py) for real AI videos.'
|
| 76 |
+
})
|
| 77 |
+
|
| 78 |
+
except Exception as e:
|
| 79 |
+
logger.error(f"Error: {str(e)}")
|
| 80 |
+
return jsonify({'error': str(e)}), 500
|
| 81 |
+
|
| 82 |
+
if __name__ == '__main__':
|
| 83 |
+
print("=" * 70)
|
| 84 |
+
print("π¬ DEMO MODE - Simple Backend")
|
| 85 |
+
print("=" * 70)
|
| 86 |
+
print("")
|
| 87 |
+
print("β οΈ IMPORTANT: All online AI video services are currently overloaded!")
|
| 88 |
+
print("")
|
| 89 |
+
print("This demo backend returns sample videos to test the UI.")
|
| 90 |
+
print("")
|
| 91 |
+
print("For REAL AI video generation, you have 2 options:")
|
| 92 |
+
print("")
|
| 93 |
+
print("1. π₯οΈ LOCAL GENERATION (Recommended):")
|
| 94 |
+
print(" - Run: python backend_local.py")
|
| 95 |
+
print(" - Open: index_local.html")
|
| 96 |
+
print(" - Free, private, works offline")
|
| 97 |
+
print(" - Takes 30-120s (GPU) or 5-10min (CPU)")
|
| 98 |
+
print("")
|
| 99 |
+
print("2. π° PAID API (Replicate Pro):")
|
| 100 |
+
print(" - Upgrade to Replicate Pro account")
|
| 101 |
+
print(" - Run: python backend_replicate.py")
|
| 102 |
+
print(" - Fast but costs ~$0.05-0.10 per video")
|
| 103 |
+
print("")
|
| 104 |
+
print("=" * 70)
|
| 105 |
+
print("Starting demo server on http://localhost:5000")
|
| 106 |
+
print("=" * 70)
|
| 107 |
+
|
| 108 |
+
app.run(host='0.0.0.0', port=5000, debug=False)
|
index.html
ADDED
|
@@ -0,0 +1,378 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
<!DOCTYPE html>
|
| 2 |
+
<html lang="en">
|
| 3 |
+
<head>
|
| 4 |
+
<meta charset="UTF-8">
|
| 5 |
+
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
| 6 |
+
<title>AI Video Generator - Free Hailuo Clone</title>
|
| 7 |
+
<style>
|
| 8 |
+
* {
|
| 9 |
+
margin: 0;
|
| 10 |
+
padding: 0;
|
| 11 |
+
box-sizing: border-box;
|
| 12 |
+
}
|
| 13 |
+
|
| 14 |
+
body {
|
| 15 |
+
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
|
| 16 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 17 |
+
min-height: 100vh;
|
| 18 |
+
display: flex;
|
| 19 |
+
justify-content: center;
|
| 20 |
+
align-items: center;
|
| 21 |
+
padding: 20px;
|
| 22 |
+
}
|
| 23 |
+
|
| 24 |
+
.container {
|
| 25 |
+
background: white;
|
| 26 |
+
border-radius: 20px;
|
| 27 |
+
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
|
| 28 |
+
padding: 40px;
|
| 29 |
+
max-width: 600px;
|
| 30 |
+
width: 100%;
|
| 31 |
+
}
|
| 32 |
+
|
| 33 |
+
h1 {
|
| 34 |
+
color: #333;
|
| 35 |
+
margin-bottom: 10px;
|
| 36 |
+
font-size: 28px;
|
| 37 |
+
text-align: center;
|
| 38 |
+
}
|
| 39 |
+
|
| 40 |
+
.subtitle {
|
| 41 |
+
color: #666;
|
| 42 |
+
text-align: center;
|
| 43 |
+
margin-bottom: 30px;
|
| 44 |
+
font-size: 14px;
|
| 45 |
+
}
|
| 46 |
+
|
| 47 |
+
.input-group {
|
| 48 |
+
margin-bottom: 20px;
|
| 49 |
+
}
|
| 50 |
+
|
| 51 |
+
label {
|
| 52 |
+
display: block;
|
| 53 |
+
color: #555;
|
| 54 |
+
margin-bottom: 8px;
|
| 55 |
+
font-weight: 500;
|
| 56 |
+
}
|
| 57 |
+
|
| 58 |
+
#prompt {
|
| 59 |
+
width: 100%;
|
| 60 |
+
padding: 12px 16px;
|
| 61 |
+
border: 2px solid #e0e0e0;
|
| 62 |
+
border-radius: 10px;
|
| 63 |
+
font-size: 16px;
|
| 64 |
+
transition: border-color 0.3s;
|
| 65 |
+
font-family: inherit;
|
| 66 |
+
}
|
| 67 |
+
|
| 68 |
+
#prompt:focus {
|
| 69 |
+
outline: none;
|
| 70 |
+
border-color: #667eea;
|
| 71 |
+
}
|
| 72 |
+
|
| 73 |
+
.char-counter {
|
| 74 |
+
text-align: right;
|
| 75 |
+
font-size: 12px;
|
| 76 |
+
color: #999;
|
| 77 |
+
margin-top: 5px;
|
| 78 |
+
}
|
| 79 |
+
|
| 80 |
+
button {
|
| 81 |
+
width: 100%;
|
| 82 |
+
padding: 14px;
|
| 83 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 84 |
+
color: white;
|
| 85 |
+
border: none;
|
| 86 |
+
border-radius: 10px;
|
| 87 |
+
font-size: 16px;
|
| 88 |
+
font-weight: 600;
|
| 89 |
+
cursor: pointer;
|
| 90 |
+
transition: transform 0.2s, box-shadow 0.2s;
|
| 91 |
+
}
|
| 92 |
+
|
| 93 |
+
button:hover:not(:disabled) {
|
| 94 |
+
transform: translateY(-2px);
|
| 95 |
+
box-shadow: 0 5px 20px rgba(102, 126, 234, 0.4);
|
| 96 |
+
}
|
| 97 |
+
|
| 98 |
+
button:disabled {
|
| 99 |
+
opacity: 0.6;
|
| 100 |
+
cursor: not-allowed;
|
| 101 |
+
}
|
| 102 |
+
|
| 103 |
+
.status {
|
| 104 |
+
margin-top: 20px;
|
| 105 |
+
padding: 12px;
|
| 106 |
+
border-radius: 8px;
|
| 107 |
+
text-align: center;
|
| 108 |
+
font-size: 14px;
|
| 109 |
+
display: none;
|
| 110 |
+
}
|
| 111 |
+
|
| 112 |
+
.status.info {
|
| 113 |
+
background: #e3f2fd;
|
| 114 |
+
color: #1976d2;
|
| 115 |
+
display: block;
|
| 116 |
+
}
|
| 117 |
+
|
| 118 |
+
.status.success {
|
| 119 |
+
background: #e8f5e9;
|
| 120 |
+
color: #388e3c;
|
| 121 |
+
display: block;
|
| 122 |
+
}
|
| 123 |
+
|
| 124 |
+
.status.error {
|
| 125 |
+
background: #ffebee;
|
| 126 |
+
color: #d32f2f;
|
| 127 |
+
display: block;
|
| 128 |
+
}
|
| 129 |
+
|
| 130 |
+
.loader {
|
| 131 |
+
border: 3px solid #f3f3f3;
|
| 132 |
+
border-top: 3px solid #667eea;
|
| 133 |
+
border-radius: 50%;
|
| 134 |
+
width: 30px;
|
| 135 |
+
height: 30px;
|
| 136 |
+
animation: spin 1s linear infinite;
|
| 137 |
+
margin: 20px auto;
|
| 138 |
+
display: none;
|
| 139 |
+
}
|
| 140 |
+
|
| 141 |
+
@keyframes spin {
|
| 142 |
+
0% { transform: rotate(0deg); }
|
| 143 |
+
100% { transform: rotate(360deg); }
|
| 144 |
+
}
|
| 145 |
+
|
| 146 |
+
.video-container {
|
| 147 |
+
margin-top: 20px;
|
| 148 |
+
display: none;
|
| 149 |
+
}
|
| 150 |
+
|
| 151 |
+
video {
|
| 152 |
+
width: 100%;
|
| 153 |
+
border-radius: 10px;
|
| 154 |
+
box-shadow: 0 5px 15px rgba(0, 0, 0, 0.2);
|
| 155 |
+
}
|
| 156 |
+
|
| 157 |
+
.download-btn {
|
| 158 |
+
margin-top: 10px;
|
| 159 |
+
background: linear-gradient(135deg, #11998e 0%, #38ef7d 100%);
|
| 160 |
+
display: none;
|
| 161 |
+
}
|
| 162 |
+
|
| 163 |
+
.example-prompts {
|
| 164 |
+
margin-top: 20px;
|
| 165 |
+
padding: 15px;
|
| 166 |
+
background: #f5f5f5;
|
| 167 |
+
border-radius: 10px;
|
| 168 |
+
}
|
| 169 |
+
|
| 170 |
+
.example-prompts h3 {
|
| 171 |
+
font-size: 14px;
|
| 172 |
+
color: #666;
|
| 173 |
+
margin-bottom: 10px;
|
| 174 |
+
}
|
| 175 |
+
|
| 176 |
+
.example-prompt {
|
| 177 |
+
display: inline-block;
|
| 178 |
+
padding: 6px 12px;
|
| 179 |
+
margin: 4px;
|
| 180 |
+
background: white;
|
| 181 |
+
border: 1px solid #ddd;
|
| 182 |
+
border-radius: 20px;
|
| 183 |
+
font-size: 12px;
|
| 184 |
+
cursor: pointer;
|
| 185 |
+
transition: all 0.2s;
|
| 186 |
+
}
|
| 187 |
+
|
| 188 |
+
.example-prompt:hover {
|
| 189 |
+
background: #667eea;
|
| 190 |
+
color: white;
|
| 191 |
+
border-color: #667eea;
|
| 192 |
+
}
|
| 193 |
+
</style>
|
| 194 |
+
</head>
|
| 195 |
+
<body>
|
| 196 |
+
<div class="container">
|
| 197 |
+
<h1>π¬ AI Video Generator</h1>
|
| 198 |
+
<p class="subtitle">Create amazing videos from text using AI</p>
|
| 199 |
+
|
| 200 |
+
<div class="input-group">
|
| 201 |
+
<label for="prompt">Enter your prompt:</label>
|
| 202 |
+
<textarea
|
| 203 |
+
id="prompt"
|
| 204 |
+
rows="3"
|
| 205 |
+
placeholder="Describe the video you want to create (e.g., A golden retriever running through a field of flowers at sunset)"
|
| 206 |
+
maxlength="500"
|
| 207 |
+
></textarea>
|
| 208 |
+
<div class="char-counter">
|
| 209 |
+
<span id="char-count">0</span>/500 characters
|
| 210 |
+
</div>
|
| 211 |
+
</div>
|
| 212 |
+
|
| 213 |
+
<button id="generate-btn" onclick="generateVideo()">
|
| 214 |
+
Generate Video
|
| 215 |
+
</button>
|
| 216 |
+
|
| 217 |
+
<div class="loader" id="loader"></div>
|
| 218 |
+
<div class="status" id="status"></div>
|
| 219 |
+
|
| 220 |
+
<div class="video-container" id="video-container">
|
| 221 |
+
<video id="video-output" controls></video>
|
| 222 |
+
<button class="download-btn" id="download-btn" onclick="downloadVideo()">
|
| 223 |
+
Download Video
|
| 224 |
+
</button>
|
| 225 |
+
</div>
|
| 226 |
+
|
| 227 |
+
<div class="example-prompts">
|
| 228 |
+
<h3>π‘ Try these examples:</h3>
|
| 229 |
+
<span class="example-prompt" onclick="setPrompt('A dog running in a park')">π Dog in park</span>
|
| 230 |
+
<span class="example-prompt" onclick="setPrompt('Ocean waves crashing on a beach at sunset')">π Ocean sunset</span>
|
| 231 |
+
<span class="example-prompt" onclick="setPrompt('A bird flying through clouds')">π¦
Bird flying</span>
|
| 232 |
+
<span class="example-prompt" onclick="setPrompt('City street with cars at night')">π City night</span>
|
| 233 |
+
</div>
|
| 234 |
+
</div>
|
| 235 |
+
|
| 236 |
+
<script>
|
| 237 |
+
const promptInput = document.getElementById('prompt');
|
| 238 |
+
const charCount = document.getElementById('char-count');
|
| 239 |
+
const generateBtn = document.getElementById('generate-btn');
|
| 240 |
+
const loader = document.getElementById('loader');
|
| 241 |
+
const status = document.getElementById('status');
|
| 242 |
+
const videoContainer = document.getElementById('video-container');
|
| 243 |
+
const videoOutput = document.getElementById('video-output');
|
| 244 |
+
const downloadBtn = document.getElementById('download-btn');
|
| 245 |
+
|
| 246 |
+
let currentVideoUrl = null;
|
| 247 |
+
|
| 248 |
+
// Character counter
|
| 249 |
+
promptInput.addEventListener('input', () => {
|
| 250 |
+
const length = promptInput.value.length;
|
| 251 |
+
charCount.textContent = length;
|
| 252 |
+
|
| 253 |
+
if (length > 450) {
|
| 254 |
+
charCount.style.color = '#d32f2f';
|
| 255 |
+
} else {
|
| 256 |
+
charCount.style.color = '#999';
|
| 257 |
+
}
|
| 258 |
+
});
|
| 259 |
+
|
| 260 |
+
// Enable Enter key to submit (Ctrl+Enter for textarea)
|
| 261 |
+
promptInput.addEventListener('keydown', (e) => {
|
| 262 |
+
if (e.ctrlKey && e.key === 'Enter') {
|
| 263 |
+
generateVideo();
|
| 264 |
+
}
|
| 265 |
+
});
|
| 266 |
+
|
| 267 |
+
function setPrompt(text) {
|
| 268 |
+
promptInput.value = text;
|
| 269 |
+
promptInput.dispatchEvent(new Event('input'));
|
| 270 |
+
}
|
| 271 |
+
|
| 272 |
+
function showStatus(message, type) {
|
| 273 |
+
status.textContent = message;
|
| 274 |
+
status.className = 'status ' + type;
|
| 275 |
+
}
|
| 276 |
+
|
| 277 |
+
function hideStatus() {
|
| 278 |
+
status.style.display = 'none';
|
| 279 |
+
}
|
| 280 |
+
|
| 281 |
+
async function generateVideo() {
|
| 282 |
+
const prompt = promptInput.value.trim();
|
| 283 |
+
|
| 284 |
+
// Validation
|
| 285 |
+
if (!prompt) {
|
| 286 |
+
showStatus('Please enter a prompt', 'error');
|
| 287 |
+
return;
|
| 288 |
+
}
|
| 289 |
+
|
| 290 |
+
if (prompt.length < 3) {
|
| 291 |
+
showStatus('Prompt must be at least 3 characters long', 'error');
|
| 292 |
+
return;
|
| 293 |
+
}
|
| 294 |
+
|
| 295 |
+
// UI updates
|
| 296 |
+
generateBtn.disabled = true;
|
| 297 |
+
generateBtn.textContent = 'Generating...';
|
| 298 |
+
loader.style.display = 'block';
|
| 299 |
+
videoContainer.style.display = 'none';
|
| 300 |
+
downloadBtn.style.display = 'none';
|
| 301 |
+
showStatus('π¨ Creating your video... This may take 10-60 seconds', 'info');
|
| 302 |
+
|
| 303 |
+
try {
|
| 304 |
+
const response = await fetch('http://localhost:5000/generate-video', {
|
| 305 |
+
method: 'POST',
|
| 306 |
+
headers: { 'Content-Type': 'application/json' },
|
| 307 |
+
body: JSON.stringify({ prompt })
|
| 308 |
+
});
|
| 309 |
+
|
| 310 |
+
const data = await response.json();
|
| 311 |
+
|
| 312 |
+
if (!response.ok || data.error) {
|
| 313 |
+
throw new Error(data.error || 'Failed to generate video');
|
| 314 |
+
}
|
| 315 |
+
|
| 316 |
+
// Success
|
| 317 |
+
currentVideoUrl = data.video_url;
|
| 318 |
+
videoOutput.src = currentVideoUrl;
|
| 319 |
+
videoContainer.style.display = 'block';
|
| 320 |
+
downloadBtn.style.display = 'block';
|
| 321 |
+
showStatus('β
Video generated successfully!', 'success');
|
| 322 |
+
|
| 323 |
+
// Auto-play video
|
| 324 |
+
videoOutput.play().catch(() => {
|
| 325 |
+
// Autoplay might be blocked by browser
|
| 326 |
+
});
|
| 327 |
+
|
| 328 |
+
} catch (error) {
|
| 329 |
+
console.error('Error:', error);
|
| 330 |
+
showStatus('β Error: ' + error.message, 'error');
|
| 331 |
+
} finally {
|
| 332 |
+
generateBtn.disabled = false;
|
| 333 |
+
generateBtn.textContent = 'Generate Video';
|
| 334 |
+
loader.style.display = 'none';
|
| 335 |
+
}
|
| 336 |
+
}
|
| 337 |
+
|
| 338 |
+
async function downloadVideo() {
|
| 339 |
+
if (!currentVideoUrl) return;
|
| 340 |
+
|
| 341 |
+
try {
|
| 342 |
+
showStatus('π₯ Preparing download...', 'info');
|
| 343 |
+
|
| 344 |
+
const response = await fetch(currentVideoUrl);
|
| 345 |
+
const blob = await response.blob();
|
| 346 |
+
const url = window.URL.createObjectURL(blob);
|
| 347 |
+
const a = document.createElement('a');
|
| 348 |
+
a.href = url;
|
| 349 |
+
a.download = `ai-video-${Date.now()}.mp4`;
|
| 350 |
+
document.body.appendChild(a);
|
| 351 |
+
a.click();
|
| 352 |
+
window.URL.revokeObjectURL(url);
|
| 353 |
+
document.body.removeChild(a);
|
| 354 |
+
|
| 355 |
+
showStatus('β
Download started!', 'success');
|
| 356 |
+
} catch (error) {
|
| 357 |
+
console.error('Download error:', error);
|
| 358 |
+
showStatus('β Failed to download video', 'error');
|
| 359 |
+
}
|
| 360 |
+
}
|
| 361 |
+
|
| 362 |
+
// Check server health on load
|
| 363 |
+
window.addEventListener('load', async () => {
|
| 364 |
+
try {
|
| 365 |
+
const response = await fetch('http://localhost:5000/health');
|
| 366 |
+
const data = await response.json();
|
| 367 |
+
if (data.status === 'healthy' && data.client_initialized) {
|
| 368 |
+
console.log('β
Server is healthy and ready');
|
| 369 |
+
} else {
|
| 370 |
+
showStatus('β οΈ Server may not be fully initialized', 'error');
|
| 371 |
+
}
|
| 372 |
+
} catch (error) {
|
| 373 |
+
showStatus('β οΈ Cannot connect to server. Make sure backend is running on port 5000', 'error');
|
| 374 |
+
}
|
| 375 |
+
});
|
| 376 |
+
</script>
|
| 377 |
+
</body>
|
| 378 |
+
</html>
|
index_demo.html
ADDED
|
@@ -0,0 +1,453 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
<!DOCTYPE html>
|
| 2 |
+
<html lang="en">
|
| 3 |
+
<head>
|
| 4 |
+
<meta charset="UTF-8">
|
| 5 |
+
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
| 6 |
+
<title>AI Video Generator Pro - Demo Mode</title>
|
| 7 |
+
<style>
|
| 8 |
+
* {
|
| 9 |
+
margin: 0;
|
| 10 |
+
padding: 0;
|
| 11 |
+
box-sizing: border-box;
|
| 12 |
+
}
|
| 13 |
+
|
| 14 |
+
body {
|
| 15 |
+
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
|
| 16 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 17 |
+
min-height: 100vh;
|
| 18 |
+
padding: 20px;
|
| 19 |
+
}
|
| 20 |
+
|
| 21 |
+
.container {
|
| 22 |
+
max-width: 1200px;
|
| 23 |
+
margin: 0 auto;
|
| 24 |
+
background: white;
|
| 25 |
+
border-radius: 20px;
|
| 26 |
+
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
|
| 27 |
+
overflow: hidden;
|
| 28 |
+
}
|
| 29 |
+
|
| 30 |
+
.header {
|
| 31 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 32 |
+
color: white;
|
| 33 |
+
padding: 30px 40px;
|
| 34 |
+
text-align: center;
|
| 35 |
+
}
|
| 36 |
+
|
| 37 |
+
.header h1 {
|
| 38 |
+
font-size: 32px;
|
| 39 |
+
margin-bottom: 10px;
|
| 40 |
+
}
|
| 41 |
+
|
| 42 |
+
.header p {
|
| 43 |
+
font-size: 16px;
|
| 44 |
+
opacity: 0.9;
|
| 45 |
+
}
|
| 46 |
+
|
| 47 |
+
.demo-notice {
|
| 48 |
+
background: #fff3cd;
|
| 49 |
+
color: #856404;
|
| 50 |
+
padding: 15px;
|
| 51 |
+
text-align: center;
|
| 52 |
+
border-bottom: 2px solid #ffc107;
|
| 53 |
+
font-weight: 500;
|
| 54 |
+
}
|
| 55 |
+
|
| 56 |
+
.main-content {
|
| 57 |
+
display: grid;
|
| 58 |
+
grid-template-columns: 1fr 1fr;
|
| 59 |
+
gap: 30px;
|
| 60 |
+
padding: 40px;
|
| 61 |
+
}
|
| 62 |
+
|
| 63 |
+
.section {
|
| 64 |
+
background: #f8f9fa;
|
| 65 |
+
padding: 20px;
|
| 66 |
+
border-radius: 12px;
|
| 67 |
+
}
|
| 68 |
+
|
| 69 |
+
.section h3 {
|
| 70 |
+
color: #333;
|
| 71 |
+
margin-bottom: 15px;
|
| 72 |
+
font-size: 18px;
|
| 73 |
+
}
|
| 74 |
+
|
| 75 |
+
label {
|
| 76 |
+
display: block;
|
| 77 |
+
color: #555;
|
| 78 |
+
margin-bottom: 8px;
|
| 79 |
+
font-weight: 500;
|
| 80 |
+
font-size: 14px;
|
| 81 |
+
}
|
| 82 |
+
|
| 83 |
+
textarea, select {
|
| 84 |
+
width: 100%;
|
| 85 |
+
padding: 12px;
|
| 86 |
+
border: 2px solid #e0e0e0;
|
| 87 |
+
border-radius: 8px;
|
| 88 |
+
font-size: 14px;
|
| 89 |
+
font-family: inherit;
|
| 90 |
+
transition: border-color 0.3s;
|
| 91 |
+
}
|
| 92 |
+
|
| 93 |
+
textarea {
|
| 94 |
+
resize: vertical;
|
| 95 |
+
min-height: 100px;
|
| 96 |
+
}
|
| 97 |
+
|
| 98 |
+
textarea:focus, select:focus {
|
| 99 |
+
outline: none;
|
| 100 |
+
border-color: #667eea;
|
| 101 |
+
}
|
| 102 |
+
|
| 103 |
+
.options-grid {
|
| 104 |
+
display: grid;
|
| 105 |
+
grid-template-columns: 1fr 1fr;
|
| 106 |
+
gap: 15px;
|
| 107 |
+
margin-top: 15px;
|
| 108 |
+
}
|
| 109 |
+
|
| 110 |
+
button {
|
| 111 |
+
width: 100%;
|
| 112 |
+
padding: 16px;
|
| 113 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 114 |
+
color: white;
|
| 115 |
+
border: none;
|
| 116 |
+
border-radius: 10px;
|
| 117 |
+
font-size: 18px;
|
| 118 |
+
font-weight: 600;
|
| 119 |
+
cursor: pointer;
|
| 120 |
+
transition: transform 0.2s, box-shadow 0.2s;
|
| 121 |
+
margin-top: 20px;
|
| 122 |
+
}
|
| 123 |
+
|
| 124 |
+
button:hover:not(:disabled) {
|
| 125 |
+
transform: translateY(-2px);
|
| 126 |
+
box-shadow: 0 5px 20px rgba(102, 126, 234, 0.4);
|
| 127 |
+
}
|
| 128 |
+
|
| 129 |
+
button:disabled {
|
| 130 |
+
opacity: 0.6;
|
| 131 |
+
cursor: not-allowed;
|
| 132 |
+
}
|
| 133 |
+
|
| 134 |
+
.loader {
|
| 135 |
+
border: 3px solid #f3f3f3;
|
| 136 |
+
border-top: 3px solid #667eea;
|
| 137 |
+
border-radius: 50%;
|
| 138 |
+
width: 40px;
|
| 139 |
+
height: 40px;
|
| 140 |
+
animation: spin 1s linear infinite;
|
| 141 |
+
margin: 20px auto;
|
| 142 |
+
display: none;
|
| 143 |
+
}
|
| 144 |
+
|
| 145 |
+
@keyframes spin {
|
| 146 |
+
0% { transform: rotate(0deg); }
|
| 147 |
+
100% { transform: rotate(360deg); }
|
| 148 |
+
}
|
| 149 |
+
|
| 150 |
+
.status {
|
| 151 |
+
padding: 12px;
|
| 152 |
+
border-radius: 8px;
|
| 153 |
+
text-align: center;
|
| 154 |
+
font-size: 14px;
|
| 155 |
+
display: none;
|
| 156 |
+
margin-top: 15px;
|
| 157 |
+
}
|
| 158 |
+
|
| 159 |
+
.status.info {
|
| 160 |
+
background: #e3f2fd;
|
| 161 |
+
color: #1976d2;
|
| 162 |
+
display: block;
|
| 163 |
+
}
|
| 164 |
+
|
| 165 |
+
.status.success {
|
| 166 |
+
background: #e8f5e9;
|
| 167 |
+
color: #388e3c;
|
| 168 |
+
display: block;
|
| 169 |
+
}
|
| 170 |
+
|
| 171 |
+
.status.error {
|
| 172 |
+
background: #ffebee;
|
| 173 |
+
color: #d32f2f;
|
| 174 |
+
display: block;
|
| 175 |
+
}
|
| 176 |
+
|
| 177 |
+
.video-container {
|
| 178 |
+
background: #000;
|
| 179 |
+
border-radius: 12px;
|
| 180 |
+
overflow: hidden;
|
| 181 |
+
display: none;
|
| 182 |
+
margin-top: 15px;
|
| 183 |
+
}
|
| 184 |
+
|
| 185 |
+
video {
|
| 186 |
+
width: 100%;
|
| 187 |
+
display: block;
|
| 188 |
+
}
|
| 189 |
+
|
| 190 |
+
.video-info {
|
| 191 |
+
background: #f8f9fa;
|
| 192 |
+
padding: 15px;
|
| 193 |
+
margin-top: 10px;
|
| 194 |
+
border-radius: 8px;
|
| 195 |
+
font-size: 13px;
|
| 196 |
+
color: #666;
|
| 197 |
+
}
|
| 198 |
+
|
| 199 |
+
.example-prompts {
|
| 200 |
+
margin-top: 15px;
|
| 201 |
+
}
|
| 202 |
+
|
| 203 |
+
.example-prompt {
|
| 204 |
+
display: inline-block;
|
| 205 |
+
padding: 8px 16px;
|
| 206 |
+
margin: 4px;
|
| 207 |
+
background: white;
|
| 208 |
+
border: 2px solid #667eea;
|
| 209 |
+
border-radius: 20px;
|
| 210 |
+
font-size: 13px;
|
| 211 |
+
cursor: pointer;
|
| 212 |
+
transition: all 0.2s;
|
| 213 |
+
color: #667eea;
|
| 214 |
+
font-weight: 500;
|
| 215 |
+
}
|
| 216 |
+
|
| 217 |
+
.example-prompt:hover {
|
| 218 |
+
background: #667eea;
|
| 219 |
+
color: white;
|
| 220 |
+
}
|
| 221 |
+
|
| 222 |
+
.features-list {
|
| 223 |
+
list-style: none;
|
| 224 |
+
margin-top: 15px;
|
| 225 |
+
}
|
| 226 |
+
|
| 227 |
+
.features-list li {
|
| 228 |
+
padding: 8px 0;
|
| 229 |
+
color: #555;
|
| 230 |
+
}
|
| 231 |
+
|
| 232 |
+
.features-list li:before {
|
| 233 |
+
content: "β ";
|
| 234 |
+
color: #667eea;
|
| 235 |
+
font-weight: bold;
|
| 236 |
+
margin-right: 8px;
|
| 237 |
+
}
|
| 238 |
+
|
| 239 |
+
@media (max-width: 768px) {
|
| 240 |
+
.main-content {
|
| 241 |
+
grid-template-columns: 1fr;
|
| 242 |
+
}
|
| 243 |
+
.options-grid {
|
| 244 |
+
grid-template-columns: 1fr;
|
| 245 |
+
}
|
| 246 |
+
}
|
| 247 |
+
</style>
|
| 248 |
+
</head>
|
| 249 |
+
<body>
|
| 250 |
+
<div class="container">
|
| 251 |
+
<div class="header">
|
| 252 |
+
<h1>π¬ AI Video Generator Pro</h1>
|
| 253 |
+
<p>Hailuo-Inspired Features with Multiple AI Models</p>
|
| 254 |
+
</div>
|
| 255 |
+
|
| 256 |
+
<div class="demo-notice">
|
| 257 |
+
β οΈ DEMO MODE: This demo uses a sample video to showcase the UI. Real video generation requires Hugging Face Space connections.
|
| 258 |
+
</div>
|
| 259 |
+
|
| 260 |
+
<div class="main-content">
|
| 261 |
+
<!-- Left Panel -->
|
| 262 |
+
<div>
|
| 263 |
+
<div class="section">
|
| 264 |
+
<h3>π Enter Your Prompt</h3>
|
| 265 |
+
<label for="prompt">Describe the video you want to create:</label>
|
| 266 |
+
<textarea
|
| 267 |
+
id="prompt"
|
| 268 |
+
rows="4"
|
| 269 |
+
placeholder="e.g., A golden retriever running through a field of sunflowers at sunset"
|
| 270 |
+
>A golden retriever running through a field of flowers</textarea>
|
| 271 |
+
</div>
|
| 272 |
+
|
| 273 |
+
<div class="section">
|
| 274 |
+
<h3>π₯ Advanced Options (Hailuo-Inspired)</h3>
|
| 275 |
+
<div class="options-grid">
|
| 276 |
+
<div>
|
| 277 |
+
<label for="camera">Camera Movement:</label>
|
| 278 |
+
<select id="camera">
|
| 279 |
+
<option value="">Static</option>
|
| 280 |
+
<option value="[Zoom in]">Zoom In</option>
|
| 281 |
+
<option value="[Zoom out]">Zoom Out</option>
|
| 282 |
+
<option value="[Pan left]">Pan Left</option>
|
| 283 |
+
<option value="[Pan right]">Pan Right</option>
|
| 284 |
+
<option value="[Tracking shot]" selected>Tracking Shot</option>
|
| 285 |
+
<option value="[Dolly in]">Dolly In</option>
|
| 286 |
+
</select>
|
| 287 |
+
</div>
|
| 288 |
+
<div>
|
| 289 |
+
<label for="effect">Visual Effect:</label>
|
| 290 |
+
<select id="effect">
|
| 291 |
+
<option value="">None</option>
|
| 292 |
+
<option value="cinematic lighting, film grain" selected>Cinematic</option>
|
| 293 |
+
<option value="dramatic lighting, high contrast">Dramatic</option>
|
| 294 |
+
<option value="golden hour, warm sunset lighting">Golden Hour</option>
|
| 295 |
+
<option value="fog, misty atmosphere">Foggy</option>
|
| 296 |
+
</select>
|
| 297 |
+
</div>
|
| 298 |
+
</div>
|
| 299 |
+
<div style="margin-top: 15px;">
|
| 300 |
+
<label for="style">Video Style:</label>
|
| 301 |
+
<select id="style">
|
| 302 |
+
<option value="">Default</option>
|
| 303 |
+
<option value="photorealistic, 4k, high detail" selected>Realistic</option>
|
| 304 |
+
<option value="anime style, animated">Anime</option>
|
| 305 |
+
<option value="3D render, CGI, Pixar style">3D Render</option>
|
| 306 |
+
<option value="cinematic, movie scene">Cinematic</option>
|
| 307 |
+
</select>
|
| 308 |
+
</div>
|
| 309 |
+
</div>
|
| 310 |
+
|
| 311 |
+
<button onclick="generateVideo()">π¬ Generate Video (Demo)</button>
|
| 312 |
+
</div>
|
| 313 |
+
|
| 314 |
+
<!-- Right Panel -->
|
| 315 |
+
<div>
|
| 316 |
+
<div class="section">
|
| 317 |
+
<h3>ποΈ Generated Video</h3>
|
| 318 |
+
<div class="loader" id="loader"></div>
|
| 319 |
+
<div class="status" id="status"></div>
|
| 320 |
+
|
| 321 |
+
<div class="video-container" id="video-container">
|
| 322 |
+
<video id="video-output" controls></video>
|
| 323 |
+
</div>
|
| 324 |
+
|
| 325 |
+
<div id="video-info" class="video-info" style="display: none;"></div>
|
| 326 |
+
</div>
|
| 327 |
+
|
| 328 |
+
<div class="section">
|
| 329 |
+
<h3>π‘ Example Prompts</h3>
|
| 330 |
+
<div class="example-prompts">
|
| 331 |
+
<span class="example-prompt" onclick="setPrompt('A golden retriever running through a field of flowers')">π Dog in field</span>
|
| 332 |
+
<span class="example-prompt" onclick="setPrompt('Ocean waves crashing on a beach at sunset')">π Ocean sunset</span>
|
| 333 |
+
<span class="example-prompt" onclick="setPrompt('A sports car drifting around a corner')">ποΈ Racing car</span>
|
| 334 |
+
<span class="example-prompt" onclick="setPrompt('A dragon flying over a medieval castle')">π Fantasy dragon</span>
|
| 335 |
+
</div>
|
| 336 |
+
</div>
|
| 337 |
+
|
| 338 |
+
<div class="section">
|
| 339 |
+
<h3>β¨ Features</h3>
|
| 340 |
+
<ul class="features-list">
|
| 341 |
+
<li>5 AI Models (CogVideoX, LTX, SVD, AnimateDiff, Zeroscope)</li>
|
| 342 |
+
<li>Text-to-Video & Image-to-Video</li>
|
| 343 |
+
<li>12 Camera Movements (Hailuo-style)</li>
|
| 344 |
+
<li>8 Visual Effects</li>
|
| 345 |
+
<li>8 Video Styles</li>
|
| 346 |
+
<li>Enhanced Prompt Building</li>
|
| 347 |
+
<li>Professional UI Design</li>
|
| 348 |
+
</ul>
|
| 349 |
+
</div>
|
| 350 |
+
</div>
|
| 351 |
+
</div>
|
| 352 |
+
</div>
|
| 353 |
+
|
| 354 |
+
<script>
|
| 355 |
+
let currentVideoUrl = null;
|
| 356 |
+
|
| 357 |
+
function setPrompt(text) {
|
| 358 |
+
document.getElementById('prompt').value = text;
|
| 359 |
+
}
|
| 360 |
+
|
| 361 |
+
function showStatus(message, type) {
|
| 362 |
+
const status = document.getElementById('status');
|
| 363 |
+
status.textContent = message;
|
| 364 |
+
status.className = 'status ' + type;
|
| 365 |
+
}
|
| 366 |
+
|
| 367 |
+
async function generateVideo() {
|
| 368 |
+
const prompt = document.getElementById('prompt').value.trim();
|
| 369 |
+
const camera = document.getElementById('camera').value;
|
| 370 |
+
const effect = document.getElementById('effect').value;
|
| 371 |
+
const style = document.getElementById('style').value;
|
| 372 |
+
|
| 373 |
+
if (!prompt) {
|
| 374 |
+
showStatus('Please enter a prompt', 'error');
|
| 375 |
+
return;
|
| 376 |
+
}
|
| 377 |
+
|
| 378 |
+
// Build enhanced prompt
|
| 379 |
+
let enhancedPrompt = prompt;
|
| 380 |
+
if (style) enhancedPrompt = style + ', ' + enhancedPrompt;
|
| 381 |
+
if (camera) enhancedPrompt += ' ' + camera;
|
| 382 |
+
if (effect) enhancedPrompt += ', ' + effect;
|
| 383 |
+
|
| 384 |
+
// UI updates
|
| 385 |
+
const btn = document.querySelector('button');
|
| 386 |
+
btn.disabled = true;
|
| 387 |
+
btn.textContent = 'π¬ Generating...';
|
| 388 |
+
document.getElementById('loader').style.display = 'block';
|
| 389 |
+
document.getElementById('video-container').style.display = 'none';
|
| 390 |
+
document.getElementById('video-info').style.display = 'none';
|
| 391 |
+
showStatus('π¨ Generating your video... (Demo mode - instant)', 'info');
|
| 392 |
+
|
| 393 |
+
try {
|
| 394 |
+
// Simulate API delay
|
| 395 |
+
await new Promise(resolve => setTimeout(resolve, 2000));
|
| 396 |
+
|
| 397 |
+
const response = await fetch('http://localhost:5000/test-video', {
|
| 398 |
+
method: 'POST',
|
| 399 |
+
headers: { 'Content-Type': 'application/json' },
|
| 400 |
+
body: JSON.stringify({ prompt: enhancedPrompt })
|
| 401 |
+
});
|
| 402 |
+
|
| 403 |
+
const data = await response.json();
|
| 404 |
+
|
| 405 |
+
if (!response.ok || data.error) {
|
| 406 |
+
throw new Error(data.error || 'Failed to generate video');
|
| 407 |
+
}
|
| 408 |
+
|
| 409 |
+
// Success
|
| 410 |
+
currentVideoUrl = data.video_url;
|
| 411 |
+
const videoOutput = document.getElementById('video-output');
|
| 412 |
+
videoOutput.src = currentVideoUrl;
|
| 413 |
+
document.getElementById('video-container').style.display = 'block';
|
| 414 |
+
|
| 415 |
+
// Show video info
|
| 416 |
+
const videoInfo = document.getElementById('video-info');
|
| 417 |
+
videoInfo.innerHTML = `
|
| 418 |
+
<strong>Mode:</strong> ${data.model_name}<br>
|
| 419 |
+
<strong>Your Prompt:</strong> ${prompt}<br>
|
| 420 |
+
<strong>Enhanced Prompt:</strong> ${data.enhanced_prompt}<br>
|
| 421 |
+
<strong>Note:</strong> ${data.note}
|
| 422 |
+
`;
|
| 423 |
+
videoInfo.style.display = 'block';
|
| 424 |
+
|
| 425 |
+
showStatus('β
Demo video loaded! This showcases the UI. Connect to HF Spaces for real generation.', 'success');
|
| 426 |
+
videoOutput.play().catch(() => {});
|
| 427 |
+
|
| 428 |
+
} catch (error) {
|
| 429 |
+
console.error('Error:', error);
|
| 430 |
+
showStatus('β Error: ' + error.message, 'error');
|
| 431 |
+
} finally {
|
| 432 |
+
btn.disabled = false;
|
| 433 |
+
btn.textContent = 'π¬ Generate Video (Demo)';
|
| 434 |
+
document.getElementById('loader').style.display = 'none';
|
| 435 |
+
}
|
| 436 |
+
}
|
| 437 |
+
|
| 438 |
+
// Check server health on load
|
| 439 |
+
window.addEventListener('load', async () => {
|
| 440 |
+
try {
|
| 441 |
+
const response = await fetch('http://localhost:5000/health');
|
| 442 |
+
const data = await response.json();
|
| 443 |
+
if (data.status === 'healthy') {
|
| 444 |
+
console.log('β
Server is healthy');
|
| 445 |
+
showStatus('β
Server connected! Try the demo by clicking Generate Video', 'success');
|
| 446 |
+
}
|
| 447 |
+
} catch (error) {
|
| 448 |
+
showStatus('β οΈ Cannot connect to server. Make sure backend is running on port 5000', 'error');
|
| 449 |
+
}
|
| 450 |
+
});
|
| 451 |
+
</script>
|
| 452 |
+
</body>
|
| 453 |
+
</html>
|
index_enhanced.html
ADDED
|
@@ -0,0 +1,861 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
<!DOCTYPE html>
|
| 2 |
+
<html lang="en">
|
| 3 |
+
<head>
|
| 4 |
+
<meta charset="UTF-8">
|
| 5 |
+
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
| 6 |
+
<title>AI Video Generator Pro - Hailuo-Inspired</title>
|
| 7 |
+
<style>
|
| 8 |
+
* {
|
| 9 |
+
margin: 0;
|
| 10 |
+
padding: 0;
|
| 11 |
+
box-sizing: border-box;
|
| 12 |
+
}
|
| 13 |
+
|
| 14 |
+
body {
|
| 15 |
+
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
|
| 16 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 17 |
+
background-attachment: fixed;
|
| 18 |
+
min-height: 100vh;
|
| 19 |
+
padding: 20px;
|
| 20 |
+
animation: gradientShift 15s ease infinite;
|
| 21 |
+
}
|
| 22 |
+
|
| 23 |
+
@keyframes gradientShift {
|
| 24 |
+
0%, 100% { background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); }
|
| 25 |
+
50% { background: linear-gradient(135deg, #764ba2 0%, #667eea 100%); }
|
| 26 |
+
}
|
| 27 |
+
|
| 28 |
+
.container {
|
| 29 |
+
max-width: 1200px;
|
| 30 |
+
margin: 0 auto;
|
| 31 |
+
background: white;
|
| 32 |
+
border-radius: 20px;
|
| 33 |
+
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
|
| 34 |
+
overflow: hidden;
|
| 35 |
+
}
|
| 36 |
+
|
| 37 |
+
.header {
|
| 38 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 39 |
+
color: white;
|
| 40 |
+
padding: 40px 40px;
|
| 41 |
+
text-align: center;
|
| 42 |
+
position: relative;
|
| 43 |
+
overflow: hidden;
|
| 44 |
+
}
|
| 45 |
+
|
| 46 |
+
.header::before {
|
| 47 |
+
content: '';
|
| 48 |
+
position: absolute;
|
| 49 |
+
top: -50%;
|
| 50 |
+
left: -50%;
|
| 51 |
+
width: 200%;
|
| 52 |
+
height: 200%;
|
| 53 |
+
background: radial-gradient(circle, rgba(255,255,255,0.1) 0%, transparent 70%);
|
| 54 |
+
animation: headerPulse 10s ease-in-out infinite;
|
| 55 |
+
}
|
| 56 |
+
|
| 57 |
+
@keyframes headerPulse {
|
| 58 |
+
0%, 100% { transform: translate(0, 0); }
|
| 59 |
+
50% { transform: translate(10%, 10%); }
|
| 60 |
+
}
|
| 61 |
+
|
| 62 |
+
.header h1 {
|
| 63 |
+
font-size: 36px;
|
| 64 |
+
margin-bottom: 10px;
|
| 65 |
+
position: relative;
|
| 66 |
+
z-index: 1;
|
| 67 |
+
text-shadow: 2px 2px 4px rgba(0,0,0,0.2);
|
| 68 |
+
}
|
| 69 |
+
|
| 70 |
+
.header p {
|
| 71 |
+
font-size: 18px;
|
| 72 |
+
opacity: 0.95;
|
| 73 |
+
position: relative;
|
| 74 |
+
z-index: 1;
|
| 75 |
+
}
|
| 76 |
+
|
| 77 |
+
.main-content {
|
| 78 |
+
display: grid;
|
| 79 |
+
grid-template-columns: 1fr 1fr;
|
| 80 |
+
gap: 30px;
|
| 81 |
+
padding: 40px;
|
| 82 |
+
}
|
| 83 |
+
|
| 84 |
+
.left-panel, .right-panel {
|
| 85 |
+
display: flex;
|
| 86 |
+
flex-direction: column;
|
| 87 |
+
gap: 20px;
|
| 88 |
+
}
|
| 89 |
+
|
| 90 |
+
.section {
|
| 91 |
+
background: #f8f9fa;
|
| 92 |
+
padding: 20px;
|
| 93 |
+
border-radius: 12px;
|
| 94 |
+
}
|
| 95 |
+
|
| 96 |
+
.section h3 {
|
| 97 |
+
color: #333;
|
| 98 |
+
margin-bottom: 15px;
|
| 99 |
+
font-size: 18px;
|
| 100 |
+
display: flex;
|
| 101 |
+
align-items: center;
|
| 102 |
+
gap: 8px;
|
| 103 |
+
}
|
| 104 |
+
|
| 105 |
+
label {
|
| 106 |
+
display: block;
|
| 107 |
+
color: #555;
|
| 108 |
+
margin-bottom: 8px;
|
| 109 |
+
font-weight: 500;
|
| 110 |
+
font-size: 14px;
|
| 111 |
+
}
|
| 112 |
+
|
| 113 |
+
textarea, select, input[type="file"] {
|
| 114 |
+
width: 100%;
|
| 115 |
+
padding: 12px;
|
| 116 |
+
border: 2px solid #e0e0e0;
|
| 117 |
+
border-radius: 8px;
|
| 118 |
+
font-size: 14px;
|
| 119 |
+
font-family: inherit;
|
| 120 |
+
transition: border-color 0.3s;
|
| 121 |
+
}
|
| 122 |
+
|
| 123 |
+
textarea {
|
| 124 |
+
resize: vertical;
|
| 125 |
+
min-height: 100px;
|
| 126 |
+
}
|
| 127 |
+
|
| 128 |
+
textarea:focus, select:focus {
|
| 129 |
+
outline: none;
|
| 130 |
+
border-color: #667eea;
|
| 131 |
+
}
|
| 132 |
+
|
| 133 |
+
.char-counter {
|
| 134 |
+
text-align: right;
|
| 135 |
+
font-size: 12px;
|
| 136 |
+
color: #999;
|
| 137 |
+
margin-top: 5px;
|
| 138 |
+
}
|
| 139 |
+
|
| 140 |
+
.options-grid {
|
| 141 |
+
display: grid;
|
| 142 |
+
grid-template-columns: 1fr 1fr;
|
| 143 |
+
gap: 15px;
|
| 144 |
+
}
|
| 145 |
+
|
| 146 |
+
.option-group {
|
| 147 |
+
display: flex;
|
| 148 |
+
flex-direction: column;
|
| 149 |
+
gap: 8px;
|
| 150 |
+
}
|
| 151 |
+
|
| 152 |
+
button {
|
| 153 |
+
padding: 14px 24px;
|
| 154 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 155 |
+
color: white;
|
| 156 |
+
border: none;
|
| 157 |
+
border-radius: 10px;
|
| 158 |
+
font-size: 16px;
|
| 159 |
+
font-weight: 600;
|
| 160 |
+
cursor: pointer;
|
| 161 |
+
transition: all 0.3s ease;
|
| 162 |
+
position: relative;
|
| 163 |
+
overflow: hidden;
|
| 164 |
+
}
|
| 165 |
+
|
| 166 |
+
button::before {
|
| 167 |
+
content: '';
|
| 168 |
+
position: absolute;
|
| 169 |
+
top: 50%;
|
| 170 |
+
left: 50%;
|
| 171 |
+
width: 0;
|
| 172 |
+
height: 0;
|
| 173 |
+
border-radius: 50%;
|
| 174 |
+
background: rgba(255, 255, 255, 0.3);
|
| 175 |
+
transform: translate(-50%, -50%);
|
| 176 |
+
transition: width 0.6s, height 0.6s;
|
| 177 |
+
}
|
| 178 |
+
|
| 179 |
+
button:hover::before {
|
| 180 |
+
width: 300px;
|
| 181 |
+
height: 300px;
|
| 182 |
+
}
|
| 183 |
+
|
| 184 |
+
button:hover:not(:disabled) {
|
| 185 |
+
transform: translateY(-2px);
|
| 186 |
+
box-shadow: 0 8px 25px rgba(102, 126, 234, 0.5);
|
| 187 |
+
}
|
| 188 |
+
|
| 189 |
+
button:active:not(:disabled) {
|
| 190 |
+
transform: translateY(0);
|
| 191 |
+
}
|
| 192 |
+
|
| 193 |
+
button:disabled {
|
| 194 |
+
opacity: 0.6;
|
| 195 |
+
cursor: not-allowed;
|
| 196 |
+
}
|
| 197 |
+
|
| 198 |
+
.generate-btn {
|
| 199 |
+
width: 100%;
|
| 200 |
+
padding: 16px;
|
| 201 |
+
font-size: 18px;
|
| 202 |
+
}
|
| 203 |
+
|
| 204 |
+
.tabs {
|
| 205 |
+
display: flex;
|
| 206 |
+
gap: 10px;
|
| 207 |
+
margin-bottom: 15px;
|
| 208 |
+
}
|
| 209 |
+
|
| 210 |
+
.tab {
|
| 211 |
+
flex: 1;
|
| 212 |
+
padding: 10px;
|
| 213 |
+
background: white;
|
| 214 |
+
border: 2px solid #e0e0e0;
|
| 215 |
+
border-radius: 8px;
|
| 216 |
+
cursor: pointer;
|
| 217 |
+
text-align: center;
|
| 218 |
+
transition: all 0.3s;
|
| 219 |
+
font-weight: 500;
|
| 220 |
+
}
|
| 221 |
+
|
| 222 |
+
.tab.active {
|
| 223 |
+
background: #667eea;
|
| 224 |
+
color: white;
|
| 225 |
+
border-color: #667eea;
|
| 226 |
+
}
|
| 227 |
+
|
| 228 |
+
.tab-content {
|
| 229 |
+
display: none;
|
| 230 |
+
}
|
| 231 |
+
|
| 232 |
+
.tab-content.active {
|
| 233 |
+
display: block;
|
| 234 |
+
}
|
| 235 |
+
|
| 236 |
+
.loader {
|
| 237 |
+
border: 3px solid #f3f3f3;
|
| 238 |
+
border-top: 3px solid #667eea;
|
| 239 |
+
border-radius: 50%;
|
| 240 |
+
width: 40px;
|
| 241 |
+
height: 40px;
|
| 242 |
+
animation: spin 1s linear infinite;
|
| 243 |
+
margin: 20px auto;
|
| 244 |
+
display: none;
|
| 245 |
+
}
|
| 246 |
+
|
| 247 |
+
@keyframes spin {
|
| 248 |
+
0% { transform: rotate(0deg); }
|
| 249 |
+
100% { transform: rotate(360deg); }
|
| 250 |
+
}
|
| 251 |
+
|
| 252 |
+
.status {
|
| 253 |
+
padding: 12px;
|
| 254 |
+
border-radius: 8px;
|
| 255 |
+
text-align: center;
|
| 256 |
+
font-size: 14px;
|
| 257 |
+
display: none;
|
| 258 |
+
margin-top: 15px;
|
| 259 |
+
}
|
| 260 |
+
|
| 261 |
+
.status.info {
|
| 262 |
+
background: #e3f2fd;
|
| 263 |
+
color: #1976d2;
|
| 264 |
+
display: block;
|
| 265 |
+
}
|
| 266 |
+
|
| 267 |
+
.status.success {
|
| 268 |
+
background: #e8f5e9;
|
| 269 |
+
color: #388e3c;
|
| 270 |
+
display: block;
|
| 271 |
+
}
|
| 272 |
+
|
| 273 |
+
.status.error {
|
| 274 |
+
background: #ffebee;
|
| 275 |
+
color: #d32f2f;
|
| 276 |
+
display: block;
|
| 277 |
+
}
|
| 278 |
+
|
| 279 |
+
.video-container {
|
| 280 |
+
background: #000;
|
| 281 |
+
border-radius: 12px;
|
| 282 |
+
overflow: hidden;
|
| 283 |
+
display: none;
|
| 284 |
+
}
|
| 285 |
+
|
| 286 |
+
video {
|
| 287 |
+
width: 100%;
|
| 288 |
+
display: block;
|
| 289 |
+
}
|
| 290 |
+
|
| 291 |
+
.video-info {
|
| 292 |
+
background: #f8f9fa;
|
| 293 |
+
padding: 15px;
|
| 294 |
+
margin-top: 10px;
|
| 295 |
+
border-radius: 8px;
|
| 296 |
+
font-size: 13px;
|
| 297 |
+
color: #666;
|
| 298 |
+
}
|
| 299 |
+
|
| 300 |
+
.video-actions {
|
| 301 |
+
display: flex;
|
| 302 |
+
gap: 10px;
|
| 303 |
+
margin-top: 10px;
|
| 304 |
+
}
|
| 305 |
+
|
| 306 |
+
.video-actions button {
|
| 307 |
+
flex: 1;
|
| 308 |
+
background: linear-gradient(135deg, #11998e 0%, #38ef7d 100%);
|
| 309 |
+
}
|
| 310 |
+
|
| 311 |
+
.example-prompts {
|
| 312 |
+
display: flex;
|
| 313 |
+
flex-direction: column;
|
| 314 |
+
gap: 10px;
|
| 315 |
+
}
|
| 316 |
+
|
| 317 |
+
.prompt-category {
|
| 318 |
+
margin-bottom: 10px;
|
| 319 |
+
}
|
| 320 |
+
|
| 321 |
+
.prompt-category h4 {
|
| 322 |
+
font-size: 13px;
|
| 323 |
+
color: #666;
|
| 324 |
+
margin-bottom: 8px;
|
| 325 |
+
}
|
| 326 |
+
|
| 327 |
+
.example-prompt {
|
| 328 |
+
display: inline-block;
|
| 329 |
+
padding: 8px 16px;
|
| 330 |
+
margin: 6px 4px;
|
| 331 |
+
background: linear-gradient(135deg, #f5f7fa 0%, #c3cfe2 100%);
|
| 332 |
+
border: 2px solid transparent;
|
| 333 |
+
border-radius: 25px;
|
| 334 |
+
font-size: 13px;
|
| 335 |
+
cursor: pointer;
|
| 336 |
+
transition: all 0.3s ease;
|
| 337 |
+
box-shadow: 0 2px 5px rgba(0,0,0,0.1);
|
| 338 |
+
}
|
| 339 |
+
|
| 340 |
+
.example-prompt:hover {
|
| 341 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 342 |
+
color: white;
|
| 343 |
+
border-color: #667eea;
|
| 344 |
+
transform: translateY(-2px);
|
| 345 |
+
box-shadow: 0 4px 12px rgba(102, 126, 234, 0.4);
|
| 346 |
+
}
|
| 347 |
+
|
| 348 |
+
.example-prompt:active {
|
| 349 |
+
transform: translateY(0);
|
| 350 |
+
}
|
| 351 |
+
|
| 352 |
+
.image-upload-area {
|
| 353 |
+
border: 2px dashed #ddd;
|
| 354 |
+
border-radius: 8px;
|
| 355 |
+
padding: 20px;
|
| 356 |
+
text-align: center;
|
| 357 |
+
cursor: pointer;
|
| 358 |
+
transition: all 0.3s;
|
| 359 |
+
}
|
| 360 |
+
|
| 361 |
+
.image-upload-area:hover {
|
| 362 |
+
border-color: #667eea;
|
| 363 |
+
background: #f8f9fa;
|
| 364 |
+
}
|
| 365 |
+
|
| 366 |
+
.image-upload-area.has-image {
|
| 367 |
+
border-style: solid;
|
| 368 |
+
border-color: #667eea;
|
| 369 |
+
}
|
| 370 |
+
|
| 371 |
+
.preview-image {
|
| 372 |
+
max-width: 100%;
|
| 373 |
+
max-height: 200px;
|
| 374 |
+
border-radius: 8px;
|
| 375 |
+
margin-top: 10px;
|
| 376 |
+
}
|
| 377 |
+
|
| 378 |
+
@media (max-width: 768px) {
|
| 379 |
+
.main-content {
|
| 380 |
+
grid-template-columns: 1fr;
|
| 381 |
+
}
|
| 382 |
+
|
| 383 |
+
.options-grid {
|
| 384 |
+
grid-template-columns: 1fr;
|
| 385 |
+
}
|
| 386 |
+
}
|
| 387 |
+
</style>
|
| 388 |
+
</head>
|
| 389 |
+
<body>
|
| 390 |
+
<div class="container">
|
| 391 |
+
<div class="header">
|
| 392 |
+
<h1>π¬ AI Video Generator Pro</h1>
|
| 393 |
+
<p>Create stunning videos with multiple AI models and Hailuo-inspired features</p>
|
| 394 |
+
</div>
|
| 395 |
+
|
| 396 |
+
<div class="main-content">
|
| 397 |
+
<!-- Left Panel: Input & Options -->
|
| 398 |
+
<div class="left-panel">
|
| 399 |
+
<!-- Generation Mode -->
|
| 400 |
+
<div class="section">
|
| 401 |
+
<h3>π― Generation Mode</h3>
|
| 402 |
+
<div class="tabs">
|
| 403 |
+
<div class="tab active" onclick="switchMode('text')">π Text to Video</div>
|
| 404 |
+
<div class="tab" onclick="switchMode('image')">πΌοΈ Image to Video</div>
|
| 405 |
+
</div>
|
| 406 |
+
|
| 407 |
+
<!-- Text to Video Tab -->
|
| 408 |
+
<div id="text-mode" class="tab-content active">
|
| 409 |
+
<label for="prompt">Enter your prompt:</label>
|
| 410 |
+
<textarea
|
| 411 |
+
id="prompt"
|
| 412 |
+
rows="4"
|
| 413 |
+
placeholder="Describe the video you want to create..."
|
| 414 |
+
maxlength="1000"
|
| 415 |
+
></textarea>
|
| 416 |
+
<div class="char-counter">
|
| 417 |
+
<span id="char-count">0</span>/1000 characters
|
| 418 |
+
</div>
|
| 419 |
+
</div>
|
| 420 |
+
|
| 421 |
+
<!-- Image to Video Tab -->
|
| 422 |
+
<div id="image-mode" class="tab-content">
|
| 423 |
+
<label>Upload Image:</label>
|
| 424 |
+
<div class="image-upload-area" id="upload-area" onclick="document.getElementById('image-input').click()">
|
| 425 |
+
<p>π€ Click to upload or drag & drop</p>
|
| 426 |
+
<p style="font-size: 12px; color: #999; margin-top: 5px;">Supports JPG, PNG</p>
|
| 427 |
+
<img id="preview-image" class="preview-image" style="display: none;">
|
| 428 |
+
</div>
|
| 429 |
+
<input type="file" id="image-input" accept="image/*" style="display: none;" onchange="handleImageUpload(event)">
|
| 430 |
+
|
| 431 |
+
<label for="image-prompt" style="margin-top: 15px;">Animation prompt (optional):</label>
|
| 432 |
+
<textarea
|
| 433 |
+
id="image-prompt"
|
| 434 |
+
rows="2"
|
| 435 |
+
placeholder="Describe how you want the image to animate..."
|
| 436 |
+
></textarea>
|
| 437 |
+
</div>
|
| 438 |
+
</div>
|
| 439 |
+
|
| 440 |
+
<!-- Model Selection -->
|
| 441 |
+
<div class="section">
|
| 442 |
+
<h3>π€ AI Model</h3>
|
| 443 |
+
<select id="model-select" onchange="updateModelInfo()">
|
| 444 |
+
<option value="cogvideox-5b">π₯ CogVideoX-5B - Best Quality (6s, 720p)</option>
|
| 445 |
+
<option value="cogvideox-2b">β‘ CogVideoX-2B - Faster Version</option>
|
| 446 |
+
<option value="hunyuan-video">π HunyuanVideo - SOTA by Tencent</option>
|
| 447 |
+
<option value="stable-video-diffusion">πΌοΈ Stable Video Diffusion - Image Animation</option>
|
| 448 |
+
<option value="demo">π‘ Demo Mode - Test UI (Instant)</option>
|
| 449 |
+
</select>
|
| 450 |
+
<div id="model-info" style="margin-top: 10px; font-size: 13px; color: #666;"></div>
|
| 451 |
+
</div>
|
| 452 |
+
|
| 453 |
+
<!-- Advanced Options (Hailuo-inspired) -->
|
| 454 |
+
<div class="section">
|
| 455 |
+
<h3>π₯ Advanced Options</h3>
|
| 456 |
+
<div class="options-grid">
|
| 457 |
+
<div class="option-group">
|
| 458 |
+
<label for="camera-select">Camera Movement:</label>
|
| 459 |
+
<select id="camera-select">
|
| 460 |
+
<option value="">Static (No movement)</option>
|
| 461 |
+
</select>
|
| 462 |
+
</div>
|
| 463 |
+
|
| 464 |
+
<div class="option-group">
|
| 465 |
+
<label for="effect-select">Visual Effects:</label>
|
| 466 |
+
<select id="effect-select">
|
| 467 |
+
<option value="">None</option>
|
| 468 |
+
</select>
|
| 469 |
+
</div>
|
| 470 |
+
|
| 471 |
+
<div class="option-group">
|
| 472 |
+
<label for="style-select">Video Style:</label>
|
| 473 |
+
<select id="style-select">
|
| 474 |
+
<option value="">Default</option>
|
| 475 |
+
</select>
|
| 476 |
+
</div>
|
| 477 |
+
</div>
|
| 478 |
+
</div>
|
| 479 |
+
|
| 480 |
+
<!-- Generate Button -->
|
| 481 |
+
<button class="generate-btn" onclick="generateVideo()">
|
| 482 |
+
π¬ Generate Video
|
| 483 |
+
</button>
|
| 484 |
+
</div>
|
| 485 |
+
|
| 486 |
+
<!-- Right Panel: Output & Examples -->
|
| 487 |
+
<div class="right-panel">
|
| 488 |
+
<!-- Video Output -->
|
| 489 |
+
<div class="section">
|
| 490 |
+
<h3>ποΈ Generated Video</h3>
|
| 491 |
+
<div class="loader" id="loader"></div>
|
| 492 |
+
<div class="status" id="status"></div>
|
| 493 |
+
|
| 494 |
+
<div class="video-container" id="video-container">
|
| 495 |
+
<video id="video-output" controls></video>
|
| 496 |
+
</div>
|
| 497 |
+
|
| 498 |
+
<div id="video-info" class="video-info" style="display: none;"></div>
|
| 499 |
+
|
| 500 |
+
<div class="video-actions" id="video-actions" style="display: none;">
|
| 501 |
+
<button onclick="downloadVideo()">π₯ Download</button>
|
| 502 |
+
<button onclick="shareVideo()">π Share</button>
|
| 503 |
+
</div>
|
| 504 |
+
</div>
|
| 505 |
+
|
| 506 |
+
<!-- Example Prompts -->
|
| 507 |
+
<div class="section">
|
| 508 |
+
<h3>π‘ Example Prompts</h3>
|
| 509 |
+
<div class="example-prompts" id="example-prompts">
|
| 510 |
+
<!-- Will be populated by JavaScript -->
|
| 511 |
+
</div>
|
| 512 |
+
</div>
|
| 513 |
+
</div>
|
| 514 |
+
</div>
|
| 515 |
+
</div>
|
| 516 |
+
|
| 517 |
+
<script>
|
| 518 |
+
// Global state
|
| 519 |
+
let currentVideoUrl = null;
|
| 520 |
+
let currentMode = 'text';
|
| 521 |
+
let uploadedImage = null;
|
| 522 |
+
let modelsData = null;
|
| 523 |
+
|
| 524 |
+
// Initialize app
|
| 525 |
+
window.addEventListener('load', async () => {
|
| 526 |
+
await loadModelsData();
|
| 527 |
+
checkServerHealth();
|
| 528 |
+
setupEventListeners();
|
| 529 |
+
});
|
| 530 |
+
|
| 531 |
+
function setupEventListeners() {
|
| 532 |
+
const promptInput = document.getElementById('prompt');
|
| 533 |
+
const charCount = document.getElementById('char-count');
|
| 534 |
+
|
| 535 |
+
promptInput.addEventListener('input', () => {
|
| 536 |
+
const length = promptInput.value.length;
|
| 537 |
+
charCount.textContent = length;
|
| 538 |
+
charCount.style.color = length > 900 ? '#d32f2f' : '#999';
|
| 539 |
+
});
|
| 540 |
+
|
| 541 |
+
promptInput.addEventListener('keydown', (e) => {
|
| 542 |
+
if (e.ctrlKey && e.key === 'Enter') {
|
| 543 |
+
generateVideo();
|
| 544 |
+
}
|
| 545 |
+
});
|
| 546 |
+
}
|
| 547 |
+
|
| 548 |
+
async function loadModelsData() {
|
| 549 |
+
try {
|
| 550 |
+
const response = await fetch('/api/models');
|
| 551 |
+
modelsData = await response.json();
|
| 552 |
+
|
| 553 |
+
populateCameraMovements();
|
| 554 |
+
populateVisualEffects();
|
| 555 |
+
populateVideoStyles();
|
| 556 |
+
populateExamplePrompts();
|
| 557 |
+
updateModelInfo();
|
| 558 |
+
} catch (error) {
|
| 559 |
+
console.error('Failed to load models data:', error);
|
| 560 |
+
// Use fallback data if backend is not available
|
| 561 |
+
useFallbackData();
|
| 562 |
+
}
|
| 563 |
+
}
|
| 564 |
+
|
| 565 |
+
function useFallbackData() {
|
| 566 |
+
// Fallback example prompts
|
| 567 |
+
const fallbackPrompts = {
|
| 568 |
+
"Nature": [
|
| 569 |
+
"A majestic waterfall cascading down mossy rocks in a lush rainforest",
|
| 570 |
+
"Ocean waves crashing on a rocky shore at sunset with seagulls flying",
|
| 571 |
+
"A field of sunflowers swaying in the breeze under a blue sky"
|
| 572 |
+
],
|
| 573 |
+
"Animals": [
|
| 574 |
+
"A golden retriever running through a field of flowers at sunset",
|
| 575 |
+
"A majestic eagle soaring through clouds above mountain peaks",
|
| 576 |
+
"A playful dolphin jumping out of crystal clear ocean water"
|
| 577 |
+
],
|
| 578 |
+
"Urban": [
|
| 579 |
+
"City street with cars and pedestrians at night, neon lights reflecting on wet pavement",
|
| 580 |
+
"Time-lapse of clouds moving over modern skyscrapers in downtown",
|
| 581 |
+
"A busy coffee shop with people working on laptops, warm lighting"
|
| 582 |
+
],
|
| 583 |
+
"Fantasy": [
|
| 584 |
+
"A magical portal opening in an ancient forest with glowing particles",
|
| 585 |
+
"A dragon flying over a medieval castle at dawn",
|
| 586 |
+
"Floating islands in the sky connected by glowing bridges"
|
| 587 |
+
]
|
| 588 |
+
};
|
| 589 |
+
|
| 590 |
+
const container = document.getElementById('example-prompts');
|
| 591 |
+
Object.entries(fallbackPrompts).forEach(([category, prompts]) => {
|
| 592 |
+
const categoryDiv = document.createElement('div');
|
| 593 |
+
categoryDiv.className = 'prompt-category';
|
| 594 |
+
|
| 595 |
+
const title = document.createElement('h4');
|
| 596 |
+
title.textContent = category;
|
| 597 |
+
categoryDiv.appendChild(title);
|
| 598 |
+
|
| 599 |
+
prompts.forEach(prompt => {
|
| 600 |
+
const span = document.createElement('span');
|
| 601 |
+
span.className = 'example-prompt';
|
| 602 |
+
span.textContent = prompt.substring(0, 50) + (prompt.length > 50 ? '...' : '');
|
| 603 |
+
span.title = prompt;
|
| 604 |
+
span.onclick = () => setPrompt(prompt);
|
| 605 |
+
categoryDiv.appendChild(span);
|
| 606 |
+
});
|
| 607 |
+
|
| 608 |
+
container.appendChild(categoryDiv);
|
| 609 |
+
});
|
| 610 |
+
}
|
| 611 |
+
|
| 612 |
+
function populateCameraMovements() {
|
| 613 |
+
const select = document.getElementById('camera-select');
|
| 614 |
+
modelsData.camera_movements.forEach(movement => {
|
| 615 |
+
const option = document.createElement('option');
|
| 616 |
+
option.value = movement.tag;
|
| 617 |
+
option.textContent = `${movement.name} - ${movement.description}`;
|
| 618 |
+
select.appendChild(option);
|
| 619 |
+
});
|
| 620 |
+
}
|
| 621 |
+
|
| 622 |
+
function populateVisualEffects() {
|
| 623 |
+
const select = document.getElementById('effect-select');
|
| 624 |
+
modelsData.visual_effects.forEach(effect => {
|
| 625 |
+
const option = document.createElement('option');
|
| 626 |
+
option.value = effect.tag;
|
| 627 |
+
option.textContent = `${effect.name} - ${effect.description}`;
|
| 628 |
+
select.appendChild(option);
|
| 629 |
+
});
|
| 630 |
+
}
|
| 631 |
+
|
| 632 |
+
function populateVideoStyles() {
|
| 633 |
+
const select = document.getElementById('style-select');
|
| 634 |
+
modelsData.video_styles.forEach(style => {
|
| 635 |
+
const option = document.createElement('option');
|
| 636 |
+
option.value = style.tag;
|
| 637 |
+
option.textContent = `${style.name} - ${style.description}`;
|
| 638 |
+
select.appendChild(option);
|
| 639 |
+
});
|
| 640 |
+
}
|
| 641 |
+
|
| 642 |
+
function populateExamplePrompts() {
|
| 643 |
+
const container = document.getElementById('example-prompts');
|
| 644 |
+
Object.entries(modelsData.example_prompts).forEach(([category, prompts]) => {
|
| 645 |
+
const categoryDiv = document.createElement('div');
|
| 646 |
+
categoryDiv.className = 'prompt-category';
|
| 647 |
+
|
| 648 |
+
const title = document.createElement('h4');
|
| 649 |
+
title.textContent = category;
|
| 650 |
+
categoryDiv.appendChild(title);
|
| 651 |
+
|
| 652 |
+
prompts.forEach(prompt => {
|
| 653 |
+
const span = document.createElement('span');
|
| 654 |
+
span.className = 'example-prompt';
|
| 655 |
+
span.textContent = prompt.substring(0, 50) + (prompt.length > 50 ? '...' : '');
|
| 656 |
+
span.title = prompt;
|
| 657 |
+
span.onclick = () => setPrompt(prompt);
|
| 658 |
+
categoryDiv.appendChild(span);
|
| 659 |
+
});
|
| 660 |
+
|
| 661 |
+
container.appendChild(categoryDiv);
|
| 662 |
+
});
|
| 663 |
+
}
|
| 664 |
+
|
| 665 |
+
function updateModelInfo() {
|
| 666 |
+
const modelId = document.getElementById('model-select').value;
|
| 667 |
+
const model = modelsData.models[modelId];
|
| 668 |
+
const infoDiv = document.getElementById('model-info');
|
| 669 |
+
infoDiv.textContent = `${model.description} | Type: ${model.type}`;
|
| 670 |
+
}
|
| 671 |
+
|
| 672 |
+
function switchMode(mode) {
|
| 673 |
+
currentMode = mode;
|
| 674 |
+
|
| 675 |
+
// Update tabs
|
| 676 |
+
document.querySelectorAll('.tab').forEach(tab => tab.classList.remove('active'));
|
| 677 |
+
event.target.classList.add('active');
|
| 678 |
+
|
| 679 |
+
// Update content
|
| 680 |
+
document.querySelectorAll('.tab-content').forEach(content => content.classList.remove('active'));
|
| 681 |
+
document.getElementById(`${mode}-mode`).classList.add('active');
|
| 682 |
+
|
| 683 |
+
// Update model selection for image mode
|
| 684 |
+
if (mode === 'image') {
|
| 685 |
+
const modelSelect = document.getElementById('model-select');
|
| 686 |
+
modelSelect.value = 'stable-video-diffusion';
|
| 687 |
+
updateModelInfo();
|
| 688 |
+
}
|
| 689 |
+
}
|
| 690 |
+
|
| 691 |
+
function handleImageUpload(event) {
|
| 692 |
+
const file = event.target.files[0];
|
| 693 |
+
if (!file) return;
|
| 694 |
+
|
| 695 |
+
const reader = new FileReader();
|
| 696 |
+
reader.onload = (e) => {
|
| 697 |
+
uploadedImage = e.target.result;
|
| 698 |
+
const preview = document.getElementById('preview-image');
|
| 699 |
+
preview.src = uploadedImage;
|
| 700 |
+
preview.style.display = 'block';
|
| 701 |
+
document.getElementById('upload-area').classList.add('has-image');
|
| 702 |
+
};
|
| 703 |
+
reader.readAsDataURL(file);
|
| 704 |
+
}
|
| 705 |
+
|
| 706 |
+
function setPrompt(text) {
|
| 707 |
+
document.getElementById('prompt').value = text;
|
| 708 |
+
document.getElementById('prompt').dispatchEvent(new Event('input'));
|
| 709 |
+
}
|
| 710 |
+
|
| 711 |
+
function showStatus(message, type) {
|
| 712 |
+
const status = document.getElementById('status');
|
| 713 |
+
status.textContent = message;
|
| 714 |
+
status.className = 'status ' + type;
|
| 715 |
+
}
|
| 716 |
+
|
| 717 |
+
async function generateVideo() {
|
| 718 |
+
if (currentMode === 'text') {
|
| 719 |
+
await generateTextToVideo();
|
| 720 |
+
} else {
|
| 721 |
+
await generateImageToVideo();
|
| 722 |
+
}
|
| 723 |
+
}
|
| 724 |
+
|
| 725 |
+
async function generateTextToVideo() {
|
| 726 |
+
const prompt = document.getElementById('prompt').value.trim();
|
| 727 |
+
const model = document.getElementById('model-select').value;
|
| 728 |
+
const cameraMovement = document.getElementById('camera-select').value;
|
| 729 |
+
const visualEffect = document.getElementById('effect-select').value;
|
| 730 |
+
const style = document.getElementById('style-select').value;
|
| 731 |
+
|
| 732 |
+
if (!prompt) {
|
| 733 |
+
showStatus('Please enter a prompt', 'error');
|
| 734 |
+
return;
|
| 735 |
+
}
|
| 736 |
+
|
| 737 |
+
if (prompt.length < 3) {
|
| 738 |
+
showStatus('Prompt must be at least 3 characters long', 'error');
|
| 739 |
+
return;
|
| 740 |
+
}
|
| 741 |
+
|
| 742 |
+
// UI updates
|
| 743 |
+
const generateBtn = document.querySelector('.generate-btn');
|
| 744 |
+
generateBtn.disabled = true;
|
| 745 |
+
generateBtn.textContent = 'π¬ Generating...';
|
| 746 |
+
document.getElementById('loader').style.display = 'block';
|
| 747 |
+
document.getElementById('video-container').style.display = 'none';
|
| 748 |
+
document.getElementById('video-actions').style.display = 'none';
|
| 749 |
+
document.getElementById('video-info').style.display = 'none';
|
| 750 |
+
showStatus('π¨ Creating your video... This may take 30-120 seconds depending on the model', 'info');
|
| 751 |
+
|
| 752 |
+
try {
|
| 753 |
+
const response = await fetch('/api/generate-video', {
|
| 754 |
+
method: 'POST',
|
| 755 |
+
headers: { 'Content-Type': 'application/json' },
|
| 756 |
+
body: JSON.stringify({
|
| 757 |
+
prompt,
|
| 758 |
+
// Map enhanced UI model ids to Replicate API models
|
| 759 |
+
model: model === 'cogvideox-5b' || model === 'cogvideox-2b' ? 'cogvideox' : (model === 'demo' ? 'demo' : 'hailuo'),
|
| 760 |
+
camera_movement: cameraMovement,
|
| 761 |
+
visual_effect: visualEffect,
|
| 762 |
+
style: style
|
| 763 |
+
})
|
| 764 |
+
});
|
| 765 |
+
|
| 766 |
+
const data = await response.json();
|
| 767 |
+
|
| 768 |
+
if (!response.ok || data.error) {
|
| 769 |
+
throw new Error(data.error || 'Failed to generate video');
|
| 770 |
+
}
|
| 771 |
+
|
| 772 |
+
// Success
|
| 773 |
+
currentVideoUrl = data.video_url;
|
| 774 |
+
const videoOutput = document.getElementById('video-output');
|
| 775 |
+
videoOutput.src = currentVideoUrl;
|
| 776 |
+
document.getElementById('video-container').style.display = 'block';
|
| 777 |
+
document.getElementById('video-actions').style.display = 'flex';
|
| 778 |
+
|
| 779 |
+
// Show video info
|
| 780 |
+
const videoInfo = document.getElementById('video-info');
|
| 781 |
+
videoInfo.innerHTML = `
|
| 782 |
+
<strong>Model:</strong> ${data.model_name}<br>
|
| 783 |
+
<strong>Prompt:</strong> ${data.prompt}<br>
|
| 784 |
+
${data.enhanced_prompt !== data.prompt ? `<strong>Enhanced:</strong> ${data.enhanced_prompt}<br>` : ''}
|
| 785 |
+
<strong>Generated:</strong> ${new Date(data.timestamp).toLocaleString()}
|
| 786 |
+
`;
|
| 787 |
+
videoInfo.style.display = 'block';
|
| 788 |
+
|
| 789 |
+
showStatus('β
Video generated successfully!', 'success');
|
| 790 |
+
videoOutput.play().catch(() => {});
|
| 791 |
+
|
| 792 |
+
} catch (error) {
|
| 793 |
+
console.error('Error:', error);
|
| 794 |
+
showStatus('β Error: ' + error.message, 'error');
|
| 795 |
+
} finally {
|
| 796 |
+
generateBtn.disabled = false;
|
| 797 |
+
generateBtn.textContent = 'π¬ Generate Video';
|
| 798 |
+
document.getElementById('loader').style.display = 'none';
|
| 799 |
+
}
|
| 800 |
+
}
|
| 801 |
+
|
| 802 |
+
async function generateImageToVideo() {
|
| 803 |
+
// Image-to-video not supported in the Vercel serverless API yet
|
| 804 |
+
showStatus('β οΈ Image-to-Video is not available in this hosted version. Use Local mode for this feature.', 'error');
|
| 805 |
+
}
|
| 806 |
+
|
| 807 |
+
async function downloadVideo() {
|
| 808 |
+
if (!currentVideoUrl) return;
|
| 809 |
+
|
| 810 |
+
try {
|
| 811 |
+
showStatus('π₯ Preparing download...', 'info');
|
| 812 |
+
|
| 813 |
+
const response = await fetch(currentVideoUrl);
|
| 814 |
+
const blob = await response.blob();
|
| 815 |
+
const url = window.URL.createObjectURL(blob);
|
| 816 |
+
const a = document.createElement('a');
|
| 817 |
+
a.href = url;
|
| 818 |
+
a.download = `ai-video-${Date.now()}.mp4`;
|
| 819 |
+
document.body.appendChild(a);
|
| 820 |
+
a.click();
|
| 821 |
+
window.URL.revokeObjectURL(url);
|
| 822 |
+
document.body.removeChild(a);
|
| 823 |
+
|
| 824 |
+
showStatus('β
Download started!', 'success');
|
| 825 |
+
} catch (error) {
|
| 826 |
+
console.error('Download error:', error);
|
| 827 |
+
showStatus('β Failed to download video', 'error');
|
| 828 |
+
}
|
| 829 |
+
}
|
| 830 |
+
|
| 831 |
+
function shareVideo() {
|
| 832 |
+
if (!currentVideoUrl) return;
|
| 833 |
+
|
| 834 |
+
if (navigator.share) {
|
| 835 |
+
navigator.share({
|
| 836 |
+
title: 'AI Generated Video',
|
| 837 |
+
text: 'Check out this AI-generated video!',
|
| 838 |
+
url: currentVideoUrl
|
| 839 |
+
}).catch(err => console.log('Error sharing:', err));
|
| 840 |
+
} else {
|
| 841 |
+
// Fallback: copy to clipboard
|
| 842 |
+
navigator.clipboard.writeText(currentVideoUrl).then(() => {
|
| 843 |
+
showStatus('π Video URL copied to clipboard!', 'success');
|
| 844 |
+
});
|
| 845 |
+
}
|
| 846 |
+
}
|
| 847 |
+
|
| 848 |
+
async function checkServerHealth() {
|
| 849 |
+
try {
|
| 850 |
+
const response = await fetch('/api/health');
|
| 851 |
+
const data = await response.json();
|
| 852 |
+
if (data.status === 'healthy') {
|
| 853 |
+
console.log('β
API is healthy');
|
| 854 |
+
}
|
| 855 |
+
} catch (error) {
|
| 856 |
+
showStatus('β οΈ Cannot connect to API. If deploying on Vercel, ensure functions are enabled.', 'error');
|
| 857 |
+
}
|
| 858 |
+
}
|
| 859 |
+
</script>
|
| 860 |
+
</body>
|
| 861 |
+
</html>
|
index_local.html
ADDED
|
@@ -0,0 +1,567 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
<!DOCTYPE html>
|
| 2 |
+
<html lang="en">
|
| 3 |
+
<head>
|
| 4 |
+
<meta charset="UTF-8">
|
| 5 |
+
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
| 6 |
+
<title>Local AI Video Generator - CogVideoX</title>
|
| 7 |
+
<style>
|
| 8 |
+
* {
|
| 9 |
+
margin: 0;
|
| 10 |
+
padding: 0;
|
| 11 |
+
box-sizing: border-box;
|
| 12 |
+
}
|
| 13 |
+
|
| 14 |
+
body {
|
| 15 |
+
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
|
| 16 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 17 |
+
min-height: 100vh;
|
| 18 |
+
padding: 20px;
|
| 19 |
+
}
|
| 20 |
+
|
| 21 |
+
.container {
|
| 22 |
+
max-width: 900px;
|
| 23 |
+
margin: 0 auto;
|
| 24 |
+
background: white;
|
| 25 |
+
border-radius: 20px;
|
| 26 |
+
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
|
| 27 |
+
overflow: hidden;
|
| 28 |
+
}
|
| 29 |
+
|
| 30 |
+
.header {
|
| 31 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 32 |
+
color: white;
|
| 33 |
+
padding: 30px 40px;
|
| 34 |
+
text-align: center;
|
| 35 |
+
}
|
| 36 |
+
|
| 37 |
+
.header h1 {
|
| 38 |
+
font-size: 32px;
|
| 39 |
+
margin-bottom: 10px;
|
| 40 |
+
}
|
| 41 |
+
|
| 42 |
+
.header p {
|
| 43 |
+
font-size: 16px;
|
| 44 |
+
opacity: 0.9;
|
| 45 |
+
}
|
| 46 |
+
|
| 47 |
+
.system-info {
|
| 48 |
+
background: rgba(255, 255, 255, 0.1);
|
| 49 |
+
padding: 10px;
|
| 50 |
+
border-radius: 8px;
|
| 51 |
+
margin-top: 15px;
|
| 52 |
+
font-size: 13px;
|
| 53 |
+
}
|
| 54 |
+
|
| 55 |
+
.main-content {
|
| 56 |
+
padding: 40px;
|
| 57 |
+
}
|
| 58 |
+
|
| 59 |
+
.section {
|
| 60 |
+
background: #f8f9fa;
|
| 61 |
+
padding: 20px;
|
| 62 |
+
border-radius: 12px;
|
| 63 |
+
margin-bottom: 20px;
|
| 64 |
+
}
|
| 65 |
+
|
| 66 |
+
.section h3 {
|
| 67 |
+
color: #333;
|
| 68 |
+
margin-bottom: 15px;
|
| 69 |
+
font-size: 18px;
|
| 70 |
+
display: flex;
|
| 71 |
+
align-items: center;
|
| 72 |
+
gap: 8px;
|
| 73 |
+
}
|
| 74 |
+
|
| 75 |
+
label {
|
| 76 |
+
display: block;
|
| 77 |
+
color: #555;
|
| 78 |
+
margin-bottom: 8px;
|
| 79 |
+
font-weight: 500;
|
| 80 |
+
font-size: 14px;
|
| 81 |
+
}
|
| 82 |
+
|
| 83 |
+
textarea {
|
| 84 |
+
width: 100%;
|
| 85 |
+
padding: 12px;
|
| 86 |
+
border: 2px solid #e0e0e0;
|
| 87 |
+
border-radius: 8px;
|
| 88 |
+
font-size: 14px;
|
| 89 |
+
font-family: inherit;
|
| 90 |
+
transition: border-color 0.3s;
|
| 91 |
+
resize: vertical;
|
| 92 |
+
min-height: 100px;
|
| 93 |
+
}
|
| 94 |
+
|
| 95 |
+
textarea:focus {
|
| 96 |
+
outline: none;
|
| 97 |
+
border-color: #667eea;
|
| 98 |
+
}
|
| 99 |
+
|
| 100 |
+
.char-counter {
|
| 101 |
+
text-align: right;
|
| 102 |
+
font-size: 12px;
|
| 103 |
+
color: #999;
|
| 104 |
+
margin-top: 5px;
|
| 105 |
+
}
|
| 106 |
+
|
| 107 |
+
button {
|
| 108 |
+
padding: 14px 24px;
|
| 109 |
+
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
|
| 110 |
+
color: white;
|
| 111 |
+
border: none;
|
| 112 |
+
border-radius: 10px;
|
| 113 |
+
font-size: 16px;
|
| 114 |
+
font-weight: 600;
|
| 115 |
+
cursor: pointer;
|
| 116 |
+
transition: transform 0.2s, box-shadow 0.2s;
|
| 117 |
+
}
|
| 118 |
+
|
| 119 |
+
button:hover:not(:disabled) {
|
| 120 |
+
transform: translateY(-2px);
|
| 121 |
+
box-shadow: 0 5px 20px rgba(102, 126, 234, 0.4);
|
| 122 |
+
}
|
| 123 |
+
|
| 124 |
+
button:disabled {
|
| 125 |
+
opacity: 0.6;
|
| 126 |
+
cursor: not-allowed;
|
| 127 |
+
}
|
| 128 |
+
|
| 129 |
+
.generate-btn {
|
| 130 |
+
width: 100%;
|
| 131 |
+
padding: 16px;
|
| 132 |
+
font-size: 18px;
|
| 133 |
+
}
|
| 134 |
+
|
| 135 |
+
.loader {
|
| 136 |
+
border: 3px solid #f3f3f3;
|
| 137 |
+
border-top: 3px solid #667eea;
|
| 138 |
+
border-radius: 50%;
|
| 139 |
+
width: 40px;
|
| 140 |
+
height: 40px;
|
| 141 |
+
animation: spin 1s linear infinite;
|
| 142 |
+
margin: 20px auto;
|
| 143 |
+
display: none;
|
| 144 |
+
}
|
| 145 |
+
|
| 146 |
+
@keyframes spin {
|
| 147 |
+
0% { transform: rotate(0deg); }
|
| 148 |
+
100% { transform: rotate(360deg); }
|
| 149 |
+
}
|
| 150 |
+
|
| 151 |
+
.status {
|
| 152 |
+
padding: 12px;
|
| 153 |
+
border-radius: 8px;
|
| 154 |
+
text-align: center;
|
| 155 |
+
font-size: 14px;
|
| 156 |
+
display: none;
|
| 157 |
+
margin-top: 15px;
|
| 158 |
+
}
|
| 159 |
+
|
| 160 |
+
.status.info {
|
| 161 |
+
background: #e3f2fd;
|
| 162 |
+
color: #1976d2;
|
| 163 |
+
display: block;
|
| 164 |
+
}
|
| 165 |
+
|
| 166 |
+
.status.success {
|
| 167 |
+
background: #e8f5e9;
|
| 168 |
+
color: #388e3c;
|
| 169 |
+
display: block;
|
| 170 |
+
}
|
| 171 |
+
|
| 172 |
+
.status.error {
|
| 173 |
+
background: #ffebee;
|
| 174 |
+
color: #d32f2f;
|
| 175 |
+
display: block;
|
| 176 |
+
}
|
| 177 |
+
|
| 178 |
+
.status.warning {
|
| 179 |
+
background: #fff3e0;
|
| 180 |
+
color: #f57c00;
|
| 181 |
+
display: block;
|
| 182 |
+
}
|
| 183 |
+
|
| 184 |
+
.video-container {
|
| 185 |
+
background: #000;
|
| 186 |
+
border-radius: 12px;
|
| 187 |
+
overflow: hidden;
|
| 188 |
+
display: none;
|
| 189 |
+
margin-top: 20px;
|
| 190 |
+
}
|
| 191 |
+
|
| 192 |
+
video {
|
| 193 |
+
width: 100%;
|
| 194 |
+
display: block;
|
| 195 |
+
}
|
| 196 |
+
|
| 197 |
+
.video-info {
|
| 198 |
+
background: #f8f9fa;
|
| 199 |
+
padding: 15px;
|
| 200 |
+
margin-top: 10px;
|
| 201 |
+
border-radius: 8px;
|
| 202 |
+
font-size: 13px;
|
| 203 |
+
color: #666;
|
| 204 |
+
}
|
| 205 |
+
|
| 206 |
+
.video-actions {
|
| 207 |
+
display: flex;
|
| 208 |
+
gap: 10px;
|
| 209 |
+
margin-top: 10px;
|
| 210 |
+
}
|
| 211 |
+
|
| 212 |
+
.video-actions button {
|
| 213 |
+
flex: 1;
|
| 214 |
+
background: linear-gradient(135deg, #11998e 0%, #38ef7d 100%);
|
| 215 |
+
}
|
| 216 |
+
|
| 217 |
+
.example-prompts {
|
| 218 |
+
display: flex;
|
| 219 |
+
flex-wrap: wrap;
|
| 220 |
+
gap: 8px;
|
| 221 |
+
margin-top: 10px;
|
| 222 |
+
}
|
| 223 |
+
|
| 224 |
+
.example-prompt {
|
| 225 |
+
display: inline-block;
|
| 226 |
+
padding: 6px 12px;
|
| 227 |
+
background: white;
|
| 228 |
+
border: 1px solid #ddd;
|
| 229 |
+
border-radius: 20px;
|
| 230 |
+
font-size: 12px;
|
| 231 |
+
cursor: pointer;
|
| 232 |
+
transition: all 0.2s;
|
| 233 |
+
}
|
| 234 |
+
|
| 235 |
+
.example-prompt:hover {
|
| 236 |
+
background: #667eea;
|
| 237 |
+
color: white;
|
| 238 |
+
border-color: #667eea;
|
| 239 |
+
}
|
| 240 |
+
|
| 241 |
+
.init-section {
|
| 242 |
+
background: #fff3e0;
|
| 243 |
+
border: 2px solid #f57c00;
|
| 244 |
+
padding: 20px;
|
| 245 |
+
border-radius: 12px;
|
| 246 |
+
margin-bottom: 20px;
|
| 247 |
+
}
|
| 248 |
+
|
| 249 |
+
.init-section h3 {
|
| 250 |
+
color: #f57c00;
|
| 251 |
+
margin-bottom: 10px;
|
| 252 |
+
}
|
| 253 |
+
|
| 254 |
+
.init-btn {
|
| 255 |
+
background: linear-gradient(135deg, #f57c00 0%, #ff9800 100%);
|
| 256 |
+
margin-top: 10px;
|
| 257 |
+
}
|
| 258 |
+
|
| 259 |
+
@media (max-width: 768px) {
|
| 260 |
+
.main-content {
|
| 261 |
+
padding: 20px;
|
| 262 |
+
}
|
| 263 |
+
}
|
| 264 |
+
</style>
|
| 265 |
+
</head>
|
| 266 |
+
<body>
|
| 267 |
+
<div class="container">
|
| 268 |
+
<div class="header">
|
| 269 |
+
<h1>π¬ Local AI Video Generator</h1>
|
| 270 |
+
<p>Generate videos locally using CogVideoX-2B on your computer</p>
|
| 271 |
+
<div class="system-info" id="system-info">
|
| 272 |
+
<div>π Checking system status...</div>
|
| 273 |
+
</div>
|
| 274 |
+
</div>
|
| 275 |
+
|
| 276 |
+
<div class="main-content">
|
| 277 |
+
<!-- Model Initialization Section -->
|
| 278 |
+
<div class="init-section" id="init-section" style="display: none;">
|
| 279 |
+
<h3>βοΈ Model Not Loaded</h3>
|
| 280 |
+
<p style="font-size: 14px; color: #666; margin-bottom: 10px;">
|
| 281 |
+
The AI model needs to be loaded before generating videos. This is a one-time process that will:
|
| 282 |
+
</p>
|
| 283 |
+
<ul style="font-size: 13px; color: #666; margin-left: 20px; margin-bottom: 10px;">
|
| 284 |
+
<li>Download CogVideoX-2B model (~5GB) on first run</li>
|
| 285 |
+
<li>Load the model into memory (2-5 minutes)</li>
|
| 286 |
+
<li>Keep the model ready for fast generation</li>
|
| 287 |
+
</ul>
|
| 288 |
+
<button class="init-btn" onclick="initializeModel()">
|
| 289 |
+
π Initialize Model
|
| 290 |
+
</button>
|
| 291 |
+
</div>
|
| 292 |
+
|
| 293 |
+
<!-- Prompt Input Section -->
|
| 294 |
+
<div class="section">
|
| 295 |
+
<h3>π Enter Your Prompt</h3>
|
| 296 |
+
<label for="prompt">Describe the video you want to create:</label>
|
| 297 |
+
<textarea
|
| 298 |
+
id="prompt"
|
| 299 |
+
rows="4"
|
| 300 |
+
placeholder="Example: A golden retriever running through a field of flowers at sunset, cinematic lighting"
|
| 301 |
+
maxlength="1000"
|
| 302 |
+
></textarea>
|
| 303 |
+
<div class="char-counter">
|
| 304 |
+
<span id="char-count">0</span>/1000 characters
|
| 305 |
+
</div>
|
| 306 |
+
|
| 307 |
+
<div style="margin-top: 15px;">
|
| 308 |
+
<label>π‘ Example Prompts:</label>
|
| 309 |
+
<div class="example-prompts">
|
| 310 |
+
<span class="example-prompt" onclick="setPrompt('A cat playing with a ball of yarn, close-up shot')">π± Cat playing</span>
|
| 311 |
+
<span class="example-prompt" onclick="setPrompt('Ocean waves crashing on a beach at sunset, aerial view')">π Ocean waves</span>
|
| 312 |
+
<span class="example-prompt" onclick="setPrompt('A bird flying through clouds, slow motion')">π¦
Bird flying</span>
|
| 313 |
+
<span class="example-prompt" onclick="setPrompt('City street with cars at night, neon lights')">π City night</span>
|
| 314 |
+
<span class="example-prompt" onclick="setPrompt('Flowers blooming in a garden, time-lapse')">πΈ Flowers blooming</span>
|
| 315 |
+
</div>
|
| 316 |
+
</div>
|
| 317 |
+
</div>
|
| 318 |
+
|
| 319 |
+
<!-- Generate Button -->
|
| 320 |
+
<button class="generate-btn" id="generate-btn" onclick="generateVideo()">
|
| 321 |
+
π¬ Generate Video
|
| 322 |
+
</button>
|
| 323 |
+
|
| 324 |
+
<!-- Status and Loader -->
|
| 325 |
+
<div class="loader" id="loader"></div>
|
| 326 |
+
<div class="status" id="status"></div>
|
| 327 |
+
|
| 328 |
+
<!-- Video Output Section -->
|
| 329 |
+
<div class="video-container" id="video-container">
|
| 330 |
+
<video id="video-output" controls></video>
|
| 331 |
+
</div>
|
| 332 |
+
|
| 333 |
+
<div id="video-info" class="video-info" style="display: none;"></div>
|
| 334 |
+
|
| 335 |
+
<div class="video-actions" id="video-actions" style="display: none;">
|
| 336 |
+
<button onclick="downloadVideo()">π₯ Download</button>
|
| 337 |
+
<button onclick="shareVideo()">π Copy Link</button>
|
| 338 |
+
</div>
|
| 339 |
+
</div>
|
| 340 |
+
</div>
|
| 341 |
+
|
| 342 |
+
<script>
|
| 343 |
+
// Global state
|
| 344 |
+
let currentVideoUrl = null;
|
| 345 |
+
let modelLoaded = false;
|
| 346 |
+
|
| 347 |
+
// Initialize app
|
| 348 |
+
window.addEventListener('load', async () => {
|
| 349 |
+
await checkServerHealth();
|
| 350 |
+
setupEventListeners();
|
| 351 |
+
});
|
| 352 |
+
|
| 353 |
+
function setupEventListeners() {
|
| 354 |
+
const promptInput = document.getElementById('prompt');
|
| 355 |
+
const charCount = document.getElementById('char-count');
|
| 356 |
+
|
| 357 |
+
promptInput.addEventListener('input', () => {
|
| 358 |
+
const length = promptInput.value.length;
|
| 359 |
+
charCount.textContent = length;
|
| 360 |
+
charCount.style.color = length > 900 ? '#d32f2f' : '#999';
|
| 361 |
+
});
|
| 362 |
+
|
| 363 |
+
promptInput.addEventListener('keydown', (e) => {
|
| 364 |
+
if (e.ctrlKey && e.key === 'Enter') {
|
| 365 |
+
generateVideo();
|
| 366 |
+
}
|
| 367 |
+
});
|
| 368 |
+
}
|
| 369 |
+
|
| 370 |
+
async function checkServerHealth() {
|
| 371 |
+
try {
|
| 372 |
+
const response = await fetch('http://localhost:5000/health');
|
| 373 |
+
const data = await response.json();
|
| 374 |
+
|
| 375 |
+
if (data.status === 'healthy') {
|
| 376 |
+
modelLoaded = data.model_loaded;
|
| 377 |
+
updateSystemInfo(data);
|
| 378 |
+
|
| 379 |
+
if (!modelLoaded) {
|
| 380 |
+
document.getElementById('init-section').style.display = 'block';
|
| 381 |
+
document.getElementById('generate-btn').disabled = true;
|
| 382 |
+
}
|
| 383 |
+
}
|
| 384 |
+
} catch (error) {
|
| 385 |
+
showStatus('β οΈ Cannot connect to server. Make sure backend_local.py is running on port 5000', 'error');
|
| 386 |
+
document.getElementById('system-info').innerHTML = '<div>β Server offline</div>';
|
| 387 |
+
}
|
| 388 |
+
}
|
| 389 |
+
|
| 390 |
+
function updateSystemInfo(data) {
|
| 391 |
+
const gpu = data.gpu_available ? 'β
GPU' : 'β οΈ CPU';
|
| 392 |
+
const model = data.model_loaded ? 'β
Model Loaded' : 'β³ Model Not Loaded';
|
| 393 |
+
const device = data.device || 'unknown';
|
| 394 |
+
|
| 395 |
+
document.getElementById('system-info').innerHTML = `
|
| 396 |
+
<div style="display: flex; justify-content: center; gap: 20px; flex-wrap: wrap;">
|
| 397 |
+
<span>${gpu}</span>
|
| 398 |
+
<span>${model}</span>
|
| 399 |
+
<span>Device: ${device}</span>
|
| 400 |
+
</div>
|
| 401 |
+
`;
|
| 402 |
+
}
|
| 403 |
+
|
| 404 |
+
async function initializeModel() {
|
| 405 |
+
const initBtn = event.target;
|
| 406 |
+
initBtn.disabled = true;
|
| 407 |
+
initBtn.textContent = 'β³ Loading Model...';
|
| 408 |
+
showStatus('π€ Initializing model... This may take 2-5 minutes. Please wait...', 'info');
|
| 409 |
+
|
| 410 |
+
try {
|
| 411 |
+
const response = await fetch('http://localhost:5000/initialize', {
|
| 412 |
+
method: 'POST'
|
| 413 |
+
});
|
| 414 |
+
|
| 415 |
+
const data = await response.json();
|
| 416 |
+
|
| 417 |
+
if (response.ok && data.status === 'success') {
|
| 418 |
+
modelLoaded = true;
|
| 419 |
+
document.getElementById('init-section').style.display = 'none';
|
| 420 |
+
document.getElementById('generate-btn').disabled = false;
|
| 421 |
+
showStatus('β
Model loaded successfully! You can now generate videos.', 'success');
|
| 422 |
+
await checkServerHealth();
|
| 423 |
+
} else {
|
| 424 |
+
throw new Error(data.message || 'Failed to load model');
|
| 425 |
+
}
|
| 426 |
+
} catch (error) {
|
| 427 |
+
console.error('Error:', error);
|
| 428 |
+
showStatus('β Failed to load model: ' + error.message, 'error');
|
| 429 |
+
initBtn.disabled = false;
|
| 430 |
+
initBtn.textContent = 'π Initialize Model';
|
| 431 |
+
}
|
| 432 |
+
}
|
| 433 |
+
|
| 434 |
+
function setPrompt(text) {
|
| 435 |
+
document.getElementById('prompt').value = text;
|
| 436 |
+
document.getElementById('prompt').dispatchEvent(new Event('input'));
|
| 437 |
+
}
|
| 438 |
+
|
| 439 |
+
function showStatus(message, type) {
|
| 440 |
+
const status = document.getElementById('status');
|
| 441 |
+
status.textContent = message;
|
| 442 |
+
status.className = 'status ' + type;
|
| 443 |
+
}
|
| 444 |
+
|
| 445 |
+
async function generateVideo() {
|
| 446 |
+
const prompt = document.getElementById('prompt').value.trim();
|
| 447 |
+
|
| 448 |
+
if (!prompt) {
|
| 449 |
+
showStatus('Please enter a prompt', 'error');
|
| 450 |
+
return;
|
| 451 |
+
}
|
| 452 |
+
|
| 453 |
+
if (prompt.length < 3) {
|
| 454 |
+
showStatus('Prompt must be at least 3 characters long', 'error');
|
| 455 |
+
return;
|
| 456 |
+
}
|
| 457 |
+
|
| 458 |
+
if (!modelLoaded) {
|
| 459 |
+
showStatus('Please initialize the model first', 'warning');
|
| 460 |
+
return;
|
| 461 |
+
}
|
| 462 |
+
|
| 463 |
+
// UI updates
|
| 464 |
+
const generateBtn = document.getElementById('generate-btn');
|
| 465 |
+
generateBtn.disabled = true;
|
| 466 |
+
generateBtn.textContent = 'π¬ Generating...';
|
| 467 |
+
document.getElementById('loader').style.display = 'block';
|
| 468 |
+
document.getElementById('video-container').style.display = 'none';
|
| 469 |
+
document.getElementById('video-actions').style.display = 'none';
|
| 470 |
+
document.getElementById('video-info').style.display = 'none';
|
| 471 |
+
showStatus('π¨ Generating video... This will take 30-120 seconds depending on your hardware', 'info');
|
| 472 |
+
|
| 473 |
+
try {
|
| 474 |
+
const response = await fetch('http://localhost:5000/generate-video', {
|
| 475 |
+
method: 'POST',
|
| 476 |
+
headers: { 'Content-Type': 'application/json' },
|
| 477 |
+
body: JSON.stringify({ prompt })
|
| 478 |
+
});
|
| 479 |
+
|
| 480 |
+
const data = await response.json();
|
| 481 |
+
|
| 482 |
+
if (!response.ok || data.error) {
|
| 483 |
+
throw new Error(data.error || 'Failed to generate video');
|
| 484 |
+
}
|
| 485 |
+
|
| 486 |
+
// Success
|
| 487 |
+
currentVideoUrl = data.video_url;
|
| 488 |
+
const videoOutput = document.getElementById('video-output');
|
| 489 |
+
|
| 490 |
+
// Handle local file path
|
| 491 |
+
if (currentVideoUrl.startsWith('/download/')) {
|
| 492 |
+
videoOutput.src = 'http://localhost:5000' + currentVideoUrl;
|
| 493 |
+
} else {
|
| 494 |
+
videoOutput.src = currentVideoUrl;
|
| 495 |
+
}
|
| 496 |
+
|
| 497 |
+
document.getElementById('video-container').style.display = 'block';
|
| 498 |
+
document.getElementById('video-actions').style.display = 'flex';
|
| 499 |
+
|
| 500 |
+
// Show video info
|
| 501 |
+
const videoInfo = document.getElementById('video-info');
|
| 502 |
+
videoInfo.innerHTML = `
|
| 503 |
+
<strong>Model:</strong> ${data.model_name || 'CogVideoX-2B'}<br>
|
| 504 |
+
<strong>Device:</strong> ${data.device || 'unknown'}<br>
|
| 505 |
+
<strong>Frames:</strong> ${data.num_frames || 49} (~6 seconds)<br>
|
| 506 |
+
<strong>Prompt:</strong> ${data.prompt}<br>
|
| 507 |
+
<strong>Generated:</strong> ${new Date(data.timestamp).toLocaleString()}
|
| 508 |
+
`;
|
| 509 |
+
videoInfo.style.display = 'block';
|
| 510 |
+
|
| 511 |
+
showStatus('β
Video generated successfully!', 'success');
|
| 512 |
+
videoOutput.play().catch(() => {});
|
| 513 |
+
|
| 514 |
+
} catch (error) {
|
| 515 |
+
console.error('Error:', error);
|
| 516 |
+
showStatus('β Error: ' + error.message, 'error');
|
| 517 |
+
} finally {
|
| 518 |
+
generateBtn.disabled = false;
|
| 519 |
+
generateBtn.textContent = 'π¬ Generate Video';
|
| 520 |
+
document.getElementById('loader').style.display = 'none';
|
| 521 |
+
}
|
| 522 |
+
}
|
| 523 |
+
|
| 524 |
+
async function downloadVideo() {
|
| 525 |
+
if (!currentVideoUrl) return;
|
| 526 |
+
|
| 527 |
+
try {
|
| 528 |
+
showStatus('π₯ Preparing download...', 'info');
|
| 529 |
+
|
| 530 |
+
const videoUrl = currentVideoUrl.startsWith('/download/')
|
| 531 |
+
? 'http://localhost:5000' + currentVideoUrl
|
| 532 |
+
: currentVideoUrl;
|
| 533 |
+
|
| 534 |
+
const response = await fetch(videoUrl);
|
| 535 |
+
const blob = await response.blob();
|
| 536 |
+
const url = window.URL.createObjectURL(blob);
|
| 537 |
+
const a = document.createElement('a');
|
| 538 |
+
a.href = url;
|
| 539 |
+
a.download = `ai-video-local-${Date.now()}.mp4`;
|
| 540 |
+
document.body.appendChild(a);
|
| 541 |
+
a.click();
|
| 542 |
+
window.URL.revokeObjectURL(url);
|
| 543 |
+
document.body.removeChild(a);
|
| 544 |
+
|
| 545 |
+
showStatus('β
Download started!', 'success');
|
| 546 |
+
} catch (error) {
|
| 547 |
+
console.error('Download error:', error);
|
| 548 |
+
showStatus('β Failed to download video', 'error');
|
| 549 |
+
}
|
| 550 |
+
}
|
| 551 |
+
|
| 552 |
+
function shareVideo() {
|
| 553 |
+
if (!currentVideoUrl) return;
|
| 554 |
+
|
| 555 |
+
const videoUrl = currentVideoUrl.startsWith('/download/')
|
| 556 |
+
? 'http://localhost:5000' + currentVideoUrl
|
| 557 |
+
: currentVideoUrl;
|
| 558 |
+
|
| 559 |
+
navigator.clipboard.writeText(videoUrl).then(() => {
|
| 560 |
+
showStatus('π Video URL copied to clipboard!', 'success');
|
| 561 |
+
}).catch(() => {
|
| 562 |
+
showStatus('β Failed to copy URL', 'error');
|
| 563 |
+
});
|
| 564 |
+
}
|
| 565 |
+
</script>
|
| 566 |
+
</body>
|
| 567 |
+
</html>
|
models_config.py
ADDED
|
@@ -0,0 +1,178 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
Configuration for multiple video generation models
|
| 3 |
+
Based on trending Hugging Face spaces and Hailuo-inspired features
|
| 4 |
+
"""
|
| 5 |
+
|
| 6 |
+
VIDEO_MODELS = {
|
| 7 |
+
"cogvideox-5b": {
|
| 8 |
+
"name": "CogVideoX-5B (THUDM)",
|
| 9 |
+
"space_url": "THUDM/CogVideoX-5B-Space",
|
| 10 |
+
"description": "High-quality text-to-video generation (6 seconds, 720p)",
|
| 11 |
+
"type": "text-to-video",
|
| 12 |
+
"features": ["high_quality", "longer_videos"],
|
| 13 |
+
"max_frames": 49,
|
| 14 |
+
"resolution": (720, 480),
|
| 15 |
+
"api_name": "/infer",
|
| 16 |
+
"params": {
|
| 17 |
+
"num_inference_steps": 50,
|
| 18 |
+
"guidance_scale": 6.0,
|
| 19 |
+
}
|
| 20 |
+
},
|
| 21 |
+
"cogvideox-2b": {
|
| 22 |
+
"name": "CogVideoX-2B (Faster)",
|
| 23 |
+
"space_url": "THUDM/CogVideoX-2B-Space",
|
| 24 |
+
"description": "Faster version of CogVideoX with good quality",
|
| 25 |
+
"type": "text-to-video",
|
| 26 |
+
"features": ["fast", "good_quality"],
|
| 27 |
+
"max_frames": 49,
|
| 28 |
+
"resolution": (720, 480),
|
| 29 |
+
"api_name": "/infer",
|
| 30 |
+
"params": {
|
| 31 |
+
"num_inference_steps": 30,
|
| 32 |
+
"guidance_scale": 6.0,
|
| 33 |
+
}
|
| 34 |
+
},
|
| 35 |
+
"hunyuan-video": {
|
| 36 |
+
"name": "HunyuanVideo (Tencent)",
|
| 37 |
+
"space_url": "tencent/HunyuanVideo",
|
| 38 |
+
"description": "State-of-the-art video generation by Tencent (may be slow/unavailable)",
|
| 39 |
+
"type": "text-to-video",
|
| 40 |
+
"features": ["sota", "high_quality"],
|
| 41 |
+
"max_frames": 129,
|
| 42 |
+
"resolution": (1280, 720),
|
| 43 |
+
"api_name": "/generate",
|
| 44 |
+
"params": {
|
| 45 |
+
"num_inference_steps": 50,
|
| 46 |
+
}
|
| 47 |
+
},
|
| 48 |
+
"stable-video-diffusion": {
|
| 49 |
+
"name": "Stable Video Diffusion",
|
| 50 |
+
"space_url": "multimodalart/stable-video-diffusion",
|
| 51 |
+
"description": "Image-to-video animation (14-25 frames)",
|
| 52 |
+
"type": "image-to-video",
|
| 53 |
+
"features": ["image_animation", "stable"],
|
| 54 |
+
"max_frames": 25,
|
| 55 |
+
"resolution": (576, 576),
|
| 56 |
+
"api_name": "/generate_video",
|
| 57 |
+
"params": {
|
| 58 |
+
"num_frames": 14,
|
| 59 |
+
"fps": 7,
|
| 60 |
+
"motion_bucket_id": 127,
|
| 61 |
+
}
|
| 62 |
+
},
|
| 63 |
+
"demo": {
|
| 64 |
+
"name": "Demo Mode (Test Video)",
|
| 65 |
+
"space_url": "demo",
|
| 66 |
+
"description": "Demo mode - returns sample video for testing UI",
|
| 67 |
+
"type": "text-to-video",
|
| 68 |
+
"features": ["demo", "instant"],
|
| 69 |
+
"max_frames": 0,
|
| 70 |
+
"resolution": (1920, 1080),
|
| 71 |
+
"api_name": "/test",
|
| 72 |
+
"params": {}
|
| 73 |
+
}
|
| 74 |
+
}
|
| 75 |
+
|
| 76 |
+
# Camera movements inspired by Hailuo Director model
|
| 77 |
+
CAMERA_MOVEMENTS = [
|
| 78 |
+
{"name": "Static", "tag": "", "description": "No camera movement"},
|
| 79 |
+
{"name": "Zoom In", "tag": "[Zoom in]", "description": "Camera moves closer to subject"},
|
| 80 |
+
{"name": "Zoom Out", "tag": "[Zoom out]", "description": "Camera moves away from subject"},
|
| 81 |
+
{"name": "Pan Left", "tag": "[Pan left]", "description": "Camera pans to the left"},
|
| 82 |
+
{"name": "Pan Right", "tag": "[Pan right]", "description": "Camera pans to the right"},
|
| 83 |
+
{"name": "Tilt Up", "tag": "[Tilt up]", "description": "Camera tilts upward"},
|
| 84 |
+
{"name": "Tilt Down", "tag": "[Tilt down]", "description": "Camera tilts downward"},
|
| 85 |
+
{"name": "Tracking Shot", "tag": "[Tracking shot]", "description": "Camera follows subject"},
|
| 86 |
+
{"name": "Dolly In", "tag": "[Dolly in]", "description": "Smooth forward movement"},
|
| 87 |
+
{"name": "Dolly Out", "tag": "[Dolly out]", "description": "Smooth backward movement"},
|
| 88 |
+
{"name": "Crane Shot", "tag": "[Crane shot]", "description": "Vertical camera movement"},
|
| 89 |
+
{"name": "Shake", "tag": "[Shake]", "description": "Handheld camera effect"},
|
| 90 |
+
]
|
| 91 |
+
|
| 92 |
+
# Visual effects inspired by Hailuo
|
| 93 |
+
VISUAL_EFFECTS = [
|
| 94 |
+
{"name": "None", "tag": "", "description": "No special effects"},
|
| 95 |
+
{"name": "Cinematic", "tag": "cinematic lighting, film grain", "description": "Movie-like quality"},
|
| 96 |
+
{"name": "Dramatic", "tag": "dramatic lighting, high contrast", "description": "Strong shadows and highlights"},
|
| 97 |
+
{"name": "Soft", "tag": "soft lighting, gentle glow", "description": "Soft, diffused light"},
|
| 98 |
+
{"name": "Golden Hour", "tag": "golden hour, warm sunset lighting", "description": "Warm, natural light"},
|
| 99 |
+
{"name": "Foggy", "tag": "fog, misty atmosphere", "description": "Atmospheric fog effect"},
|
| 100 |
+
{"name": "Rainy", "tag": "rain, wet surfaces, water droplets", "description": "Rain and wet environment"},
|
| 101 |
+
{"name": "Slow Motion", "tag": "slow motion, high fps", "description": "Slow-motion effect"},
|
| 102 |
+
]
|
| 103 |
+
|
| 104 |
+
# Video styles
|
| 105 |
+
VIDEO_STYLES = [
|
| 106 |
+
{"name": "Realistic", "tag": "photorealistic, 4k, high detail", "description": "Photorealistic style"},
|
| 107 |
+
{"name": "Cinematic", "tag": "cinematic, movie scene, professional", "description": "Hollywood movie style"},
|
| 108 |
+
{"name": "Anime", "tag": "anime style, animated", "description": "Japanese animation style"},
|
| 109 |
+
{"name": "Cartoon", "tag": "cartoon style, animated", "description": "Western cartoon style"},
|
| 110 |
+
{"name": "3D Render", "tag": "3D render, CGI, Pixar style", "description": "3D animated style"},
|
| 111 |
+
{"name": "Vintage", "tag": "vintage film, retro, old footage", "description": "Old film aesthetic"},
|
| 112 |
+
{"name": "Sci-Fi", "tag": "sci-fi, futuristic, cyberpunk", "description": "Science fiction style"},
|
| 113 |
+
{"name": "Fantasy", "tag": "fantasy, magical, ethereal", "description": "Fantasy world style"},
|
| 114 |
+
]
|
| 115 |
+
|
| 116 |
+
# Example prompts categorized by use case
|
| 117 |
+
EXAMPLE_PROMPTS = {
|
| 118 |
+
"Nature": [
|
| 119 |
+
"A majestic waterfall cascading down mossy rocks in a lush rainforest",
|
| 120 |
+
"Ocean waves crashing on a rocky shore at sunset with seagulls flying",
|
| 121 |
+
"A field of sunflowers swaying in the breeze under a blue sky",
|
| 122 |
+
"Northern lights dancing across the Arctic sky over snowy mountains",
|
| 123 |
+
],
|
| 124 |
+
"Animals": [
|
| 125 |
+
"A golden retriever running through a field of flowers at sunset",
|
| 126 |
+
"A majestic eagle soaring through clouds above mountain peaks",
|
| 127 |
+
"A playful dolphin jumping out of crystal clear ocean water",
|
| 128 |
+
"A red fox walking through a snowy forest in winter",
|
| 129 |
+
],
|
| 130 |
+
"Urban": [
|
| 131 |
+
"City street with cars and pedestrians at night, neon lights reflecting on wet pavement",
|
| 132 |
+
"Time-lapse of clouds moving over modern skyscrapers in downtown",
|
| 133 |
+
"A busy coffee shop with people working on laptops, warm lighting",
|
| 134 |
+
"Subway train arriving at platform with commuters waiting",
|
| 135 |
+
],
|
| 136 |
+
"Fantasy": [
|
| 137 |
+
"A magical portal opening in an ancient forest with glowing particles",
|
| 138 |
+
"A dragon flying over a medieval castle at dawn",
|
| 139 |
+
"Floating islands in the sky connected by glowing bridges",
|
| 140 |
+
"A wizard casting a spell with colorful magical energy swirling",
|
| 141 |
+
],
|
| 142 |
+
"Action": [
|
| 143 |
+
"A sports car drifting around a corner on a race track",
|
| 144 |
+
"A skateboarder performing tricks in an urban skate park",
|
| 145 |
+
"A surfer riding a massive wave in slow motion",
|
| 146 |
+
"A basketball player making a slam dunk in an arena",
|
| 147 |
+
]
|
| 148 |
+
}
|
| 149 |
+
|
| 150 |
+
def get_model_info(model_id):
|
| 151 |
+
"""Get information about a specific model"""
|
| 152 |
+
return VIDEO_MODELS.get(model_id, VIDEO_MODELS["cogvideox-5b"])
|
| 153 |
+
|
| 154 |
+
def get_available_models():
|
| 155 |
+
"""Get list of available models"""
|
| 156 |
+
return {k: {"name": v["name"], "description": v["description"], "type": v["type"]}
|
| 157 |
+
for k, v in VIDEO_MODELS.items()}
|
| 158 |
+
|
| 159 |
+
def build_enhanced_prompt(base_prompt, camera_movement="", visual_effect="", style=""):
|
| 160 |
+
"""Build an enhanced prompt with camera movements and effects (Hailuo-style)"""
|
| 161 |
+
prompt_parts = []
|
| 162 |
+
|
| 163 |
+
# Add style prefix if specified
|
| 164 |
+
if style:
|
| 165 |
+
prompt_parts.append(style)
|
| 166 |
+
|
| 167 |
+
# Add base prompt
|
| 168 |
+
prompt_parts.append(base_prompt)
|
| 169 |
+
|
| 170 |
+
# Add camera movement
|
| 171 |
+
if camera_movement:
|
| 172 |
+
prompt_parts.append(camera_movement)
|
| 173 |
+
|
| 174 |
+
# Add visual effects
|
| 175 |
+
if visual_effect:
|
| 176 |
+
prompt_parts.append(visual_effect)
|
| 177 |
+
|
| 178 |
+
return ", ".join(filter(None, prompt_parts))
|
package.json
ADDED
|
@@ -0,0 +1,10 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"name": "ai-video-vercel",
|
| 3 |
+
"version": "1.0.0",
|
| 4 |
+
"private": true,
|
| 5 |
+
"type": "module",
|
| 6 |
+
"license": "MIT",
|
| 7 |
+
"dependencies": {
|
| 8 |
+
"replicate": "^1.0.7"
|
| 9 |
+
}
|
| 10 |
+
}
|
requirements.txt
ADDED
|
@@ -0,0 +1,6 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
Flask==3.0.0
|
| 2 |
+
Flask-CORS==4.0.0
|
| 3 |
+
gradio-client==0.7.1
|
| 4 |
+
python-dotenv==1.0.0
|
| 5 |
+
requests==2.31.0
|
| 6 |
+
Pillow>=10.0.0
|
requirements_local.txt
ADDED
|
@@ -0,0 +1,11 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Requirements for Local Video Generation (backend_local.py)
|
| 2 |
+
Flask==3.0.0
|
| 3 |
+
Flask-CORS==4.0.0
|
| 4 |
+
torch>=2.0.0
|
| 5 |
+
diffusers>=0.30.0
|
| 6 |
+
transformers>=4.44.0
|
| 7 |
+
accelerate>=0.25.0
|
| 8 |
+
sentencepiece>=0.1.99
|
| 9 |
+
protobuf>=3.20.0
|
| 10 |
+
imageio>=2.33.0
|
| 11 |
+
imageio-ffmpeg>=0.4.9
|
start_local.bat
ADDED
|
@@ -0,0 +1,49 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
@echo off
|
| 2 |
+
REM Local AI Video Generator Startup Script for Windows
|
| 3 |
+
|
| 4 |
+
echo π¬ Starting Local AI Video Generator...
|
| 5 |
+
echo ========================================
|
| 6 |
+
echo.
|
| 7 |
+
|
| 8 |
+
REM Check if Python is installed
|
| 9 |
+
python --version >nul 2>&1
|
| 10 |
+
if errorlevel 1 (
|
| 11 |
+
echo β Python is not installed. Please install Python 3.9 or higher.
|
| 12 |
+
pause
|
| 13 |
+
exit /b 1
|
| 14 |
+
)
|
| 15 |
+
|
| 16 |
+
REM Check if virtual environment exists
|
| 17 |
+
if not exist "venv" (
|
| 18 |
+
echo π¦ Creating virtual environment...
|
| 19 |
+
python -m venv venv
|
| 20 |
+
)
|
| 21 |
+
|
| 22 |
+
REM Activate virtual environment
|
| 23 |
+
echo π§ Activating virtual environment...
|
| 24 |
+
call venv\Scripts\activate.bat
|
| 25 |
+
|
| 26 |
+
REM Check if requirements are installed
|
| 27 |
+
if not exist "venv\.requirements_installed" (
|
| 28 |
+
echo π₯ Installing dependencies (this may take a few minutes)...
|
| 29 |
+
echo.
|
| 30 |
+
echo β οΈ Note: If you have an NVIDIA GPU, install PyTorch with CUDA first:
|
| 31 |
+
echo pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
|
| 32 |
+
echo.
|
| 33 |
+
pause
|
| 34 |
+
|
| 35 |
+
pip install -r requirements_local.txt
|
| 36 |
+
type nul > venv\.requirements_installed
|
| 37 |
+
echo β
Dependencies installed!
|
| 38 |
+
)
|
| 39 |
+
|
| 40 |
+
REM Start the backend server
|
| 41 |
+
echo.
|
| 42 |
+
echo π Starting backend server on http://localhost:5000
|
| 43 |
+
echo π Open index_local.html in your browser to use the app
|
| 44 |
+
echo.
|
| 45 |
+
echo Press Ctrl+C to stop the server
|
| 46 |
+
echo ========================================
|
| 47 |
+
echo.
|
| 48 |
+
|
| 49 |
+
python backend_local.py
|
start_local.sh
ADDED
|
@@ -0,0 +1,48 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/bin/bash
|
| 2 |
+
|
| 3 |
+
# Local AI Video Generator Startup Script
|
| 4 |
+
|
| 5 |
+
echo "π¬ Starting Local AI Video Generator..."
|
| 6 |
+
echo "========================================"
|
| 7 |
+
echo ""
|
| 8 |
+
|
| 9 |
+
# Check if Python is installed
|
| 10 |
+
if ! command -v python3 &> /dev/null; then
|
| 11 |
+
echo "β Python 3 is not installed. Please install Python 3.9 or higher."
|
| 12 |
+
exit 1
|
| 13 |
+
fi
|
| 14 |
+
|
| 15 |
+
# Check if virtual environment exists
|
| 16 |
+
if [ ! -d "venv" ]; then
|
| 17 |
+
echo "π¦ Creating virtual environment..."
|
| 18 |
+
python3 -m venv venv
|
| 19 |
+
fi
|
| 20 |
+
|
| 21 |
+
# Activate virtual environment
|
| 22 |
+
echo "π§ Activating virtual environment..."
|
| 23 |
+
source venv/bin/activate
|
| 24 |
+
|
| 25 |
+
# Check if requirements are installed
|
| 26 |
+
if [ ! -f "venv/.requirements_installed" ]; then
|
| 27 |
+
echo "π₯ Installing dependencies (this may take a few minutes)..."
|
| 28 |
+
echo ""
|
| 29 |
+
echo "β οΈ Note: If you have an NVIDIA GPU, install PyTorch with CUDA first:"
|
| 30 |
+
echo " pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118"
|
| 31 |
+
echo ""
|
| 32 |
+
read -p "Press Enter to continue with installation..."
|
| 33 |
+
|
| 34 |
+
pip install -r requirements_local.txt
|
| 35 |
+
touch venv/.requirements_installed
|
| 36 |
+
echo "β
Dependencies installed!"
|
| 37 |
+
fi
|
| 38 |
+
|
| 39 |
+
# Start the backend server
|
| 40 |
+
echo ""
|
| 41 |
+
echo "π Starting backend server on http://localhost:5000"
|
| 42 |
+
echo "π Open index_local.html in your browser to use the app"
|
| 43 |
+
echo ""
|
| 44 |
+
echo "Press Ctrl+C to stop the server"
|
| 45 |
+
echo "========================================"
|
| 46 |
+
echo ""
|
| 47 |
+
|
| 48 |
+
python backend_local.py
|