# AI Video Generator - Free Hailuo Clone
A free, open-source AI-powered video generation application that creates videos from text prompts using the Zeroscope model hosted on Hugging Face.
## Features
- **Text-to-Video Generation**: Create videos from simple text descriptions
- **Modern UI**: Responsive interface with a gradient design
- **Video Download**: Download generated videos directly to your device
- **Real-time Status**: Live feedback with loading indicators and status messages
- **Input Validation**: Robust validation and error handling
- **Health Monitoring**: Server health check endpoint
- **Example Prompts**: Quick-start with pre-made prompt examples
- **Character Counter**: Track your prompt length in real time
## Quick Start
### Prerequisites
- Python 3.8 or higher
- pip (Python package manager)
- Modern web browser (Chrome, Firefox, Safari, or Edge)
### Installation
1. **Clone or navigate to the project directory**:
```bash
cd path/to/hailuo-clone
```
2. **Create and activate a virtual environment** (recommended):
```bash
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
3. **Install dependencies**:
```bash
pip install -r requirements.txt
```
4. **Set up environment variables**:
```bash
cp .env.example .env
```
Edit the `.env` file if you need to customize settings:
```env
HF_SPACE_URL=https://cerspense-zeroscope-v2-xl.hf.space/
FLASK_PORT=5000
FLASK_DEBUG=False
```
### Running the Application
1. **Start the backend server**:
```bash
python backend.py
```
You should see:
```
INFO - Starting Flask server on port 5000 (debug=False)
INFO - Successfully connected to Hugging Face Space
```
2. **Open the frontend**:
- Simply open `index.html` in your web browser
- Or use a local server:
```bash
python -m http.server 8000
```
Then visit: `http://localhost:8000`
3. **Generate your first video**:
- Enter a text prompt (e.g., "A dog running in a park")
- Click "Generate Video"
- Wait 10-60 seconds for the AI to create your video
- Watch and download your generated video!
## Project Structure
```
hailuo-clone/
├── backend.py         # Flask backend server
├── index.html         # Frontend UI
├── requirements.txt   # Python dependencies
├── .env.example       # Environment variables template
├── .gitignore         # Git ignore rules
├── README.md          # This file
└── app.log            # Application logs (created at runtime)
```
## Configuration
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `HF_SPACE_URL` | `https://cerspense-zeroscope-v2-xl.hf.space/` | Hugging Face Space URL |
| `FLASK_PORT` | `5000` | Backend server port |
| `FLASK_DEBUG` | `False` | Enable/disable debug mode |
### Prompt Limits
- **Minimum length**: 3 characters
- **Maximum length**: 500 characters
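A validation routine enforcing these limits might look like the following sketch (the actual checks live in `backend.py` and may differ in detail):

```python
MIN_PROMPT_LEN = 3
MAX_PROMPT_LEN = 500

def validate_prompt(prompt):
    """Return the cleaned prompt, or raise ValueError explaining why it is invalid."""
    if not isinstance(prompt, str):
        raise ValueError("prompt must be a string")
    cleaned = prompt.strip()
    if len(cleaned) < MIN_PROMPT_LEN:
        raise ValueError(f"prompt must be at least {MIN_PROMPT_LEN} characters")
    if len(cleaned) > MAX_PROMPT_LEN:
        raise ValueError(f"prompt must be at most {MAX_PROMPT_LEN} characters")
    return cleaned
```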
## API Endpoints
### Health Check
```
GET /health
```
Returns server health status and client initialization state.
**Response**:
```json
{
"status": "healthy",
"timestamp": "2025-10-02T17:38:51-07:00",
"client_initialized": true
}
```
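You can query this endpoint from Python with the standard library. A sketch, assuming the backend is running locally on its default port (the function name is illustrative, not part of the project):

```python
import json
import urllib.request

def check_health(base_url="http://localhost:5000"):
    """Call GET /health and return the decoded JSON payload."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires the backend to be running):
# status = check_health()
# print(status["status"], status["client_initialized"])
```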
### Generate Video
```
POST /generate-video
Content-Type: application/json
```
**Request Body**:
```json
{
"prompt": "A dog running in a park"
}
```
**Success Response** (200):
```json
{
"video_url": "https://...",
"prompt": "A dog running in a park",
"timestamp": "2025-10-02T17:38:51-07:00"
}
```
**Error Response** (400/500/503/504):
```json
{
"error": "Error message here"
}
```
## Troubleshooting
### Backend Issues
**Problem**: `ModuleNotFoundError: No module named 'flask'`
- **Solution**: Install dependencies: `pip install -r requirements.txt`
**Problem**: `Failed to initialize Gradio client`
- **Solution**: Check your internet connection and verify the Hugging Face Space URL is correct
**Problem**: `Port 5000 already in use`
- **Solution**: Change `FLASK_PORT` in the `.env` file, or stop the process currently using port 5000
### Frontend Issues
**Problem**: "Cannot connect to server" error
- **Solution**: Ensure the backend is running on port 5000
**Problem**: Video doesn't play
- **Solution**: Check browser console for errors. Some browsers block autoplay.
**Problem**: CORS errors
- **Solution**: CORS is enabled on the backend. Verify the frontend is calling the backend's URL (e.g., `http://localhost:5000`), not its own origin
## Logging
Application logs are saved to `app.log` in the project directory. Logs include:
- Server startup/shutdown events
- Video generation requests
- Errors and exceptions
- Client connection status
View logs in real-time:
```bash
tail -f app.log
```
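The backend's logging setup is presumably along these lines, given the `INFO - message` format shown earlier (a sketch; the exact configuration in `backend.py` may differ):

```python
import logging

def setup_logging(path="app.log"):
    """Log to both app.log and the console in 'LEVEL - message' style."""
    logging.basicConfig(
        level=logging.INFO,
        format="%(levelname)s - %(message)s",
        handlers=[logging.FileHandler(path), logging.StreamHandler()],
        force=True,  # replace any handlers configured earlier
    )
    return logging.getLogger("hailuo-clone")
```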
## Security Notes
- Never commit `.env` file to version control
- Keep debug mode off in production (`FLASK_DEBUG=False`)
- Consider adding rate limiting for production use
- Validate and sanitize all user inputs (already implemented)
## Production Deployment
For production deployment, consider:
1. **Use a production WSGI server** (e.g., Gunicorn):
```bash
pip install gunicorn
gunicorn -w 4 -b 0.0.0.0:5000 backend:app
```
2. **Set up a reverse proxy** (e.g., Nginx)
3. **Enable HTTPS** with SSL certificates
4. **Add rate limiting** to prevent abuse
5. **Set up monitoring** and alerting
6. **Use environment variables** for sensitive configuration
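Rate limiting (item 4) is not built into this project. A minimal in-memory token-bucket sketch that could sit in front of the `/generate-video` handler (illustrative only; in production a library such as Flask-Limiter, with one bucket per client IP and an HTTP 429 response when `allow()` returns False, is more robust):

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilling `rate` tokens per second."""

    def __init__(self, capacity=5, rate=0.1):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```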
## Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
## License
This project is open source and available under the MIT License.
## Acknowledgments
- Built with [Flask](https://flask.palletsprojects.com/)
- Uses [Gradio Client](https://www.gradio.app/) for Hugging Face integration
- Video generation powered by [Zeroscope](https://huggingface.co/cerspense/zeroscope_v2_XL)
## Support
If you encounter any issues or have questions:
1. Check the troubleshooting section above
2. Review the logs in `app.log`
3. Open an issue on the project repository
---
**Made with ❤️ for the AI community**