# VideoBackgroundReplacer2 Deployment Guide

This guide provides instructions for deploying the VideoBackgroundReplacer2 application to Hugging Face Spaces with GPU acceleration.

## Prerequisites

- Docker
- Git
- Python 3.8+
- NVIDIA Container Toolkit (for local GPU testing)
- Hugging Face account with access to GPU Spaces
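
Before building, a quick sanity check that the core tools are on `PATH` can save a failed build. This is a minimal sketch; `nvidia-smi` is only expected on hosts with NVIDIA drivers installed:

```shell
# Sketch: verify core prerequisites before building.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
    return 1
  fi
}

for tool in docker git python3; do
  check_tool "$tool" || true   # report, but keep checking the rest
done

# GPU tooling is optional and only needed for local GPU testing.
check_tool nvidia-smi || echo "note: local GPU testing unavailable"
```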
## Local Development

### 1. Clone the repository

```bash
git clone <repository-url>
cd VideoBackgroundReplacer2
```

### 2. Build the Docker image

```bash
# Make the build script executable
chmod +x build_and_deploy.sh

# Build the image
./build_and_deploy.sh
```
### 3. Run the container locally

```bash
# Mount the checkpoints directory so downloaded model weights persist across runs
docker run --gpus all -p 7860:7860 \
  -v "$(pwd)/checkpoints:/home/user/app/checkpoints" \
  videobackgroundreplacer2:latest
```

The app is then reachable at `http://localhost:7860`.
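
Model loading can take a while on first start, so it helps to poll the published port until the app answers. A hedged sketch, assuming `curl` is installed and the app serves HTTP on `/`:

```shell
# Poll a URL until it responds or the retry budget runs out.
wait_for_http() {
  url=$1
  tries=${2:-30}   # total attempts, one second apart
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example: wait_for_http http://localhost:7860/
```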
## Hugging Face Spaces Deployment

### 1. Create a new Space

- Go to [Hugging Face Spaces](https://huggingface.co/spaces)
- Click "Create new Space"
- Select "Docker" as the SDK
- Choose a name and set the Space to private if needed
- Select GPU as the hardware
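
Docker Spaces also read settings from the YAML header at the top of the Space's `README.md`; a minimal sketch (the `app_port` must match the port the container listens on, and the `title` value here is illustrative):

```yaml
---
title: VideoBackgroundReplacer2
sdk: docker
app_port: 7860
pinned: false
---
```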
### 2. Configure the Space

Add the following environment variables to your Space settings:

- `SAM2_DEVICE`: `cuda`
- `MATANY_DEVICE`: `cuda`
- `PYTORCH_CUDA_ALLOC_CONF`: `max_split_size_mb:256,garbage_collection_threshold:0.8`
- `TORCH_CUDA_ARCH_LIST`: `7.5 8.0 8.6+PTX`
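
For local testing outside Spaces, the same settings can be exported in the shell (values copied from the list above), or passed to `docker run` with `-e`:

```shell
# Same configuration as the Space, for a local shell session:
export SAM2_DEVICE=cuda
export MATANY_DEVICE=cuda
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:256,garbage_collection_threshold:0.8"
export TORCH_CUDA_ARCH_LIST="7.5 8.0 8.6+PTX"
```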
### 3. Deploy to Hugging Face

```bash
# Set your Hugging Face token
export HF_TOKEN=your_hf_token
export HF_USERNAME=your_username

# Build and deploy
./build_and_deploy.sh
```
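
If you prefer plain git over the script, a Space is just a git remote. A sketch, assuming the Space is named `videobackgroundreplacer2` (substitute your actual Space name):

```shell
# Build the authenticated git URL for a Space from the env vars above.
space_url() {
  echo "https://${HF_USERNAME}:${HF_TOKEN}@huggingface.co/spaces/${HF_USERNAME}/$1"
}

# Then, inside the repository:
#   git remote add space "$(space_url videobackgroundreplacer2)"
#   git push space main
```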
## Health Check

You can verify the installation by running:

```bash
docker run --rm videobackgroundreplacer2:latest python3 health_check.py
```
## Troubleshooting

### Build Failures

- Ensure you have enough disk space (at least 10GB free)
- Check Docker logs for specific error messages
- Verify your internet connection is stable
### Runtime Issues

- Check container logs: `docker logs <container_id>`
- Verify the GPU is visible inside the container: `docker exec <container_id> nvidia-smi`
- Check disk space inside the container: `docker exec <container_id> df -h`
## Performance Optimization

- For faster inference, use the `sam2_hiera_tiny` model (smaller and faster, at some cost in segmentation quality)
- Adjust batch size based on available GPU memory
- Enable gradient checkpointing for large models; it trades extra compute for lower memory use and matters mainly when fine-tuning
## Monitoring

- Use `nvidia-smi` to monitor GPU usage
- Check container logs for any warnings or errors
- Monitor memory usage with `htop` or similar tools
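
The GPU check above can be wrapped in a small helper that degrades gracefully on hosts without NVIDIA drivers (the query fields are standard `nvidia-smi` options):

```shell
# One-shot GPU status; safe to run on machines without a GPU.
gpu_snapshot() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used,memory.total \
      --format=csv,noheader
  else
    echo "no NVIDIA GPU / driver detected"
  fi
}

gpu_snapshot
```

For a continuously refreshing view, `watch -n 5 nvidia-smi` works well.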