---
title: Video Watermark Remover
emoji: π¦
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 4.44.0
python_version: '3.10'
app_file: app.py
pinned: false
license: apache-2.0
---
# Video Watermark Remover

Self-hosted Hugging Face Space for removing static, opaque watermarks from video footage.
## Modes
| Mode | Model | Speed | Best for |
|---|---|---|---|
| Fast | LaMa (per-frame) | Seconds | Sky, water, foliage (low-frequency backgrounds) |
| Quality | Wan2.1-VACE-14B + lightx2v 4-step distill, FP8 on H200 | ~2-3 min | Structured or textured backgrounds |
## How it works
- Upload a video clip (up to 60 s; the first 15 s is processed at 1080p)
- Paint over the watermark on the first frame using the brush, or:
  - Snap to Rectangle: scribble roughly, then click to convert the scribble into a clean filled rectangle
  - Clear Mask: start the mask over
- Choose Fast or Quality mode
- Hit Remove Watermark: the output is composited back at the source resolution
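Under the hood, Snap to Rectangle amounts to taking the bounding box of the scribbled pixels and filling it. A minimal NumPy sketch (`snap_to_rectangle` is an illustrative name, not the Space's actual code):

```python
import numpy as np

def snap_to_rectangle(mask: np.ndarray) -> np.ndarray:
    """Replace a rough scribble mask with its filled bounding rectangle."""
    ys, xs = np.nonzero(mask)  # coordinates of all scribbled pixels
    rect = np.zeros_like(mask)
    rect[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1
    return rect
```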
## Crop-inpaint-composite

The pipeline never runs the model on the full 1920×1080 frame:
- The mask determines a tight crop, expanded to a model-friendly resolution with surrounding context
- Only that crop (~7× fewer pixels) is processed
- The result is feather-blended back into the original frame
- All other pixels are byte-identical to the source
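The steps above can be sketched for a single frame in NumPy (an illustrative version; the function name, padding, and feather width are assumptions, not the Space's actual code):

```python
import numpy as np

def crop_inpaint_composite(frame, mask, inpaint_fn, pad=32, feather=8):
    """Run inpainting only on a padded crop around the mask, then feather-blend back."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, frame.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, frame.shape[1])
    crop = inpaint_fn(frame[y0:y1, x0:x1], mask[y0:y1, x0:x1])
    # Blend weight ramps from 0 at the crop border to 1 `feather` pixels in,
    # so everything outside the crop stays byte-identical to the source.
    h, w = y1 - y0, x1 - x0
    ry = np.minimum(np.arange(h), np.arange(h)[::-1])
    rx = np.minimum(np.arange(w), np.arange(w)[::-1])
    alpha = np.clip(np.minimum.outer(ry, rx) / feather, 0.0, 1.0)[..., None]
    out = frame.astype(np.float64).copy()
    out[y0:y1, x0:x1] = alpha * crop + (1.0 - alpha) * out[y0:y1, x0:x1]
    return out
```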
V-Log / HDR colour metadata is preserved via FFmpeg flag passthrough (10-bit H.265 output for HDR sources).
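Metadata passthrough ultimately comes down to building the right FFmpeg command line. A hedged sketch of such a command builder (the helper name and the specific colour tags shown, HLG/BT.2020, are illustrative assumptions; the Space presumably probes the source and copies its actual tags):

```python
def build_encode_cmd(src: str, dst: str, hdr: bool = False) -> list:
    """Assemble an FFmpeg command that keeps colour metadata on re-encode."""
    cmd = ["ffmpeg", "-y", "-i", src]
    if hdr:
        # 10-bit H.265 for HDR / V-Log sources; example HLG/BT.2020 tags.
        cmd += ["-c:v", "libx265", "-pix_fmt", "yuv420p10le",
                "-color_primaries", "bt2020", "-color_trc", "arib-std-b67",
                "-colorspace", "bt2020nc"]
    else:
        cmd += ["-c:v", "libx264", "-pix_fmt", "yuv420p"]
    return cmd + [dst]
```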
## Platform
- ZeroGPU (Nvidia H200 MIG slice)
- PyTorch + diffusers ≥0.34
- ffmpeg via `packages.txt`
## Upstream protection
All model files come from a private mirror at `JackIsNotInTheBox/Video_Watermark_Remover_Checkpoints`:
- LaMa: `lama/big-lama.pt` – `LAMA_MODEL_URL` defaults to the mirror, prefetched into the `torch.hub` cache before `simple_lama_inpainting` can reach for its hardcoded GitHub release URL
- VACE-14B: `vace-14b/` – full diffusers package, loaded with `local_files_only=True` so any cache miss errors loudly instead of silently fetching from upstream HF Hub
- Distill LoRA: `loras/wan2.1_t2v_14b_lora_rank64_lightx2v_4step.safetensors` – same `local_files_only=True` enforcement
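The `local_files_only=True` behaviour boils down to: resolve from the local cache, and raise on a miss instead of fetching. A toy equivalent (`resolve_checkpoint` is a hypothetical helper, not the actual loading code):

```python
from pathlib import Path

def resolve_checkpoint(cache_dir: str, relpath: str) -> Path:
    """Return the cached checkpoint path, or fail loudly; never fetch upstream."""
    path = Path(cache_dir) / relpath
    if not path.exists():
        raise FileNotFoundError(
            f"cache miss for {relpath!r}; refusing to fetch from upstream"
        )
    return path
```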
The Space stays functional even if the upstream Wan-AI, lightx2v, or GitHub release sources are deleted: nothing reaches for them at runtime.
On the first deploy, ~75 GB of VACE weights are downloaded from the mirror to the persistent cache in a background thread. Fast mode works immediately; Quality mode blocks until prewarm finishes (the UI shows a progress message during the wait).
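The prewarm pattern (background download, Fast mode available immediately, Quality mode gated on completion) can be sketched with a `threading.Event`; the class and method names here are illustrative:

```python
import threading

class Prewarmer:
    """Fetch heavy weights in the background; gate Quality mode on completion."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn
        self.ready = threading.Event()

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        self._fetch()  # e.g. download the VACE weights from the mirror
        self.ready.set()

    def wait_for_quality(self, timeout=None):
        # Quality mode blocks here until prewarm finishes; Fast mode never calls this.
        return self.ready.wait(timeout)
```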
## License
- Pipeline code: Apache 2.0
- LaMa: Apache 2.0
- Wan2.1-VACE-14B: Apache 2.0 (Wan-AI repo)
- lightx2v 4-step distill LoRA: Apache 2.0