# Jasna TensorRT Engines - RTX 5080 (Blackwell SM 12.0)
Pre-compiled TensorRT engine files for the Jasna video restoration tool, built on an NVIDIA GeForce RTX 5080.
## Compatibility
These engines are ONLY compatible with GPUs sharing the same architecture:
- GPU Architecture: Blackwell (SM 12.0)
- Compatible GPUs: RTX 5070 Ti, RTX 5080, RTX 5090
- TensorRT: 10.14.1.48
- CUDA: 13.1
- PyTorch: 2.10.0a0+nv26.01
- Driver: 590.48.01+
- Jasna: 0.4.1+
Will NOT work on Ada Lovelace (RTX 4090), Ampere (RTX 3090), or older GPUs.
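You can check the architecture match before downloading. A minimal sketch using the `compute_cap` query field of `nvidia-smi` (available on recent drivers); the helper function name is hypothetical:

```shell
# Report whether the local GPU can use these SM 12.0 engines.
compat_check() {
  # $1 is the GPU compute capability string, e.g. "12.0"
  case "$1" in
    12.0) echo "compatible" ;;
    *)    echo "incompatible: engines require SM 12.0, found $1" ;;
  esac
}

# With a GPU present, query and check it:
# cap=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader)
# compat_check "$cap"
```

An Ada Lovelace card (SM 8.9) would be reported as incompatible, matching the note above.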
## Files
- lada_mosaic_restoration_model_generic_v1.2_clip10.trt_fp16.linux.engine (250 MB) - BasicVSR++ restoration model (clip_size=10, FP16)
- lada_mosaic_restoration_model_generic_v1.2_clip40.trt_fp16.linux.engine (967 MB) - BasicVSR++ restoration model (clip_size=40, FP16)
- rfdetr-v3.bs4.fp16.linux.engine (77 MB) - RF-DETR detection model (batch_size=4, FP16)
## Build Environment
- GPU: NVIDIA GeForce RTX 5080 (16 GB VRAM)
- OS: Ubuntu 22.04 (Docker container)
- Platform: Vast.ai
- NVIDIA Driver: 590.48.01
- CUDA: 13.1 (V13.1.115)
- TensorRT: 10.14.1.48
- PyTorch: 2.10.0a0+a36e1d39eb.nv26.01
## Usage
Download the engine files, place them in jasna/model_weights/, and run:

`./jasna --input "input.mp4" --output "output.mp4" --fp16 --max-clip-size 40 --log-level info`
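A quick sanity check that all three engines from the Files list landed in the expected directory; the function name is hypothetical, and the layout follows the usage instructions above:

```shell
# Verify the engine files are in place before running Jasna.
check_engines() {
  dir="$1"
  missing=0
  for f in \
    lada_mosaic_restoration_model_generic_v1.2_clip10.trt_fp16.linux.engine \
    lada_mosaic_restoration_model_generic_v1.2_clip40.trt_fp16.linux.engine \
    rfdetr-v3.bs4.fp16.linux.engine
  do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  return $missing
}

# Example: check_engines jasna/model_weights
```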
## Notes
- clip10 engine uses less VRAM (~4 GB), suitable for GPUs with limited memory
- clip40 engine uses more VRAM (~8 GB) but processes faster
- clip90 compilation failed with OOM on 16 GB VRAM; may work with more swap space
- If engines don't work on your setup, delete them and let Jasna recompile automatically
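The VRAM figures above suggest a simple rule of thumb for choosing between the clip10 and clip40 engines. A sketch; the threshold is an assumption based on the ~8 GB figure, not a value from the Jasna docs:

```shell
# Pick a clip size from free VRAM (in MiB): clip40 needs ~8 GB,
# clip10 ~4 GB. The 10000 MiB cutoff leaves headroom and is a guess.
pick_clip_size() {
  free_mib="$1"
  if [ "$free_mib" -ge 10000 ]; then
    echo 40
  else
    echo 10
  fi
}

# With a GPU present:
# free=$(nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits)
# ./jasna --input in.mp4 --output out.mp4 --fp16 --max-clip-size "$(pick_clip_size "$free")"
```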
## Known Issues
On Vast.ai, renting a single GPU on a multi-GPU host may cause NVDEC/NVENC failures due to nvidia-container-toolkit issue #1249 (https://github.com/NVIDIA/nvidia-container-toolkit/issues/1249). As a workaround, rent single-GPU hosts.