---
license: mit
pipeline_tag: video-to-video
library_name: pytorch
tags:
  - computer-vision
  - video
  - video-frame-interpolation
  - vfi
  - video-to-video
  - comfyui
  - pytorch
---

SnJake Sapsan-VFI

Sapsan-VFI is a 2x frame interpolation model for video. It inserts a single intermediate frame between every pair of consecutive input frames, doubling the frame rate.
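The 2x scheme can be sketched as a simple loop over consecutive frame pairs. This is an illustrative sketch only; `model` stands in for the interpolation network (the actual inference path is the ComfyUI node):

```python
def interpolate_x2(frames, model):
    """Insert one predicted middle frame between every consecutive pair.

    `frames` is a sequence of frames; `model(a, b)` is assumed to return
    the midpoint frame between frames `a` and `b`.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(model(a, b))  # predicted frame at t = 0.5
    out.append(frames[-1])  # the last input frame has no successor
    return out
```

Note that an input of N frames yields 2N - 1 frames, so the output is "doubled" minus the final frame, which has no pair to interpolate against.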

Examples

How to use in ComfyUI

The model is designed to work with the Sapsan-VFI ComfyUI node.

  1. Install the node from the GitHub repo.
  2. Download the weights from this repository.
  3. Place the file(s) into ComfyUI/models/sapsan_vfi/.
  4. Select the weights in the node dropdown and run the workflow.
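Step 2 can also be scripted with the `huggingface_hub` library. The repo id `SnJake/Sapsan-VFI` below is an assumption based on this page's title; adjust if it differs:

```python
# Hedged sketch: downloads the safetensors weights into the ComfyUI models folder.
# repo_id is assumed; the manual download described above works equally well.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="SnJake/Sapsan-VFI",          # assumed repo id
    filename="Sapsan-VFI.safetensors",
    local_dir="ComfyUI/models/sapsan_vfi",
)
print(path)
```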

Recommended workflow:

An example workflow can be found in the Example Workflow folder of the GitHub repo.

Notes:

  • The node has a console_progress toggle to print progress in the ComfyUI console.

Weights

  • Sapsan-VFI.safetensors
  • Sapsan-VFI.pt

Training Details

  • Created out of curiosity and personal interest.
  • Total epochs: 11
  • Dataset: 2700 videos
  • Shards: 151 shards of 1,000 triplets each (151,000 triplets total).

Training code is included in training_code/ for reference.

Disclaimer

This project was made purely out of curiosity and personal interest. The code was written by GPT-5.2 Codex.