# RIFE v4.9 — MLX

Native MLX port of RIFE v4.9 (Real-Time Intermediate Flow Estimation) for video frame interpolation on Apple Silicon.

Converted from the Practical-RIFE PyTorch checkpoint. Runs on the Metal GPU via MLX — no ONNX Runtime or CoreML compilation required.

## Architecture

IFNet v4.7 code path: coarse-to-fine optical flow estimation with 4 IFBlocks, backward warp, and ensemble (TTA).
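To illustrate what the backward warp step does, here is a minimal NumPy sketch using nearest-neighbor sampling (an assumption for clarity; the actual model uses bilinear grid sampling on the GPU):

```python
import numpy as np

def backward_warp_nn(src, flow):
    """Nearest-neighbor backward warp: each output pixel (y, x) samples
    the source image at (y + flow_y, x + flow_x), clipped to the frame."""
    H, W = src.shape[:2]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sample_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
    sample_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
    return src[sample_y, sample_x]

# A uniform flow of (dx=+1, dy=0) shifts the image content one pixel left.
img = np.arange(16, dtype=np.float32).reshape(4, 4)
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 1.0
warped = backward_warp_nn(img, flow)
```

IFNet predicts such a flow field at each refinement level and warps both input frames toward the target timestep before fusing them.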

| Property | Value |
|---|---|
| Parameters | 5.33M |
| Weights format | safetensors (fp32) |
| Input | Two RGB frames `[N, H, W, 3]` + timestep float |
| Output | Interpolated RGB frame `[N, H, W, 3]` |
| Normalization | [0, 1] |
| Padding | Input must be padded to a multiple of 64 |
| Ensemble | Enabled by default (flip-and-average TTA) |
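Since inputs must be padded to a multiple of 64, a small helper can compute and apply the padding (a sketch, not part of the package API; `interpolate_pair` may already handle this internally):

```python
import numpy as np

def pad_to_multiple(img, multiple=64):
    """Zero-pad H and W (bottom/right) up to the next multiple of `multiple`.
    Returns the padded frame and the original size, for cropping the output."""
    h, w = img.shape[:2]
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    padded = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)))
    return padded, (h, w)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
padded, orig_size = pad_to_multiple(frame)
# 720 -> 768 (next multiple of 64); 1280 is already a multiple of 64
```

After interpolation, crop the result back to `orig_size` to discard the padded border.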

### Block Configuration

| Block | Scale | Channels |
|---|---|---|
| 0 | 8 | 192 |
| 1 | 4 | 128 |
| 2 | 2 | 96 |
| 3 | 1 | 64 |
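The scale column gives the downsample factor at which each IFBlock estimates flow, so refinement runs coarse-to-fine. For a 1280×768 padded input, the per-block working resolutions work out to:

```python
# Per-IFBlock downsample factors from the table above
scales = [8, 4, 2, 1]

# Example padded resolution (720p padded up to a multiple of 64)
H, W = 768, 1280
resolutions = [(H // s, W // s) for s in scales]
# Block 0 estimates flow at 96x160; block 3 refines at full 768x1280.
```

Padding to a multiple of 64 guarantees these divisions are exact even at the coarsest scale (8× downsampling followed by the network's internal strides).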

## Usage

```python
from cortex_rife import load_model, interpolate_pair
import numpy as np

model = load_model()  # downloads and caches the weights from Hugging Face

# BGR uint8 frames (e.g. from cv2.imread)
img0 = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
img1 = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)

result = interpolate_pair(model, img0, img1, timestep=0.5)
# result: (720, 1280, 3) uint8 BGR frame at the temporal midpoint
```
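The `timestep` argument selects where between the two frames the output lands, so producing N intermediate frames for (N+1)× frame-rate expansion means calling `interpolate_pair` once per evenly spaced timestep in (0, 1). A sketch of the timestep schedule:

```python
def intermediate_timesteps(factor):
    """Evenly spaced timesteps in (0, 1) for `factor`x frame-rate expansion,
    i.e. factor - 1 new frames between each input pair."""
    return [i / factor for i in range(1, factor)]

# 4x interpolation inserts three frames per input pair:
intermediate_timesteps(4)  # -> [0.25, 0.5, 0.75]
```

Each value would be passed as `timestep=` in a separate `interpolate_pair` call for the same frame pair.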

## Citation

```bibtex
@inproceedings{huang2022rife,
  title={Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}
```

## License

MIT — same as the original RIFE implementation.
