FAINT
Fast, Appearance-Invariant Navigation Transformer (FAINT) is a learned policy for vision-based topological navigation.
Paper | Project Page | GitHub
The FAINT-Real model uses Theia-Tiny-CDDSV as its backbone and was trained for 30 epochs on the ~1.2M samples from the datasets used in the GNM/ViNT papers.
This repo contains two versions of the trained model weights:

- model_pytorch.pt: Weights-only state dict of the PyTorch model.
- model_torchscript.pt: A standalone TorchScript model for deployment.

See the main GitHub repo for details regarding input preprocessing, deployment with ROS2, and training.
Loading the TorchScript model requires only PyTorch:
import torch

ckpt_path = 'FAINT-Real/model_torchscript.pt'
model = torch.jit.load(ckpt_path)
model.eval()  # switch to inference mode
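As a self-contained illustration of why the TorchScript export is standalone, the round trip below scripts a toy module (a stand-in, not the FAINT architecture; the real model's inputs and outputs are documented in the GitHub repo) and reloads it without any access to the defining class:

```python
import io
import torch
import torch.nn as nn

# Toy stand-in module; FAINT's actual architecture lives in the faint package.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)

# Compile the module to TorchScript and serialize it to a buffer
# (a file path would work the same way).
scripted = torch.jit.script(Toy().eval())
buf = io.BytesIO()
torch.jit.save(scripted, buf)
buf.seek(0)

# Loading needs no Python class definition, only torch itself.
loaded = torch.jit.load(buf)
x = torch.randn(1, 4)
with torch.no_grad():
    assert torch.allclose(loaded(x), scripted(x))
```

This is what makes the TorchScript file convenient for deployment: the consuming process does not need the training codebase on its path.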
Loading the PyTorch weights requires the FAINT library to be installed:
import torch
from faint.common.models.faint import FAINT

ckpt_path = 'FAINT-Real/model_pytorch.pt'
state_dict = torch.load(ckpt_path, map_location='cpu', weights_only=True)

# The weights in this repo correspond to FAINT initialized with the default arguments.
model = FAINT()
model.load_state_dict(state_dict)
model.eval()
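The state-dict path works differently from the TorchScript one: the architecture must be constructed first, and the saved file supplies only tensors. The runnable sketch below demonstrates that workflow with a toy module (a stand-in, not FAINT):

```python
import io
import torch
import torch.nn as nn

# Toy stand-in; the real FAINT class is imported from the faint package.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

src = Toy()
buf = io.BytesIO()
torch.save(src.state_dict(), buf)  # save only the weights, not the class
buf.seek(0)

# weights_only=True restricts unpickling to tensors and plain containers,
# which is the safer default when loading downloaded checkpoint files.
state_dict = torch.load(buf, weights_only=True)

dst = Toy()                       # must construct the architecture first...
dst.load_state_dict(state_dict)   # ...then copy the weights in
assert torch.equal(dst.fc.weight, src.fc.weight)
```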
If you use FAINT in your research, please use the following BibTeX entry:
@article{suomela2025synthetic,
title={Synthetic vs. Real Training Data for Visual Navigation},
author={Suomela, Lauri and Kuruppu Arachchige, Sasanka and Torres, German F. and Edelman, Harry and Kämäräinen, Joni-Kristian},
journal={arXiv preprint arXiv:2509.11791},
year={2025}
}