
FAINT

Fast, Appearance-Invariant Navigation Transformer (FAINT) is a learned policy for vision-based topological navigation.

Paper | Project Page | GitHub

Model Details

The FAINT-Real model uses Theia-Tiny-CDDSV as its backbone and was trained for 30 epochs on the ~1.2M samples from the datasets used in the GNM/ViNT papers.

This repo contains two versions of the trained model weights.

  • model_pytorch.pt: Weights-only state dict of the PyTorch model.
  • model_torchscript.pt: A standalone TorchScript model for deployment.

Usage

See the main GitHub repo for details regarding input preprocessing, deployment with ROS2, and training.

TorchScript

The only dependency is PyTorch.

import torch
ckpt_path = 'FAINT-Real/model_torchscript.pt'
model = torch.jit.load(ckpt_path)
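Because torch.jit.load reconstructs the model graph directly from the file, deployment needs no Python class definitions. A minimal sketch of the pattern, using a toy module in place of the real checkpoint (which must first be downloaded from this repo):

```python
import io
import torch

# Toy module standing in for the real FAINT-Real/model_torchscript.pt,
# which is only available after downloading it from this repo.
class ToyPolicy(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

# Script and serialize to an in-memory buffer (a file path works the same).
buffer = io.BytesIO()
torch.jit.save(torch.jit.script(ToyPolicy()), buffer)
buffer.seek(0)

# Deployment side: load and run without the ToyPolicy class in scope.
model = torch.jit.load(buffer)
model.eval()
with torch.no_grad():
    out = model(torch.ones(1, 3))
```

Calling model.eval() and wrapping inference in torch.no_grad() disables training-only behavior and gradient tracking, which is the usual pattern for deployment.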

PyTorch

Requires the FAINT library to be installed.

import torch
from faint.common.models.faint import FAINT

ckpt_path = 'FAINT-Real/model_pytorch.pt'
state_dict = torch.load(ckpt_path, weights_only=True) # Safe here, as the file is a plain state dict

model = FAINT() # The weights in this repo correspond to FAINT initialized with the default arguments
model.load_state_dict(state_dict)
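As a sanity check after loading, load_state_dict returns the missing and unexpected keys, which should both be empty when the architecture matches. A minimal sketch of the weights-only round trip, using a toy module in place of FAINT (the real checkpoint additionally requires the FAINT library):

```python
import io
import torch

# Toy module standing in for FAINT; the load pattern is identical.
net = torch.nn.Linear(4, 2)
buffer = io.BytesIO()
torch.save(net.state_dict(), buffer)
buffer.seek(0)

# weights_only=True restricts unpickling to tensors, which is safe
# because the file is a plain state dict.
state_dict = torch.load(buffer, weights_only=True)

fresh = torch.nn.Linear(4, 2)
result = fresh.load_state_dict(state_dict)
# result.missing_keys / result.unexpected_keys flag any mismatch
# between the checkpoint and the model architecture.
```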

Citation

If you use FAINT in your research, please use the following BibTeX entry:

@article{suomela2025synthetic,
  title={Synthetic vs. Real Training Data for Visual Navigation},
  author={Suomela, Lauri and Kuruppu Arachchige, Sasanka and Torres, German F. and Edelman, Harry and Kämäräinen, Joni-Kristian},
  journal={arXiv:2509.11791},
  year={2025}
}