---
license: mit
pipeline_tag: image-segmentation
library_name: pytorch
tags:
  - point-cloud
  - point-cloud-backbone
  - graph-learning
  - pytorch
authors:
  - Yuanwen Yue
  - Damien Robert
  - Jianyuan Wang
  - Sunghwan Hong
  - Jan Dirk Wegner
  - Christian Rupprecht
  - Konrad Schindler
---

# LitePT: Lighter Yet Stronger Point Transformer

This repository contains model weights for LitePT: Lighter Yet Stronger Point Transformer, a lightweight, high-performance 3D point cloud architecture.

LitePT embodies the simple principle "convolutions for low-level geometry, attention for high-level relations" and strategically places only the required operations at each hierarchy level. LitePT is equipped with a novel, parameter-free 3D positional encoding, PointROPE. The resulting model achieves state-of-the-art performance while being significantly more efficient.
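The card does not spell out how PointROPE works, but the general idea of a parameter-free rotary positional encoding extends naturally from 1D sequences to 3D coordinates. Below is a minimal PyTorch sketch of a rotary-style 3D encoding: channels are split into three groups (one per axis) and consecutive feature pairs are rotated by angles derived from the point coordinates. The function name `point_rope`, the per-axis channel split, and the frequency base are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def point_rope(x: torch.Tensor, coords: torch.Tensor, base: float = 100.0) -> torch.Tensor:
    """Sketch of a rotary-style, parameter-free 3D positional encoding.

    x:      (N, C) point features, with C divisible by 6
    coords: (N, 3) point coordinates
    Channels are split into three equal groups, one per spatial axis;
    within each group, consecutive feature pairs are rotated by angles
    proportional to that axis coordinate (as in 1D RoPE).
    """
    n, c = x.shape
    d = c // 3  # channels assigned to each axis (assumption)
    # Geometric frequency schedule, one frequency per feature pair.
    freqs = base ** (-torch.arange(0, d, 2, dtype=x.dtype) / d)  # (d/2,)
    out = []
    for axis in range(3):
        xa = x[:, axis * d:(axis + 1) * d]            # (N, d) features for this axis
        ang = coords[:, axis:axis + 1] * freqs        # (N, d/2) rotation angles
        cos, sin = ang.cos(), ang.sin()
        x1, x2 = xa[:, 0::2], xa[:, 1::2]             # interleaved pairs
        # 2D rotation of each (x1, x2) pair, re-interleaved back to (N, d).
        rot = torch.stack((x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos), dim=-1).flatten(1)
        out.append(rot)
    return torch.cat(out, dim=1)
```

Because the encoding is a pure rotation of feature pairs, it preserves feature norms and adds no learnable parameters, which is consistent with the "parameter-free" claim above.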

## Paper & Resources

## Models

We release pretrained model weights for the benchmarks reported in our paper.

### Semantic segmentation

| Model | Params | Benchmark | Val mIoU | Config | Checkpoint |
|---|---|---|---|---|---|
| LitePT-S | 12.7M | NuScenes | 82.2 | link | Download |
| LitePT-S | 12.7M | Waymo | 73.1 | link | Download |
| LitePT-S | 12.7M | ScanNet | 76.5 | link | Download |
| LitePT-S | 12.7M | Structured3D | 83.6 | link | Download |
| LitePT-B | 45.1M | Structured3D | 85.1 | link | Download |
| LitePT-L | 85.9M | Structured3D | 85.4 | link | Download |

### Instance segmentation

| Model | Params | Benchmark | mAP25 | mAP50 | mAP | Config | Checkpoint |
|---|---|---|---|---|---|---|---|
| LitePT-S* | 16.0M | ScanNet | 78.5 | 64.9 | 41.7 | link | Download |
| LitePT-S* | 16.0M | ScanNet200 | 40.3 | 33.1 | 22.2 | link | Download |

### Object detection

| Model | Params | Benchmark | mAPH | Config | Checkpoint |
|---|---|---|---|---|---|
| LitePT | 9.0M | Waymo | 70.7 | link | Download |
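Once a checkpoint is downloaded, the weights can be read with plain PyTorch. The helper below is a hedged sketch: the `"state_dict"` key is an assumption (some releases store parameters at the top level of the file), and the model-building step depends on the released configs, so it is shown only as a comment.

```python
import torch


def load_litept_state_dict(path: str) -> dict:
    """Load a LitePT checkpoint file and return its parameter dict.

    Assumes the checkpoint is either a raw state dict or a dict that
    nests the parameters under a 'state_dict' key -- check the released
    checkpoint format before relying on this.
    """
    ckpt = torch.load(path, map_location="cpu")
    if isinstance(ckpt, dict) and "state_dict" in ckpt:
        return ckpt["state_dict"]
    return ckpt


# Usage sketch (file name and builder are hypothetical):
# state_dict = load_litept_state_dict("litept_s_scannet.pth")
# model = build_litept(config)  # construct the model from the released config
# model.load_state_dict(state_dict)
```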

## Citation

```bibtex
@article{yuelitept2025,
    title={{LitePT: Lighter Yet Stronger Point Transformer}},
    author={Yue, Yuanwen and Robert, Damien and Wang, Jianyuan and Hong, Sunghwan and Wegner, Jan Dirk and Rupprecht, Christian and Schindler, Konrad},
    journal={arXiv preprint arXiv:2512.13689},
    year={2025}
}
```