Add model card and metadata
#1
by nielsr (HF Staff) - opened

README.md ADDED
---
pipeline_tag: image-segmentation
---

# Annotation Free Spacecraft Detection and Segmentation using Vision Language Models

This repository contains the artifacts for the research presented in the paper [Annotation Free Spacecraft Detection and Segmentation using Vision Language Models](https://huggingface.co/papers/2602.04699).

## Abstract
Vision Language Models (VLMs) have demonstrated remarkable performance in open-world zero-shot visual recognition. However, their potential in space-related applications remains largely unexplored. In the space domain, accurate manual annotation is particularly challenging due to factors such as low visibility, illumination variations, and object blending with planetary backgrounds. Developing methods that can detect and segment spacecraft and orbital targets without requiring extensive manual labeling is therefore of critical importance. In this work, we propose an annotation-free detection and segmentation pipeline for space targets using VLMs. Our approach begins by automatically generating pseudo-labels for a small subset of unlabeled real data with a pre-trained VLM. These pseudo-labels are then leveraged in a teacher-student label distillation framework to train lightweight models. Despite the inherent noise in the pseudo-labels, the distillation process leads to substantial performance gains over direct zero-shot VLM inference. Experimental evaluations on segmentation tasks across the SPARK-2024, SPEED+, and TANGO datasets demonstrate consistent improvements in average precision (AP) of up to 10 points.
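
As a rough illustration of the recipe in the abstract, the sketch below pseudo-labels unlabeled images with a frozen teacher and trains a lightweight student segmenter on the resulting masks. It is a minimal sketch, not the authors' implementation: `vlm_pseudo_mask` is a placeholder for a real VLM query (the paper builds on Grounded SAM-2), and the LR-ASPP student is an illustrative stand-in for the paper's lightweight models.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import lraspp_mobilenet_v3_large

def vlm_pseudo_mask(image: torch.Tensor) -> torch.Tensor:
    """Stand-in for the frozen VLM teacher. In the paper this role is played
    by a Grounded SAM-2 style model prompted with text; the brightness
    threshold here is only a placeholder so the sketch runs end to end."""
    return (image.mean(dim=0) > image.mean()).long()  # (H, W) binary mask

student = lraspp_mobilenet_v3_large(num_classes=2)  # lightweight student model
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def distill_step(images: torch.Tensor) -> float:
    """One teacher-student distillation step on a batch of unlabeled images."""
    with torch.no_grad():
        # Noisy pseudo-labels from the frozen teacher.
        pseudo = torch.stack([vlm_pseudo_mask(im) for im in images])  # (B, H, W)
    logits = student(images)["out"]    # (B, 2, H, W), upsampled to input size
    loss = criterion(logits, pseudo)   # supervise the student on pseudo-masks
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one step on a random batch standing in for unlabeled real images.
print(distill_step(torch.rand(2, 3, 224, 224)))
```

The key point from the abstract is that, even though these pseudo-masks are noisy, the distilled student ends up outperforming direct zero-shot inference with the VLM teacher.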

## Resources
- **Paper:** [Annotation Free Spacecraft Detection and Segmentation using Vision Language Models](https://huggingface.co/papers/2602.04699)
- **Code:** [GitHub Repository](https://github.com/giddyyupp/annotation-free-spacecraft-segmentation)

## Citation
```bibtex
@inproceedings{hicsonmez2026afss,
  title={Annotation Free Spacecraft Detection and Segmentation using Vision Language Models},
  author={Hicsonmez, Samet and Sosa, Jose and Pineau, Dan and Singh, Inder Pal and Rathinam, Arunkumar and Shabayek, Abd El Rahman and Aouada, Djamila},
  year={2026},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)}
}
```

## Acknowledgement
This codebase is largely built upon [Grounded SAM-2](https://github.com/IDEA-Research/Grounded-SAM-2).