Table Detection YOLOv26n (TableBank)

Model Details

This model is a YOLOv26n-based detector trained to locate tables in document images.
It was trained with a straightforward setup, with no hyperparameter tuning or model-size changes, and already achieves strong results; further gains are likely with additional tuning.

Training Data

  • Dataset: TableBank (document table images)

Training Procedure

  • Architecture: YOLOv26n
  • Epochs: ~10 (the final epoch logged in results.csv is epoch 9)
  • Notes: Minimal tuning; default-style training was sufficient for good results.
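
A run like the one described above is typically driven by a small Ultralytics dataset config. The following is a hypothetical sketch, not the actual file used for this model — the paths, split names, and filename are assumptions; only the single "table" class is implied by the task:

```yaml
# Hypothetical tablebank.yaml — paths and split layout are illustrative
path: datasets/tablebank   # dataset root (assumed location)
train: images/train        # training images, relative to path
val: images/val            # validation images, relative to path
names:
  0: table                 # single-class detection: tables only
```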

Results (from results.csv)

Final logged epoch (epoch 9) validation metrics:

  • Precision: 0.9519
  • Recall: 0.9604
  • mAP@0.5: 0.9865
  • mAP@0.5:0.95: 0.9716
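
For context on the two mAP figures: mAP@0.5 counts a predicted box as a true positive when its IoU with a ground-truth box is at least 0.5, while mAP@0.5:0.95 averages precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, so it rewards tighter localization. A small self-contained IoU sketch (boxes in xyxy pixel coordinates; the values are illustrative, not taken from the dataset):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (100, 100, 300, 200)   # predicted table box
gt   = (110, 105, 310, 210)   # ground-truth table box
print(round(iou(pred, gt), 3))  # → 0.786
```

This prediction would count as correct at the 0.5 IoU threshold but be rejected at 0.8 and above, which is why mAP@0.5:0.95 is the stricter number.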

Intended Use

  • Detecting tables in scanned or digital document images.

Limitations

  • Trained only on TableBank; performance may drop on very different layouts or document styles.
  • No hyperparameter tuning performed yet; results can likely be improved.

How to Use

Load with Ultralytics YOLO:

from ultralytics import YOLO

# Load the fine-tuned weights and run inference on a document image
model = YOLO("yolo26n-tablebank.pt")
results = model("path/to/image.jpg")
results[0].show()  # display the image with detected table boxes
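
Detections can then be post-processed into plain pixel coordinates, e.g. to crop each table out of the page. Below is a minimal sketch of the filtering step; the detection tuples are mock values (with Ultralytics they would come from `results[0].boxes`, whose `xyxy` and `conf` attributes hold coordinates and scores), and the helper name and threshold are assumptions for illustration:

```python
def keep_confident(detections, threshold=0.5):
    """Filter (x1, y1, x2, y2, conf) detections and round coords to ints."""
    return [
        (int(x1), int(y1), int(x2), int(y2))
        for x1, y1, x2, y2, conf in detections
        if conf >= threshold
    ]

# Mock detections: two confident tables and one low-confidence false positive
dets = [
    (50.2, 80.7, 400.9, 260.1, 0.94),
    (60.0, 300.3, 410.5, 520.8, 0.88),
    (10.1, 10.9, 30.4, 20.2, 0.12),
]
print(keep_confident(dets))  # → [(50, 80, 400, 260), (60, 300, 410, 520)]
```

The resulting integer boxes could then be passed to an image-cropping step or a downstream table-structure / OCR model.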


License

  • MIT