YOLOv10n TensorFlow Lite Model (Edge AI Ready)

This repository provides TensorFlow Lite versions of YOLOv10n optimized for Edge AI deployment, especially on Qualcomm Snapdragon platforms using TFLite delegates such as:

  • QNN NPU
  • GPU delegate
  • CPU fallback

These models are suitable for:

  • Android AI applications
  • Embedded Linux AI pipelines
  • Real-time object detection systems
  • Qualcomm Edge AI platforms (QCS8550 / RB-series / Snapdragon X Elite)

Example: Android Deployment (TFLite)

Typical delegate priority:

QNN_NPU → GPU → CPU

Example initialization:

// Configure the TFLite interpreter (org.tensorflow.lite.Interpreter)
Interpreter.Options options = new Interpreter.Options();
options.setNumThreads(4); // CPU thread count for the fallback path

Recommended delegates:

  • QNN delegate (preferred)
  • GPU delegate
  • NNAPI delegate
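The QNN_NPU → GPU → CPU priority above can be sketched as a small selection helper. Note that `DelegateChooser` and its availability flags are hypothetical names used for illustration only; they are not part of the TFLite API, where delegates are attached via `Interpreter.Options.addDelegate(...)`:

```java
// Hypothetical helper illustrating the delegate fallback order
// QNN_NPU -> GPU -> CPU described above (not a TFLite API).
public class DelegateChooser {
    public static String choose(boolean qnnAvailable, boolean gpuAvailable) {
        if (qnnAvailable) {
            return "QNN_NPU"; // preferred: NPU acceleration via the QNN delegate
        }
        if (gpuAvailable) {
            return "GPU";     // GPU delegate as the next option
        }
        return "CPU";         // multi-threaded CPU as the final fallback
    }

    public static void main(String[] args) {
        System.out.println(choose(true, true));   // QNN_NPU
        System.out.println(choose(false, true));  // GPU
        System.out.println(choose(false, false)); // CPU
    }
}
```

In a real application the same ordering is typically implemented by attempting to construct each delegate in turn and falling through to the next on failure.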

Performance Target Scenario

Designed for real-time Edge AI applications such as:

  • USB camera detection
  • IP camera detection
  • multi-stream video analytics
  • smart surveillance
  • robotics perception pipelines

Validated usage scenario:

  • 1080p video input
  • Real-time inference pipeline
  • Qualcomm NPU acceleration
  • Android TextureView rendering pipeline
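As a rule of thumb for the real-time scenario above, per-frame latency must fit within the frame budget of the target frame rate. The helper below is a hypothetical illustration of that check (the class and method names are not from any library):

```java
// Hypothetical latency-budget check for a real-time pipeline.
// A 30 FPS stream allows roughly 33.3 ms per frame end to end.
public class FrameBudget {
    // Returns true if the measured per-frame latency fits the target FPS.
    public static boolean meetsRealtime(double latencyMs, double targetFps) {
        double budgetMs = 1000.0 / targetFps; // per-frame time budget in ms
        return latencyMs <= budgetMs;
    }

    public static void main(String[] args) {
        // A hypothetical ~15 ms NPU inference fits a 30 FPS budget (~33.3 ms)
        System.out.println(meetsRealtime(15.0, 30.0)); // true
        // A hypothetical ~45 ms latency misses the 30 FPS budget
        System.out.println(meetsRealtime(45.0, 30.0)); // false
    }
}
```

Remember that the budget covers the whole pipeline (capture, preprocessing, inference, and rendering), not inference alone.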

Tested Platforms (Recommended)

This model is suitable for:

  • Qualcomm QCS8550
  • Qualcomm RB-series platforms
  • Snapdragon X Elite
  • Snapdragon 8 Gen series
  • Android Edge AI devices