TinyDETR

(sample detection image)

Introduction: TinyDETR for People Detection

What is TinyDETR? Inspired by the original DETR, TinyDETR is a small object detection transformer. Why do we call it TinyDETR? Because the model is very small, only about 6 MB, even though its weights are stored in FP32. How can we build a DETR this small without quantizing the model? TinyDETR uses a MobileNet-style backbone with a multi-scale feature mechanism (P1, P2, P3). We do not apply a transformer encoder, because it risks losing the multi-scale information coming from the backbone; instead, the backbone itself acts as the encoder.
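The idea above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the released implementation: the channel widths, number of queries, decoder depth, and class/box head layouts are all assumptions chosen for clarity. The key structural points from the description are real, though: a MobileNet-style (depthwise separable) backbone producing three scales (p1, p2, p3), no transformer encoder, and a decoder that cross-attends from learned queries directly to the flattened multi-scale features.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: depthwise 3x3 conv followed by a pointwise 1x1 conv."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU6(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TinyBackbone(nn.Module):
    """Produces three feature maps (p1, p2, p3) at decreasing resolution."""
    def __init__(self, dim=64):
        super().__init__()
        self.stem = DepthwiseSeparableConv(3, dim, stride=2)
        self.stage1 = DepthwiseSeparableConv(dim, dim, stride=2)  # -> p1
        self.stage2 = DepthwiseSeparableConv(dim, dim, stride=2)  # -> p2
        self.stage3 = DepthwiseSeparableConv(dim, dim, stride=2)  # -> p3

    def forward(self, x):
        x = self.stem(x)
        p1 = self.stage1(x)
        p2 = self.stage2(p1)
        p3 = self.stage3(p2)
        return p1, p2, p3


class TinyDETRSketch(nn.Module):
    """No transformer encoder: the backbone plays that role. The three scales
    are flattened into one memory sequence that learned object queries
    cross-attend to in a standard transformer decoder."""
    def __init__(self, dim=64, num_queries=20, num_classes=2):
        super().__init__()
        self.backbone = TinyBackbone(dim)
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=4, dim_feedforward=128, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.class_head = nn.Linear(dim, num_classes)  # person vs. no-object (assumed)
        self.bbox_head = nn.Linear(dim, 4)             # (cx, cy, w, h) in [0, 1] (assumed)

    def forward(self, x):
        feats = self.backbone(x)
        # Flatten each scale to (B, HW, C) and concatenate along the sequence axis.
        memory = torch.cat([f.flatten(2).transpose(1, 2) for f in feats], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.decoder(q, memory)
        return {"class_pred": self.class_head(h), "bbox_pred": self.bbox_head(h).sigmoid()}


model = TinyDETRSketch()
out = model(torch.randn(1, 3, 240, 240))
print(out["class_pred"].shape, out["bbox_pred"].shape)
```

For a 240×240 input, the three scales come out at 60×60, 30×30, and 15×15, so the decoder memory is 3600 + 900 + 225 = 4725 tokens; skipping the encoder's self-attention over that sequence is where most of the compute savings come from.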

Model Detail:

MobileNetFPN => Transformer Decoder

  • type: hybrid transformer (MobileNetFPN backbone + transformer decoder)
  • task: people detection
  • input_size: 240×240 (H=240, W=240)
  • device: CPU only / low-resource machines
  • size: ~6 MB
  • numeric: FP32
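Since the model expects a fixed 240×240 FP32 input, images must be resized and converted before inference. The sketch below is one plausible preprocessing routine, not the card's official one: the scaling to [0, 1] and the absence of mean/std normalization are assumptions.

```python
import torch
import torch.nn.functional as F


def preprocess(image_uint8):
    """image_uint8: an (H, W, 3) uint8 RGB tensor.
    Returns a (1, 3, 240, 240) float32 batch.
    Normalization scheme is an assumption: scale to [0, 1], no mean/std shift."""
    x = torch.as_tensor(image_uint8).permute(2, 0, 1).float() / 255.0  # HWC -> CHW
    x = F.interpolate(x.unsqueeze(0), size=(240, 240), mode="bilinear", align_corners=False)
    return x


# example: resize an arbitrary 480x640 frame down to the model's input size
batch = preprocess(torch.randint(0, 256, (480, 640, 3), dtype=torch.uint8))
print(batch.shape, batch.dtype)
```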

Model training loss plot:

(training loss curve image)

How to use:

  from TinyDETR import DETR
  import torch

  model = DETR()
  checkpoint = torch.load("TinyDETR.pth", map_location="cpu")
  model.load_state_dict(checkpoint)
  model.eval()  # disable dropout / batch-norm updates for inference

  # image: a preprocessed float32 tensor of shape (1, 3, 240, 240)
  with torch.no_grad():
    x = model(image)
    print(x['class_pred'])
    print(x['bbox_pred'])
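The raw `class_pred` and `bbox_pred` outputs still need decoding into pixel-space boxes. The helper below is a hedged sketch: it assumes the DETR convention of per-query class logits (with index 1 as the person class) and normalized `(cx, cy, w, h)` boxes, which the card does not spell out.

```python
import torch


def postprocess(class_pred, bbox_pred, img_w, img_h, score_thresh=0.5):
    """Convert raw outputs to pixel-space (x1, y1, x2, y2) boxes.
    Assumptions: class_pred is (num_queries, 2) logits with index 1 = person,
    bbox_pred is (num_queries, 4) normalized (cx, cy, w, h)."""
    probs = class_pred.softmax(-1)
    scores = probs[:, 1]                 # assumed person-class probability
    keep = scores > score_thresh         # drop low-confidence queries
    cx, cy, w, h = bbox_pred[keep].unbind(-1)
    boxes = torch.stack([(cx - w / 2) * img_w, (cy - h / 2) * img_h,
                         (cx + w / 2) * img_w, (cy + h / 2) * img_h], dim=-1)
    return boxes, scores[keep]


# toy check: one confident detection centered in a 240x240 frame, one rejected
cls = torch.tensor([[-4.0, 4.0], [4.0, -4.0]])
box = torch.tensor([[0.5, 0.5, 0.2, 0.4], [0.1, 0.1, 0.05, 0.05]])
boxes, scores = postprocess(cls, box, 240, 240)
print(boxes, scores)
```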

Model created by: Candra Alpin Gunawan
