---
license: apache-2.0
tags:
- executorch
- object-detection
- vision
- YOLO
- anchor-free
- pytorch
datasets:
- coco
metrics:
- mAP
---
# YOLOX models for ExecuTorch

These are YOLOX models trained for COCO object detection (118k annotated images) at a resolution of 640x640. YOLOX was introduced in the paper [YOLOX: Exceeding YOLO Series in 2021](https://arxiv.org/abs/2107.08430) by Zheng Ge et al. and first released in [this repository](https://github.com/Megvii-BaseDetection/YOLOX).

The models in this repo have been exported for use with [ExecuTorch](https://github.com/pytorch/executorch).

Here is an example of detections created with YOLOX-Nano and the ExecuTorch runtime:

![Example image with a dog and a bike](./example_output.png "Example image")

The models are exported from the following standard models trained on COCO:

#### Standard Models

|Model |size |mAP<sup>val<br>0.5:0.95 |mAP<sup>test<br>0.5:0.95 | Speed V100<br>(ms) | Params<br>(M) |FLOPs<br>(G)| weights |
| ------ |:---: | :---: | :---: |:---: |:---: | :---: | :----: |
|[YOLOX-s](./exps/default/yolox_s.py) |640 |40.5 |40.5 |9.8 |9.0 | 26.8 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth) |
|[YOLOX-m](./exps/default/yolox_m.py) |640 |46.9 |47.2 |12.3 |25.3 |73.8| [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m.pth) |
|[YOLOX-l](./exps/default/yolox_l.py) |640 |49.7 |50.1 |14.5 |54.2| 155.6 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.pth) |
|[YOLOX-x](./exps/default/yolox_x.py) |640 |51.1 |**51.5** | 17.3 |99.1 |281.9 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.pth) |
|[YOLOX-Darknet53](./exps/default/yolov3.py) |640 | 47.7 | 48.0 | 11.1 |63.7 | 185.3 | [github](https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_darknet.pth) |

# How to use

The models have been exported using the code from this [PR](https://github.com/Megvii-BaseDetection/YOLOX/pull/1860), which also includes instructions on how to export your own model so it can be executed with the ExecuTorch runtime.

Example code on how to run inference:
```python
import cv2
import numpy as np
import torch
from executorch.runtime import Runtime

input_shape = (640, 640)  # (416, 416) for tiny and nano
origin_img = cv2.imread("path/to/your/image.png")
img = cv2.resize(origin_img, input_shape)
# HWC uint8 -> CHW float32, the layout and dtype the exported model expects
img = img.transpose(2, 0, 1).astype(np.float32)

runtime = Runtime.get()
method = runtime.load_program("path/to/model/yolox_s.pte").load_method("forward")

output = method.execute([torch.from_numpy(img).unsqueeze(0)])
output = [o.numpy() for o in output]

# Add postprocessing like NMS to turn the raw output into bounding boxes
```
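
To make that last step concrete, here is a minimal post-processing sketch. It is not the repository's demo code; it assumes the exported graph returns raw per-cell predictions of shape `(1, N, 85)` (`cx, cy, w, h` offsets, objectness, 80 class scores), as YOLOX's ONNX export does, and that the 640x640 input shape from above was used.

```python
import numpy as np

def decode_outputs(preds, img_size=(640, 640), strides=(8, 16, 32)):
    """Anchor-free decode: map per-cell offsets back to boxes in input-image pixels."""
    grids, all_strides = [], []
    for stride in strides:
        hs, ws = img_size[0] // stride, img_size[1] // stride
        xv, yv = np.meshgrid(np.arange(ws), np.arange(hs))
        grid = np.stack((xv, yv), axis=2).reshape(-1, 2)
        grids.append(grid)
        all_strides.append(np.full((grid.shape[0], 1), stride))
    grids = np.concatenate(grids, axis=0)
    all_strides = np.concatenate(all_strides, axis=0)
    preds = preds.copy()
    preds[..., :2] = (preds[..., :2] + grids) * all_strides    # box centers
    preds[..., 2:4] = np.exp(preds[..., 2:4]) * all_strides    # box width / height
    return preds

def nms(boxes, scores, iou_thr=0.45):
    """Plain class-agnostic NMS on (x1, y1, x2, y2) boxes; returns indices to keep."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]
    return keep

# Decode, score, threshold (cf. -s 0.3 in the runtime demo below), and NMS.
preds = decode_outputs(output[0])[0]                     # (N, 85) for the single image
boxes_cxcywh, obj, cls = preds[:, :4], preds[:, 4:5], preds[:, 5:]
scores = (obj * cls).max(axis=1)                         # objectness * best class prob
labels = (obj * cls).argmax(axis=1)                      # COCO class index
boxes = np.concatenate([boxes_cxcywh[:, :2] - boxes_cxcywh[:, 2:4] / 2,
                        boxes_cxcywh[:, :2] + boxes_cxcywh[:, 2:4] / 2], axis=1)
mask = scores > 0.3
boxes, scores, labels = boxes[mask], scores[mask], labels[mask]
keep = nms(boxes, scores)                                # indices of the final detections
```

The runtime demo described further below handles this post-processing for you; the sketch above is only meant to show what happens between the raw model output and the final boxes.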

# How to export and use your own YOLOX model

Install the YOLOX project from [here](https://github.com/Megvii-BaseDetection/YOLOX) and follow these instructions:

### Step1: Install ExecuTorch

Run the following command to install ExecuTorch:
```shell
pip install executorch
```
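
As an optional sanity check (not part of the original instructions), you can verify that the Python runtime bindings used in the inference example above import correctly:

```python
# Optional sanity check: these are the same calls the inference example relies on;
# if this runs without errors, the installation is usable from Python.
from executorch.runtime import Runtime

runtime = Runtime.get()
print(type(runtime).__name__)
```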

### Step2: Convert Your Model to ExecuTorch

First, move to <YOLOX_HOME>:
```shell
cd <YOLOX_HOME>
```
Then, you can:

1. Convert a standard YOLOX model by -n:
```shell
python3 tools/export_executorch.py --output-name yolox_s.pte -n yolox-s -c yolox_s.pth
```
Notes:
* -n: specify a model name. The model name must be one of [yolox-s, yolox-m, yolox-l, yolox-x, yolox-nano, yolox-tiny, yolov3]
* -c: the checkpoint of the model you have trained
* To customize the input shape of the exported model, modify the following line in tools/export_executorch.py (e.g. use 416x416 for yolox-nano and yolox-tiny):

```python
dummy_input = torch.randn(1, 3, exp.test_size[0], exp.test_size[1])
```

2. Convert a standard YOLOX model by -f. When using -f, the above command is equivalent to:

```shell
python3 tools/export_executorch.py --output-name yolox_s.pte -f exps/default/yolox_s.py -c yolox_s.pth
```

3. To convert your customized model, please use -f:

```shell
python3 tools/export_executorch.py --output-name your_yolox.pte -f exps/your_dir/your_yolox.py -c your_yolox.pth
```
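
For reference, `tools/export_executorch.py` presumably builds on the standard ExecuTorch export flow. The sketch below illustrates that general flow only; it is not the PR's exact code, and `get_yolox_model()` is a hypothetical placeholder for however the script constructs the model from the experiment file and checkpoint:

```python
import torch
from executorch.exir import to_edge

# Hypothetical placeholder: obtain the eval-mode YOLOX nn.Module (e.g. from the
# experiment file passed via -f and the checkpoint passed via -c).
model = get_yolox_model().eval()

example_inputs = (torch.randn(1, 3, 640, 640),)         # match the export input shape
exported = torch.export.export(model, example_inputs)   # capture the graph
edge = to_edge(exported)                                 # convert to the Edge dialect
et_program = edge.to_executorch()                        # emit the ExecuTorch program

with open("yolox_s.pte", "wb") as f:
    f.write(et_program.buffer)                           # the .pte file loaded at runtime
```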

### Step3: ExecuTorch Runtime Demo

Step1.
```shell
cd <YOLOX_HOME>/demo/executorch
```

Step2.
```shell
python3 executorch_inference.py -m <EXECUTORCH_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s 0.3 --input_shape 640,640
```
Notes:
* -m: your converted .pte model
* -i: input image
* -s: score threshold for visualization
* --input_shape: should be consistent with the shape you used for the ExecuTorch conversion
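
The demo script writes the visualized detections to <OUTPUT_DIR>. If you want to draw them yourself on top of the inference and post-processing sketches above (reusing their `origin_img`, `input_shape`, `boxes`, `scores`, `labels`, and `keep` variables), a minimal OpenCV sketch could look like this:

```python
import cv2

# Boxes are in the resized network input space; rescale them to the original image.
scale_x = origin_img.shape[1] / input_shape[0]
scale_y = origin_img.shape[0] / input_shape[1]
for i in keep:
    x1, y1, x2, y2 = boxes[i]
    p1 = (int(x1 * scale_x), int(y1 * scale_y))
    p2 = (int(x2 * scale_x), int(y2 * scale_y))
    cv2.rectangle(origin_img, p1, p2, (0, 255, 0), 2)
    cv2.putText(origin_img, f"{labels[i]}: {scores[i]:.2f}", (p1[0], max(p1[1] - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("detections.png", origin_img)
```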

## Cite YOLOX
If you use YOLOX in your research, please cite the work using the following BibTeX entry:

```latex
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
```