Add_Yolov4_onnx_model

#5
by AyushK07 - opened
yolov4/LICENSE ADDED
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
yolov4/README.md ADDED
@@ -0,0 +1,105 @@
+ # YOLOv4, YOLOv4-tiny and YOLOv4x-mish ONNX Conversion
+
+ ## Prerequisites
+
+ ### 1. Model files (cfg + weights)
+
+ The Darknet `.cfg` files are already provided in [opencv_extra/testdata/dnn](https://github.com/opencv/opencv_extra/tree/master/testdata/dnn) (`yolov4.cfg`, `yolov4-tiny-2020-12.cfg`, `yolov4x-mish.cfg`).
+
+ Download the matching `.weights` files using the OpenCV test data download script:
+
+ ```bash
+ git clone https://github.com/opencv/opencv_extra.git
+ cd opencv_extra/testdata/dnn
+ python download_models.py YOLOv4 YOLOv4-tiny-2020-12 YOLOv4x-mish
+ ```
+
+ ### 2. Python environment for `pytorch-YOLOv4`
+
+ The conversion uses [`pytorch-YOLOv4`](https://github.com/Tianxiaomo/pytorch-YOLOv4). Create a Python environment with the required dependencies (any Python version from **3.7 to 3.10** is supported):
+
+ ```bash
+ conda create -n <env_name> python=3.10 -y   # or any version from 3.7 to 3.10
+ conda activate <env_name>
+ pip install "torch<2.4" "torchvision<0.19" "numpy<2" onnx onnxruntime onnxscript
+ ```
+
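+ As an optional sanity check, confirm the environment resolves before converting (a minimal sketch; nothing below is required by the converter):
+
+ ```python
+ # Run inside the activated environment.
+ import torch, onnx, onnxruntime
+ print(torch.__version__, onnx.__version__, onnxruntime.__version__)
+ ```
+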
+ ---
+
+ ## Conversion of YOLOv4 to ONNX
+
+ ```bash
+ git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
+ cd pytorch-YOLOv4
+
+ # Convert (dynamic batch, batch_size=0)
+ python -c "from tool.darknet2onnx import transform_to_onnx; transform_to_onnx('yolov4.cfg', 'yolov4.weights', 0)"
+ ```
+
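+ You can optionally verify the export with onnxruntime (a minimal sketch; it assumes the exported file has been renamed to `yolov4.onnx`, matching the demo commands below):
+
+ ```python
+ import numpy as np
+ import onnxruntime as ort
+
+ sess = ort.InferenceSession("yolov4.onnx")
+ dummy = np.zeros((1, 3, 608, 608), dtype=np.float32)
+ boxes, confs = sess.run(None, {sess.get_inputs()[0].name: dummy})
+ print(boxes.shape, confs.shape)  # expected: (1, N, 1, 4) and (1, N, 80)
+ ```
+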
+ ---
+
+ ## Conversion of YOLOv4-tiny to ONNX
+
+ ```bash
+ git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
+ cd pytorch-YOLOv4
+
+ # Convert YOLOv4-tiny (dynamic batch, batch_size=0)
+ python -c "from tool.darknet2onnx import transform_to_onnx; transform_to_onnx('yolov4-tiny-2020-12.cfg', 'yolov4-tiny.weights', 0)"
+ ```
+
+ ---
+
+ ## Conversion of YOLOv4x-mish to ONNX
+
+ ### Why it differs from YOLOv4
+ `yolov4x-mish.cfg` uses `new_coords=1`, a Scaled-YOLOv4 optimization: the network is trained to output values directly in the `[0, 1]` range (no sigmoid needed) and uses the squared formula `(t_w * 2)² * anchor` for width/height instead of `exp(t_w) * anchor`, which is more numerically stable for large models. The sketch below contrasts the two decodings.
+
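+ ```python
+ import torch
+
+ # Illustrative helpers only (not part of the converter): t_xy / t_wh are
+ # raw YOLO-head slices and anchor is expressed in grid units.
+ def decode_standard(t_xy, t_wh, anchor, scale_x_y=1.0):
+     # Original YOLOv4 (new_coords=0): sigmoid on xy, exp on wh.
+     b_xy = torch.sigmoid(t_xy) * scale_x_y - 0.5 * (scale_x_y - 1)
+     b_wh = torch.exp(t_wh) * anchor
+     return b_xy, b_wh
+
+ def decode_new_coords(t_xy, t_wh, anchor, scale_x_y=1.0):
+     # Scaled-YOLOv4 (new_coords=1): values already activated, squared wh.
+     b_xy = t_xy * scale_x_y - 0.5 * (scale_x_y - 1)
+     b_wh = (t_wh * 2) ** 2 * anchor
+     return b_xy, b_wh
+ ```
+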
+ ### The Issue
+ The `pytorch-YOLOv4` converter was written for the original YOLOv4 (`new_coords=0`) and always applies `sigmoid` + `exp`. With `new_coords=1` weights, `sigmoid` is applied on top of already-activated values, compressing confidences from ~0.93 down to ~0.36, so the model produces garbage detections.
+
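+ A quick way to see the squashing effect (illustrative numbers, not taken from the model):
+
+ ```python
+ import torch
+
+ already_activated = torch.tensor([0.0, 0.5, 0.95, 1.0])  # head outputs in [0, 1]
+ print(torch.sigmoid(already_activated))
+ # tensor([0.5000, 0.6225, 0.7211, 0.7311])
+ # A redundant sigmoid maps [0, 1] into [0.5, 0.73]; confident scores flatten,
+ # and the final det_conf * cls_conf product drops further.
+ ```
+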
+ ### The Fix (Modified scripts provided)
+ To resolve this, modified versions of **`darknet2pytorch.py`** and **`yolo_layer.py`** are provided in this repository. These scripts include the following patches:
+
+ * **`darknet2pytorch.py`**: Properly reads the `new_coords` flag from the `.cfg`.
+ * **`yolo_layer.py`**: Skips the redundant sigmoid activation for `xy/obj/cls` and implements the squared `wh` formula when `new_coords=1` is detected.
+
+ ### Conversion Steps
+
+ ```bash
+ git clone https://github.com/Tianxiaomo/pytorch-YOLOv4.git
+ cd pytorch-YOLOv4
+
+ # [!] Replace tool/darknet2pytorch.py and tool/yolo_layer.py with the patched
+ # versions from this repository before running the conversion.
+
+ # Convert YOLOv4x-mish (dynamic batch, batch_size=0)
+ python -c "from tool.darknet2onnx import transform_to_onnx; transform_to_onnx('yolov4x-mish.cfg', 'yolov4x-mish.weights', 0)"
+ ```
+
+ ---
+
+ ## Usage
+
+ A demo script is provided to run inference using OpenCV DNN:
+
+ ```bash
+ # YOLOv4 (input size: 608x608)
+ python demo.py --model yolov4.onnx --image example_outputs/input.jpg --output example_outputs/yolov4_output.jpg
+
+ # YOLOv4-tiny (input size: 416x416)
+ python demo.py --model yolov4-tiny.onnx --image example_outputs/input.jpg --output example_outputs/yolov4-tiny_output.jpg
+
+ # YOLOv4x-mish (input size: 640x640)
+ python demo.py --model yolov4x-mish.onnx --image example_outputs/input.jpg --output example_outputs/yolov4x-mish_output.jpg
+ ```
+
+ The demo prints the detected COCO classes, confidence scores, and bounding boxes, and saves an annotated output image.
+
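+ The printed bounding boxes are normalized to `[0, 1]`; scaling them back to pixels mirrors what `draw_detections` does internally (a minimal sketch with example values):
+
+ ```python
+ img_h, img_w = 480, 640               # original image size (example values)
+ x1, y1, x2, y2 = 0.1, 0.2, 0.5, 0.8   # one normalized bbox printed by demo.py
+ px1, py1 = int(x1 * img_w), int(y1 * img_h)
+ px2, py2 = int(x2 * img_w), int(y2 * img_h)
+ print(px1, py1, px2, py2)             # 64 96 320 384
+ ```
+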
+ ---
+
+ ## License
+
+ See [LICENSE](./LICENSE). This conversion tool is based on [pytorch-YOLOv4](https://github.com/Tianxiaomo/pytorch-YOLOv4) (Apache-2.0). Original YOLOv4 model weights and configuration are released by Alexey Bochkovskiy ([AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)).
yolov4/darknet2pytorch.py ADDED
@@ -0,0 +1,537 @@
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import numpy as np
+ from tool.region_loss import RegionLoss
+ from tool.yolo_layer import YoloLayer
+ from tool.config import *
+ from tool.torch_utils import *
+
+
+ class Mish(torch.nn.Module):
+     def __init__(self):
+         super().__init__()
+
+     def forward(self, x):
+         x = x * (torch.tanh(torch.nn.functional.softplus(x)))
+         return x
+
+
+ class MaxPoolDark(nn.Module):
+     def __init__(self, size=2, stride=1):
+         super(MaxPoolDark, self).__init__()
+         self.size = size
+         self.stride = stride
+
+     def forward(self, x):
+         '''
+         darknet output_size = (input_size + p - k) / s + 1
+             p : padding = k - 1
+             k : size
+             s : stride
+         torch output_size = (input_size + 2*p - k) / s + 1
+             p : padding = k // 2
+         '''
+         p = self.size // 2
+         if ((x.shape[2] - 1) // self.stride) != ((x.shape[2] + 2 * p - self.size) // self.stride):
+             padding1 = (self.size - 1) // 2
+             padding2 = padding1 + 1
+         else:
+             padding1 = (self.size - 1) // 2
+             padding2 = padding1
+         if ((x.shape[3] - 1) // self.stride) != ((x.shape[3] + 2 * p - self.size) // self.stride):
+             padding3 = (self.size - 1) // 2
+             padding4 = padding3 + 1
+         else:
+             padding3 = (self.size - 1) // 2
+             padding4 = padding3
+         x = F.max_pool2d(F.pad(x, (padding3, padding4, padding1, padding2), mode='replicate'),
+                          self.size, stride=self.stride)
+         return x
+
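+ # Worked example of the padding math above (illustrative): a darknet
+ # [maxpool] with size=3, stride=1 on a 13x13 map keeps 13x13, since
+ # output = (13 + (3-1) - 3) / 1 + 1 = 13; the asymmetric F.pad in
+ # MaxPoolDark reproduces this whenever torch's symmetric k//2 padding
+ # would disagree.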
+
+ class Upsample_expand(nn.Module):
+     def __init__(self, stride=2):
+         super(Upsample_expand, self).__init__()
+         self.stride = stride
+
+     def forward(self, x):
+         assert (x.data.dim() == 4)
+
+         x = x.view(x.size(0), x.size(1), x.size(2), 1, x.size(3), 1).\
+             expand(x.size(0), x.size(1), x.size(2), self.stride, x.size(3), self.stride).contiguous().\
+             view(x.size(0), x.size(1), x.size(2) * self.stride, x.size(3) * self.stride)
+
+         return x
+
+
+ class Upsample_interpolate(nn.Module):
+     def __init__(self, stride):
+         super(Upsample_interpolate, self).__init__()
+         self.stride = stride
+
+     def forward(self, x):
+         assert (x.data.dim() == 4)
+
+         out = F.interpolate(x, size=(x.size(2) * self.stride, x.size(3) * self.stride), mode='nearest')
+         return out
+
+
+ class Reorg(nn.Module):
+     def __init__(self, stride=2):
+         super(Reorg, self).__init__()
+         self.stride = stride
+
+     def forward(self, x):
+         stride = self.stride
+         assert (x.data.dim() == 4)
+         B = x.data.size(0)
+         C = x.data.size(1)
+         H = x.data.size(2)
+         W = x.data.size(3)
+         assert (H % stride == 0)
+         assert (W % stride == 0)
+         ws = stride
+         hs = stride
+         # integer division: view() requires int sizes in Python 3
+         x = x.view(B, C, H // hs, hs, W // ws, ws).transpose(3, 4).contiguous()
+         x = x.view(B, C, H // hs * W // ws, hs * ws).transpose(2, 3).contiguous()
+         x = x.view(B, C, hs * ws, H // hs, W // ws).transpose(1, 2).contiguous()
+         x = x.view(B, hs * ws * C, H // hs, W // ws)
+         return x
+
+
+ class GlobalAvgPool2d(nn.Module):
+     def __init__(self):
+         super(GlobalAvgPool2d, self).__init__()
+
+     def forward(self, x):
+         N = x.data.size(0)
+         C = x.data.size(1)
+         H = x.data.size(2)
+         W = x.data.size(3)
+         x = F.avg_pool2d(x, (H, W))
+         x = x.view(N, C)
+         return x
+
+
+ # for route, shortcut and sam
+ class EmptyModule(nn.Module):
+     def __init__(self):
+         super(EmptyModule, self).__init__()
+
+     def forward(self, x):
+         return x
+
+
+ # support route, shortcut and reorg
+ class Darknet(nn.Module):
+     def __init__(self, cfgfile, inference=False):
+         super(Darknet, self).__init__()
+         self.inference = inference
+         self.training = not self.inference
+
+         self.blocks = parse_cfg(cfgfile)
+         self.width = int(self.blocks[0]['width'])
+         self.height = int(self.blocks[0]['height'])
+
+         self.models = self.create_network(self.blocks)  # merge conv, bn, leaky
+         self.loss = self.models[len(self.models) - 1]
+
+         if self.blocks[(len(self.blocks) - 1)]['type'] == 'region':
+             self.anchors = self.loss.anchors
+             self.num_anchors = self.loss.num_anchors
+             self.anchor_step = self.loss.anchor_step
+             self.num_classes = self.loss.num_classes
+
+         self.header = torch.IntTensor([0, 0, 0, 0])
+         self.seen = 0
+
+     def forward(self, x):
+         ind = -2
+         self.loss = None
+         outputs = dict()
+         out_boxes = []
+         for block in self.blocks:
+             ind = ind + 1
+             # if ind > 0:
+             #     return x
+
+             if block['type'] == 'net':
+                 continue
+             elif block['type'] in ['convolutional', 'maxpool', 'reorg', 'upsample', 'avgpool', 'softmax', 'connected']:
+                 x = self.models[ind](x)
+                 outputs[ind] = x
+             elif block['type'] == 'route':
+                 layers = block['layers'].split(',')
+                 layers = [int(i) if int(i) > 0 else int(i) + ind for i in layers]
+                 if len(layers) == 1:
+                     if 'groups' not in block.keys() or int(block['groups']) == 1:
+                         x = outputs[layers[0]]
+                         outputs[ind] = x
+                     else:
+                         groups = int(block['groups'])
+                         group_id = int(block['group_id'])
+                         _, b, _, _ = outputs[layers[0]].shape
+                         x = outputs[layers[0]][:, b // groups * group_id:b // groups * (group_id + 1)]
+                         outputs[ind] = x
+                 elif len(layers) == 2:
+                     x1 = outputs[layers[0]]
+                     x2 = outputs[layers[1]]
+                     x = torch.cat((x1, x2), 1)
+                     outputs[ind] = x
+                 elif len(layers) == 4:
+                     x1 = outputs[layers[0]]
+                     x2 = outputs[layers[1]]
+                     x3 = outputs[layers[2]]
+                     x4 = outputs[layers[3]]
+                     x = torch.cat((x1, x2, x3, x4), 1)
+                     outputs[ind] = x
+                 else:
+                     print("route number > 2, is {}".format(len(layers)))
+
+             elif block['type'] == 'shortcut':
+                 from_layer = int(block['from'])
+                 activation = block['activation']
+                 from_layer = from_layer if from_layer > 0 else from_layer + ind
+                 x1 = outputs[from_layer]
+                 x2 = outputs[ind - 1]
+                 x = x1 + x2
+                 if activation == 'leaky':
+                     x = F.leaky_relu(x, 0.1, inplace=True)
+                 elif activation == 'relu':
+                     x = F.relu(x, inplace=True)
+                 outputs[ind] = x
+             elif block['type'] == 'sam':
+                 from_layer = int(block['from'])
+                 from_layer = from_layer if from_layer > 0 else from_layer + ind
+                 x1 = outputs[from_layer]
+                 x2 = outputs[ind - 1]
+                 x = x1 * x2
+                 outputs[ind] = x
+             elif block['type'] == 'region':
+                 continue
+                 # unreachable during inference-only conversion (kept from upstream)
+                 if self.loss:
+                     self.loss = self.loss + self.models[ind](x)
+                 else:
+                     self.loss = self.models[ind](x)
+                 outputs[ind] = None
+             elif block['type'] == 'yolo':
+                 # if self.training:
+                 #     pass
+                 # else:
+                 #     boxes = self.models[ind](x)
+                 #     out_boxes.append(boxes)
+                 boxes = self.models[ind](x)
+                 out_boxes.append(boxes)
+             elif block['type'] == 'cost':
+                 continue
+             else:
+                 print('unknown type %s' % (block['type']))
+
+         if self.training:
+             return out_boxes
+         else:
+             return get_region_boxes(out_boxes)
+
+     def print_network(self):
+         print_cfg(self.blocks)
+
+     def create_network(self, blocks):
+         models = nn.ModuleList()
+
+         prev_filters = 3
+         out_filters = []
+         prev_stride = 1
+         out_strides = []
+         conv_id = 0
+         for block in blocks:
+             if block['type'] == 'net':
+                 prev_filters = int(block['channels'])
+                 continue
+             elif block['type'] == 'convolutional':
+                 conv_id = conv_id + 1
+                 batch_normalize = int(block['batch_normalize'])
+                 filters = int(block['filters'])
+                 kernel_size = int(block['size'])
+                 stride = int(block['stride'])
+                 is_pad = int(block['pad'])
+                 pad = (kernel_size - 1) // 2 if is_pad else 0
+                 activation = block['activation']
+                 model = nn.Sequential()
+                 if batch_normalize:
+                     model.add_module('conv{0}'.format(conv_id),
+                                      nn.Conv2d(prev_filters, filters, kernel_size, stride, pad, bias=False))
+                     model.add_module('bn{0}'.format(conv_id), nn.BatchNorm2d(filters))
+                     # model.add_module('bn{0}'.format(conv_id), BN2d(filters))
+                 else:
+                     model.add_module('conv{0}'.format(conv_id),
+                                      nn.Conv2d(prev_filters, filters, kernel_size, stride, pad))
+                 if activation == 'leaky':
+                     model.add_module('leaky{0}'.format(conv_id), nn.LeakyReLU(0.1, inplace=True))
+                 elif activation == 'relu':
+                     model.add_module('relu{0}'.format(conv_id), nn.ReLU(inplace=True))
+                 elif activation == 'mish':
+                     model.add_module('mish{0}'.format(conv_id), Mish())
+                 elif activation == 'linear':
+                     model.add_module('linear{0}'.format(conv_id), nn.Identity())
+                 elif activation == 'logistic':
+                     model.add_module('sigmoid{0}'.format(conv_id), nn.Sigmoid())
+                 else:
+                     print("No convolutional activation named {}".format(activation))
+
+                 prev_filters = filters
+                 out_filters.append(prev_filters)
+                 prev_stride = stride * prev_stride
+                 out_strides.append(prev_stride)
+                 models.append(model)
+             elif block['type'] == 'maxpool':
+                 pool_size = int(block['size'])
+                 stride = int(block['stride'])
+                 if stride == 1 and pool_size % 2:
+                     # You can use MaxPoolDark instead; this form is convenient for ONNX conversion.
+                     # Example: [maxpool] size=3 stride=1
+                     model = nn.MaxPool2d(kernel_size=pool_size, stride=stride, padding=pool_size // 2)
+                 elif stride == pool_size:
+                     # You can use MaxPoolDark instead; this form is convenient for ONNX conversion.
+                     # Example: [maxpool] size=2 stride=2
+                     model = nn.MaxPool2d(kernel_size=pool_size, stride=stride, padding=0)
+                 else:
+                     model = MaxPoolDark(pool_size, stride)
+                 out_filters.append(prev_filters)
+                 prev_stride = stride * prev_stride
+                 out_strides.append(prev_stride)
+                 models.append(model)
+             elif block['type'] == 'avgpool':
+                 model = GlobalAvgPool2d()
+                 out_filters.append(prev_filters)
+                 models.append(model)
+             elif block['type'] == 'softmax':
+                 model = nn.Softmax()
+                 out_strides.append(prev_stride)
+                 out_filters.append(prev_filters)
+                 models.append(model)
+             elif block['type'] == 'cost':
+                 if block['_type'] == 'sse':
+                     model = nn.MSELoss(reduction='mean')
+                 elif block['_type'] == 'L1':
+                     model = nn.L1Loss(reduction='mean')
+                 elif block['_type'] == 'smooth':
+                     model = nn.SmoothL1Loss(reduction='mean')
+                 out_filters.append(1)
+                 out_strides.append(prev_stride)
+                 models.append(model)
+             elif block['type'] == 'reorg':
+                 stride = int(block['stride'])
+                 prev_filters = stride * stride * prev_filters
+                 out_filters.append(prev_filters)
+                 prev_stride = prev_stride * stride
+                 out_strides.append(prev_stride)
+                 models.append(Reorg(stride))
+             elif block['type'] == 'upsample':
+                 stride = int(block['stride'])
+                 out_filters.append(prev_filters)
+                 prev_stride = prev_stride // stride
+                 out_strides.append(prev_stride)
+
+                 models.append(Upsample_expand(stride))
+                 # models.append(Upsample_interpolate(stride))
+
+             elif block['type'] == 'route':
+                 layers = block['layers'].split(',')
+                 ind = len(models)
+                 layers = [int(i) if int(i) > 0 else int(i) + ind for i in layers]
+                 if len(layers) == 1:
+                     if 'groups' not in block.keys() or int(block['groups']) == 1:
+                         prev_filters = out_filters[layers[0]]
+                         prev_stride = out_strides[layers[0]]
+                     else:
+                         prev_filters = out_filters[layers[0]] // int(block['groups'])
+                         prev_stride = out_strides[layers[0]] // int(block['groups'])
+                 elif len(layers) == 2:
+                     assert (layers[0] == ind - 1 or layers[1] == ind - 1)
+                     prev_filters = out_filters[layers[0]] + out_filters[layers[1]]
+                     prev_stride = out_strides[layers[0]]
+                 elif len(layers) == 4:
+                     assert (layers[0] == ind - 1)
+                     prev_filters = out_filters[layers[0]] + out_filters[layers[1]] + out_filters[layers[2]] + \
+                                    out_filters[layers[3]]
+                     prev_stride = out_strides[layers[0]]
+                 else:
+                     print("route error!!!")
+
+                 out_filters.append(prev_filters)
+                 out_strides.append(prev_stride)
+                 models.append(EmptyModule())
+             elif block['type'] == 'shortcut':
+                 ind = len(models)
+                 prev_filters = out_filters[ind - 1]
+                 out_filters.append(prev_filters)
+                 prev_stride = out_strides[ind - 1]
+                 out_strides.append(prev_stride)
+                 models.append(EmptyModule())
+             elif block['type'] == 'sam':
+                 ind = len(models)
+                 prev_filters = out_filters[ind - 1]
+                 out_filters.append(prev_filters)
+                 prev_stride = out_strides[ind - 1]
+                 out_strides.append(prev_stride)
+                 models.append(EmptyModule())
+             elif block['type'] == 'connected':
+                 filters = int(block['output'])
+                 if block['activation'] == 'linear':
+                     model = nn.Linear(prev_filters, filters)
+                 elif block['activation'] == 'leaky':
+                     model = nn.Sequential(
+                         nn.Linear(prev_filters, filters),
+                         nn.LeakyReLU(0.1, inplace=True))
+                 elif block['activation'] == 'relu':
+                     model = nn.Sequential(
+                         nn.Linear(prev_filters, filters),
+                         nn.ReLU(inplace=True))
+                 prev_filters = filters
+                 out_filters.append(prev_filters)
+                 out_strides.append(prev_stride)
+                 models.append(model)
+             elif block['type'] == 'region':
+                 loss = RegionLoss()
+                 anchors = block['anchors'].split(',')
+                 loss.anchors = [float(i) for i in anchors]
+                 loss.num_classes = int(block['classes'])
+                 loss.num_anchors = int(block['num'])
+                 loss.anchor_step = len(loss.anchors) // loss.num_anchors
+                 loss.object_scale = float(block['object_scale'])
+                 loss.noobject_scale = float(block['noobject_scale'])
+                 loss.class_scale = float(block['class_scale'])
+                 loss.coord_scale = float(block['coord_scale'])
+                 out_filters.append(prev_filters)
+                 out_strides.append(prev_stride)
+                 models.append(loss)
+             elif block['type'] == 'yolo':
+                 yolo_layer = YoloLayer()
+                 anchors = block['anchors'].split(',')
+                 anchor_mask = block['mask'].split(',')
+                 yolo_layer.anchor_mask = [int(i) for i in anchor_mask]
+                 yolo_layer.anchors = [float(i) for i in anchors]
+                 yolo_layer.num_classes = int(block['classes'])
+                 self.num_classes = yolo_layer.num_classes
+                 yolo_layer.num_anchors = int(block['num'])
+                 yolo_layer.anchor_step = len(yolo_layer.anchors) // yolo_layer.num_anchors
+                 yolo_layer.stride = prev_stride
+                 yolo_layer.scale_x_y = float(block['scale_x_y'])
+                 yolo_layer.new_coords = int(block.get('new_coords', 0))
+                 # yolo_layer.object_scale = float(block['object_scale'])
+                 # yolo_layer.noobject_scale = float(block['noobject_scale'])
+                 # yolo_layer.class_scale = float(block['class_scale'])
+                 # yolo_layer.coord_scale = float(block['coord_scale'])
+                 out_filters.append(prev_filters)
+                 out_strides.append(prev_stride)
+                 models.append(yolo_layer)
+             else:
+                 print('unknown type %s' % (block['type']))
+
+         return models
+
+     def load_weights(self, weightfile):
+         fp = open(weightfile, 'rb')
+         header = np.fromfile(fp, count=5, dtype=np.int32)
+         self.header = torch.from_numpy(header)
+         self.seen = self.header[3]
+         buf = np.fromfile(fp, dtype=np.float32)
+         fp.close()
+
+         start = 0
+         ind = -2
+         for block in self.blocks:
+             if start >= buf.size:
+                 break
+             ind = ind + 1
+             if block['type'] == 'net':
+                 continue
+             elif block['type'] == 'convolutional':
+                 model = self.models[ind]
+                 batch_normalize = int(block['batch_normalize'])
+                 if batch_normalize:
+                     start = load_conv_bn(buf, start, model[0], model[1])
+                 else:
+                     start = load_conv(buf, start, model[0])
+             elif block['type'] == 'connected':
+                 model = self.models[ind]
+                 if block['activation'] != 'linear':
+                     start = load_fc(buf, start, model[0])
+                 else:
+                     start = load_fc(buf, start, model)
+             elif block['type'] == 'maxpool':
+                 pass
+             elif block['type'] == 'reorg':
+                 pass
+             elif block['type'] == 'upsample':
+                 pass
+             elif block['type'] == 'route':
+                 pass
+             elif block['type'] == 'shortcut':
+                 pass
+             elif block['type'] == 'sam':
+                 pass
+             elif block['type'] == 'region':
+                 pass
+             elif block['type'] == 'yolo':
+                 pass
+             elif block['type'] == 'avgpool':
+                 pass
+             elif block['type'] == 'softmax':
+                 pass
+             elif block['type'] == 'cost':
+                 pass
+             else:
+                 print('unknown type %s' % (block['type']))
+
+     # def save_weights(self, outfile, cutoff=0):
+     #     if cutoff <= 0:
+     #         cutoff = len(self.blocks) - 1
+     #
+     #     fp = open(outfile, 'wb')
+     #     self.header[3] = self.seen
+     #     header = self.header
+     #     header.numpy().tofile(fp)
+     #
+     #     ind = -1
+     #     for blockId in range(1, cutoff + 1):
+     #         ind = ind + 1
+     #         block = self.blocks[blockId]
+     #         if block['type'] == 'convolutional':
+     #             model = self.models[ind]
+     #             batch_normalize = int(block['batch_normalize'])
+     #             if batch_normalize:
+     #                 save_conv_bn(fp, model[0], model[1])
+     #             else:
+     #                 save_conv(fp, model[0])
+     #         elif block['type'] == 'connected':
+     #             model = self.models[ind]
+     #             if block['activation'] != 'linear':
+     #                 save_fc(fc, model)
+     #             else:
+     #                 save_fc(fc, model[0])
+     #         elif block['type'] == 'maxpool':
+     #             pass
+     #         elif block['type'] == 'reorg':
+     #             pass
+     #         elif block['type'] == 'upsample':
+     #             pass
+     #         elif block['type'] == 'route':
+     #             pass
+     #         elif block['type'] == 'shortcut':
+     #             pass
+     #         elif block['type'] == 'sam':
+     #             pass
+     #         elif block['type'] == 'region':
+     #             pass
+     #         elif block['type'] == 'yolo':
+     #             pass
+     #         elif block['type'] == 'avgpool':
+     #             pass
+     #         elif block['type'] == 'softmax':
+     #             pass
+     #         elif block['type'] == 'cost':
+     #             pass
+     #         else:
+     #             print('unknown type %s' % (block['type']))
+     #     fp.close()
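+
+ # Usage sketch (illustrative paths, not executed on import):
+ # model = Darknet('yolov4.cfg', inference=True)
+ # model.load_weights('yolov4.weights')
+ # model.eval()
+ # boxes, confs = model(torch.randn(1, 3, model.height, model.width))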
yolov4/demo.py ADDED
@@ -0,0 +1,164 @@
+ """
+ Demo script for YOLOv4, YOLOv4-tiny and YOLOv4x-mish ONNX models using OpenCV DNN.
+
+ All models share the same output format:
+   - boxes: [batch, N, 1, 4] -> [x1, y1, x2, y2] normalized to [0, 1]
+   - confs: [batch, N, num_classes]
+
+ Default input sizes (auto-detected from filename):
+   - yolov4.onnx       : 608x608
+   - yolov4-tiny.onnx  : 416x416
+   - yolov4x-mish.onnx : 640x640
+
+ Usage:
+   python demo.py --model yolov4.onnx --image example_outputs/input.jpg
+   python demo.py --model yolov4-tiny.onnx --image example_outputs/input.jpg
+   python demo.py --model yolov4x-mish.onnx --image example_outputs/input.jpg
+ """
+
+ import argparse
+ import cv2
+ import numpy as np
+
+
+ # COCO class names (80 classes) - both yolov4 and yolov4x-mish are trained on COCO
+ COCO_CLASSES = [
+     "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck",
+     "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench",
+     "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra",
+     "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
+     "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove",
+     "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup",
+     "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
+     "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
+     "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse",
+     "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink",
+     "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier",
+     "toothbrush",
+ ]
+
+
+ def get_model_input_size(net, default=416):
+     """Read the model's expected input size by inspecting layer shapes."""
+     # Inspect the first layer's blobs to deduce the input size; fall back
+     # to the default if that information is unavailable.
+     try:
+         in_shape = net.getLayer(0).blobs
+         if in_shape:
+             shape = in_shape[0].shape
+             return shape[3], shape[2]  # (W, H)
+     except Exception:
+         pass
+     return default, default
+
+
+ def postprocess(outputs, conf_threshold, nms_threshold):
+     """Parse [boxes, confs] outputs and run NMS."""
+     # outputs[0]: boxes [1, N, 1, 4] -> [x1, y1, x2, y2]
+     # outputs[1]: confs [1, N, num_classes]
+     boxes_raw = outputs[0].reshape(-1, 4)
+     confs_raw = outputs[1].reshape(boxes_raw.shape[0], -1)
+
+     class_ids = []
+     confidences = []
+     boxes_xywh = []  # for cv2.dnn.NMSBoxes
+
+     for j in range(boxes_raw.shape[0]):
+         cls_id = int(np.argmax(confs_raw[j]))
+         score = float(confs_raw[j][cls_id])
+         if score >= conf_threshold:
+             x1, y1, x2, y2 = boxes_raw[j]
+             class_ids.append(cls_id)
+             confidences.append(score)
+             boxes_xywh.append([float(x1), float(y1), float(x2 - x1), float(y2 - y1)])
+
+     if not boxes_xywh:
+         return []
+
+     indices = cv2.dnn.NMSBoxes(boxes_xywh, confidences, conf_threshold, nms_threshold)
+     if len(indices) == 0:
+         return []
+     indices = np.array(indices).flatten()
+
+     detections = []
+     for i in indices:
+         x, y, w, h = boxes_xywh[i]
+         detections.append((class_ids[i], confidences[i], [x, y, x + w, y + h]))
+     return detections
+
+
+ def draw_detections(image, detections, output_path):
+     """Draw bounding boxes and labels on the image."""
+     out = image.copy()
+     h, w = out.shape[:2]
+     for cls_id, score, (x1, y1, x2, y2) in detections:
+         px1, py1 = int(x1 * w), int(y1 * h)
+         px2, py2 = int(x2 * w), int(y2 * h)
+         label = f"{COCO_CLASSES[cls_id]} {score:.2f}"
+         cv2.rectangle(out, (px1, py1), (px2, py2), (0, 0, 255), 2)
+         (tw, th), _ = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.6, 2)
+         cv2.rectangle(out, (px1, py1 - th - 6), (px1 + tw + 4, py1), (0, 0, 255), -1)
+         cv2.putText(out, label, (px1 + 2, py1 - 4), cv2.FONT_HERSHEY_SIMPLEX,
+                     0.6, (255, 255, 255), 2, cv2.LINE_AA)
+     cv2.imwrite(output_path, out)
+     print(f"Saved annotated image: {output_path}")
+
+
+ def main():
+     parser = argparse.ArgumentParser(description="YOLOv4 / YOLOv4x-mish ONNX demo (OpenCV DNN)")
+     parser.add_argument("--model", required=True, help="Path to ONNX model")
+     parser.add_argument("--image", default="example_outputs/input.jpg", help="Path to input image")
+     parser.add_argument("--output", default="output.jpg", help="Path for annotated output")
+     parser.add_argument("--input-size", type=int, default=0,
+                         help="Model input size (W=H). Default: auto-detect from filename or 416")
+     parser.add_argument("--conf", type=float, default=0.4, help="Confidence threshold")
+     parser.add_argument("--nms", type=float, default=0.5, help="NMS IoU threshold")
+     args = parser.parse_args()
+
+     # Determine input size: from arg, or by model name
+     if args.input_size > 0:
+         input_size = args.input_size
+     elif "mish" in args.model.lower():
+         input_size = 640
+     elif "tiny" in args.model.lower():
+         input_size = 416
+     else:
+         input_size = 608
+     print(f"Using input size: {input_size}x{input_size}")
+
+     # Load image
+     img = cv2.imread(args.image)
+     if img is None:
+         raise FileNotFoundError(f"Cannot read image: {args.image}")
+     print(f"Input image: {args.image} ({img.shape[1]}x{img.shape[0]})")
+
+     # Build blob: 1/255 scale, swap BGR -> RGB, no crop
+     blob = cv2.dnn.blobFromImage(img, 1.0 / 255.0,
+                                  (input_size, input_size),
+                                  swapRB=True, crop=False)
+     print(f"Blob shape: {blob.shape}")
+
+     # Load network with OpenCV DNN
+     net = cv2.dnn.readNetFromONNX(args.model)
+     net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
+     net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
+
+     # Inference
+     net.setInput(blob)
+     out_names = net.getUnconnectedOutLayersNames()
+     print(f"Output layer names: {out_names}")
+     outputs = net.forward(out_names)
+     for name, o in zip(out_names, outputs):
+         print(f"Output '{name}' shape: {o.shape}")
+
+     # Postprocess
+     detections = postprocess(outputs, args.conf, args.nms)
+     print(f"\nDetections (conf >= {args.conf}, after NMS): {len(detections)}")
+     for cls_id, score, (x1, y1, x2, y2) in detections:
+         print(f"  {COCO_CLASSES[cls_id]:15s} score={score:.4f} "
+               f"bbox=[{x1:.4f}, {y1:.4f}, {x2:.4f}, {y2:.4f}]")
+
+     draw_detections(img, detections, args.output)
+
+
+ if __name__ == "__main__":
+     main()
yolov4/example_outputs/input.jpg ADDED

Git LFS Details

  • SHA256: 5a9522051c3cec2bbd2f6323fccba32e8fbf3ddcc2b3e2fd46b04c720bc6f866
  • Pointer size: 131 Bytes
  • Size of remote file: 164 kB
yolov4/example_outputs/yolov4-tiny_output.jpg ADDED

Git LFS Details

  • SHA256: 8d81fef11da638a351baf3ffbb0730e32cf28f8d991988b9cafd899ffa93728c
  • Pointer size: 131 Bytes
  • Size of remote file: 189 kB
yolov4/example_outputs/yolov4_output.jpg ADDED

Git LFS Details

  • SHA256: 62805361b978882921097060b3d5d74ee99cd2f485a894f4a51a6ecfb5ad3fdd
  • Pointer size: 131 Bytes
  • Size of remote file: 187 kB
yolov4/example_outputs/yolov4x-mish_output.jpg ADDED

Git LFS Details

  • SHA256: 67f472623662275af1f36eda04bf8bb22a362624c9e380d126a6643211e328b8
  • Pointer size: 131 Bytes
  • Size of remote file: 186 kB
yolov4/yolo_layer.py ADDED
@@ -0,0 +1,327 @@
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ import numpy as np
+ from tool.torch_utils import *
+
+
+ def yolo_forward(output, conf_thresh, num_classes, anchors, num_anchors, scale_x_y, only_objectness=1,
+                  validation=False):
+     # Output would be invalid if it does not satisfy this assert
+     # assert (output.size(1) == (5 + num_classes) * num_anchors)
+
+     # Slice the second dimension (channel) of output into:
+     # [ 2, 2, 1, num_classes, 2, 2, 1, num_classes, 2, 2, 1, num_classes ]
+     # And then into
+     # bxy = [ 6 ]   bwh = [ 6 ]   det_conf = [ 3 ]   cls_conf = [ num_classes * 3 ]
+     batch = output.size(0)
+     H = output.size(2)
+     W = output.size(3)
+
+     bxy_list = []
+     bwh_list = []
+     det_confs_list = []
+     cls_confs_list = []
+
+     for i in range(num_anchors):
+         begin = i * (5 + num_classes)
+         end = (i + 1) * (5 + num_classes)
+
+         bxy_list.append(output[:, begin : begin + 2])
+         bwh_list.append(output[:, begin + 2 : begin + 4])
+         det_confs_list.append(output[:, begin + 4 : begin + 5])
+         cls_confs_list.append(output[:, begin + 5 : end])
+
+     # Shape: [batch, num_anchors * 2, H, W]
+     bxy = torch.cat(bxy_list, dim=1)
+     # Shape: [batch, num_anchors * 2, H, W]
+     bwh = torch.cat(bwh_list, dim=1)
+
+     # Shape: [batch, num_anchors, H, W]
+     det_confs = torch.cat(det_confs_list, dim=1)
+     # Shape: [batch, num_anchors * H * W]
+     det_confs = det_confs.view(batch, num_anchors * H * W)
+
+     # Shape: [batch, num_anchors * num_classes, H, W]
+     cls_confs = torch.cat(cls_confs_list, dim=1)
+     # Shape: [batch, num_anchors, num_classes, H * W]
+     cls_confs = cls_confs.view(batch, num_anchors, num_classes, H * W)
+     # Shape: [batch, num_anchors, num_classes, H * W] --> [batch, num_anchors * H * W, num_classes]
+     cls_confs = cls_confs.permute(0, 1, 3, 2).reshape(batch, num_anchors * H * W, num_classes)
+
+     # Apply sigmoid() and exp() to the slices
+     bxy = torch.sigmoid(bxy) * scale_x_y - 0.5 * (scale_x_y - 1)
+     bwh = torch.exp(bwh)
+     det_confs = torch.sigmoid(det_confs)
+     cls_confs = torch.sigmoid(cls_confs)
+
+     # Prepare C-x, C-y, P-w, P-h (none of them are torch related)
+     grid_x = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, W - 1, W), axis=0).repeat(H, 0), axis=0), axis=0)
+     grid_y = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, H - 1, H), axis=1).repeat(W, 1), axis=0), axis=0)
+     # grid_x = torch.linspace(0, W - 1, W).reshape(1, 1, 1, W).repeat(1, 1, H, 1)
+     # grid_y = torch.linspace(0, H - 1, H).reshape(1, 1, H, 1).repeat(1, 1, 1, W)
+
+     anchor_w = []
+     anchor_h = []
+     for i in range(num_anchors):
+         anchor_w.append(anchors[i * 2])
+         anchor_h.append(anchors[i * 2 + 1])
+
+     device = None
+     cuda_check = output.is_cuda
+     if cuda_check:
+         device = output.get_device()
+
+     bx_list = []
+     by_list = []
+     bw_list = []
+     bh_list = []
+
+     # Apply C-x, C-y, P-w, P-h
+     for i in range(num_anchors):
+         ii = i * 2
+         # Shape: [batch, 1, H, W]
+         bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32)
+         # Shape: [batch, 1, H, W]
+         by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32)
+         # Shape: [batch, 1, H, W]
+         bw = bwh[:, ii : ii + 1] * anchor_w[i]
+         # Shape: [batch, 1, H, W]
+         bh = bwh[:, ii + 1 : ii + 2] * anchor_h[i]
+
+         bx_list.append(bx)
+         by_list.append(by)
+         bw_list.append(bw)
+         bh_list.append(bh)
+
+     ########################################
+     #   Figure out bboxes from slices     #
+     ########################################
+
+     # Shape: [batch, num_anchors, H, W]
+     bx = torch.cat(bx_list, dim=1)
+     by = torch.cat(by_list, dim=1)
+     bw = torch.cat(bw_list, dim=1)
+     bh = torch.cat(bh_list, dim=1)
+
+     # Shape: [batch, 2 * num_anchors, H, W]
+     bx_bw = torch.cat((bx, bw), dim=1)
+     by_bh = torch.cat((by, bh), dim=1)
+
+     # normalize coordinates to [0, 1]
+     bx_bw /= W
+     by_bh /= H
+
+     # Shape: [batch, num_anchors * H * W, 1]
+     bx = bx_bw[:, :num_anchors].view(batch, num_anchors * H * W, 1)
+     by = by_bh[:, :num_anchors].view(batch, num_anchors * H * W, 1)
+     bw = bx_bw[:, num_anchors:].view(batch, num_anchors * H * W, 1)
+     bh = by_bh[:, num_anchors:].view(batch, num_anchors * H * W, 1)
+
+     bx1 = bx - bw * 0.5
+     by1 = by - bh * 0.5
+     bx2 = bx1 + bw
+     by2 = by1 + bh
+
+     # Shape: [batch, num_anchors * H * W, 4] -> [batch, num_anchors * H * W, 1, 4]
+     boxes = torch.cat((bx1, by1, bx2, by2), dim=2).view(batch, num_anchors * H * W, 1, 4)
+     # boxes = boxes.repeat(1, 1, num_classes, 1)
+
+     # boxes:     [batch, num_anchors * H * W, 1, 4]
+     # cls_confs: [batch, num_anchors * H * W, num_classes]
+     # det_confs: [batch, num_anchors * H * W]
+
+     det_confs = det_confs.view(batch, num_anchors * H * W, 1)
+     confs = cls_confs * det_confs
+
+     # boxes: [batch, num_anchors * H * W, 1, 4]
+     # confs: [batch, num_anchors * H * W, num_classes]
+
+     return boxes, confs
+
+
+ def yolo_forward_dynamic(output, conf_thresh, num_classes, anchors, num_anchors, scale_x_y, only_objectness=1,
+                          validation=False, new_coords=0):
+     # Same slicing as yolo_forward, but H/W/batch are read from the output
+     # tensor at runtime so the traced/exported ONNX graph stays dynamic.
+     bxy_list = []
+     bwh_list = []
+     det_confs_list = []
+     cls_confs_list = []
+
+     for i in range(num_anchors):
+         begin = i * (5 + num_classes)
+         end = (i + 1) * (5 + num_classes)
+
+         bxy_list.append(output[:, begin : begin + 2])
+         bwh_list.append(output[:, begin + 2 : begin + 4])
+         det_confs_list.append(output[:, begin + 4 : begin + 5])
+         cls_confs_list.append(output[:, begin + 5 : end])
+
+     # Shape: [batch, num_anchors * 2, H, W]
+     bxy = torch.cat(bxy_list, dim=1)
+     # Shape: [batch, num_anchors * 2, H, W]
+     bwh = torch.cat(bwh_list, dim=1)
+
+     # Shape: [batch, num_anchors, H, W]
+     det_confs = torch.cat(det_confs_list, dim=1)
+     # Shape: [batch, num_anchors * H * W]
+     det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3))
+
+     # Shape: [batch, num_anchors * num_classes, H, W]
+     cls_confs = torch.cat(cls_confs_list, dim=1)
+     # Shape: [batch, num_anchors, num_classes, H * W]
+     cls_confs = cls_confs.view(output.size(0), num_anchors, num_classes, output.size(2) * output.size(3))
+     # Shape: [batch, num_anchors, num_classes, H * W] --> [batch, num_anchors * H * W, num_classes]
+     cls_confs = cls_confs.permute(0, 1, 3, 2).reshape(output.size(0), num_anchors * output.size(2) * output.size(3), num_classes)
+
+     # Apply activations based on new_coords flag
+     if new_coords:
+         # new_coords=1: no sigmoid on xy/conf/cls, squared width/height instead of exp
+         bxy = bxy * scale_x_y - 0.5 * (scale_x_y - 1)
+         bwh = (bwh * 2) ** 2
+         # det_confs and cls_confs are used as-is (no sigmoid)
+     else:
+         # Standard YOLO: sigmoid on xy/conf/cls, exp on wh
+         bxy = torch.sigmoid(bxy) * scale_x_y - 0.5 * (scale_x_y - 1)
+         bwh = torch.exp(bwh)
+         det_confs = torch.sigmoid(det_confs)
+         cls_confs = torch.sigmoid(cls_confs)
+
+     # Prepare C-x, C-y, P-w, P-h (none of them are torch related)
+     grid_x = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, output.size(3) - 1, output.size(3)), axis=0).repeat(output.size(2), 0), axis=0), axis=0)
+     grid_y = np.expand_dims(np.expand_dims(np.expand_dims(np.linspace(0, output.size(2) - 1, output.size(2)), axis=1).repeat(output.size(3), 1), axis=0), axis=0)
+
+     anchor_w = []
+     anchor_h = []
+     for i in range(num_anchors):
+         anchor_w.append(anchors[i * 2])
+         anchor_h.append(anchors[i * 2 + 1])
+
+     device = None
+     cuda_check = output.is_cuda
+     if cuda_check:
+         device = output.get_device()
+
+     bx_list = []
+     by_list = []
+     bw_list = []
+     bh_list = []
+
+     # Apply C-x, C-y, P-w, P-h
+     for i in range(num_anchors):
+         ii = i * 2
+         # Shape: [batch, 1, H, W]
+         bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32)
+         # Shape: [batch, 1, H, W]
+         by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32)
+         # Shape: [batch, 1, H, W]
+         bw = bwh[:, ii : ii + 1] * anchor_w[i]
+         # Shape: [batch, 1, H, W]
+         bh = bwh[:, ii + 1 : ii + 2] * anchor_h[i]
+
+         bx_list.append(bx)
+         by_list.append(by)
+         bw_list.append(bw)
+         bh_list.append(bh)
+
+     ########################################
+     #   Figure out bboxes from slices     #
+     ########################################
+
+     # Shape: [batch, num_anchors, H, W]
+     bx = torch.cat(bx_list, dim=1)
+     by = torch.cat(by_list, dim=1)
+     bw = torch.cat(bw_list, dim=1)
+     bh = torch.cat(bh_list, dim=1)
+
+     # Shape: [batch, 2 * num_anchors, H, W]
+     bx_bw = torch.cat((bx, bw), dim=1)
+     by_bh = torch.cat((by, bh), dim=1)
+
+     # normalize coordinates to [0, 1]
+     bx_bw /= output.size(3)
+     by_bh /= output.size(2)
+
+     # Shape: [batch, num_anchors * H * W, 1]
+     bx = bx_bw[:, :num_anchors].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)
+     by = by_bh[:, :num_anchors].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)
+     bw = bx_bw[:, num_anchors:].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)
+     bh = by_bh[:, num_anchors:].view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)
+
+     bx1 = bx - bw * 0.5
+     by1 = by - bh * 0.5
+     bx2 = bx1 + bw
+     by2 = by1 + bh
+
+     # Shape: [batch, num_anchors * H * W, 4] -> [batch, num_anchors * H * W, 1, 4]
+     boxes = torch.cat((bx1, by1, bx2, by2), dim=2).view(output.size(0), num_anchors * output.size(2) * output.size(3), 1, 4)
+     # boxes = boxes.repeat(1, 1, num_classes, 1)
+
+     det_confs = det_confs.view(output.size(0), num_anchors * output.size(2) * output.size(3), 1)
+     confs = cls_confs * det_confs
+
+     # boxes: [batch, num_anchors * H * W, 1, 4]
+     # confs: [batch, num_anchors * H * W, num_classes]
+
+     return boxes, confs
+
+
+ class YoloLayer(nn.Module):
+     ''' Yolo layer
+     model_out: during inference, controls whether post-processing runs
+     inside or outside the model (True: outside).
+     '''
+     def __init__(self, anchor_mask=[], num_classes=0, anchors=[], num_anchors=1, stride=32, model_out=False):
+         super(YoloLayer, self).__init__()
+         self.anchor_mask = anchor_mask
+         self.num_classes = num_classes
+         self.anchors = anchors
+         self.num_anchors = num_anchors
+         self.anchor_step = len(anchors) // num_anchors
+         self.coord_scale = 1
+         self.noobject_scale = 1
+         self.object_scale = 5
+         self.class_scale = 1
+         self.thresh = 0.6
+         self.stride = stride
+         self.seen = 0
+         self.scale_x_y = 1
+         self.new_coords = 0
+
+         self.model_out = model_out
+
+     def forward(self, output, target=None):
+         if self.training:
+             return output
+         masked_anchors = []
+         for m in self.anchor_mask:
+             masked_anchors += self.anchors[m * self.anchor_step:(m + 1) * self.anchor_step]
+         masked_anchors = [anchor / self.stride for anchor in masked_anchors]
+
+         return yolo_forward_dynamic(output, self.thresh, self.num_classes, masked_anchors, len(self.anchor_mask), scale_x_y=self.scale_x_y, new_coords=self.new_coords)
+
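+
+ # Usage sketch (illustrative values; in practice anchors, masks and strides
+ # come from the .cfg via darknet2pytorch.py):
+ # layer = YoloLayer(anchor_mask=[0, 1, 2], num_classes=80,
+ #                   anchors=[12, 16, 19, 36, 40, 28], num_anchors=3, stride=8)
+ # layer.eval()
+ # boxes, confs = layer(torch.randn(1, 3 * (5 + 80), 76, 76))
+ # boxes: [1, 3*76*76, 1, 4], confs: [1, 3*76*76, 80]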
yolov4/yolov4-tiny.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24c558aaeca05d96337f48b94ac0d8d61b94f7a2591eab9a2bab87cabfd71e07
+ size 24316593
yolov4/yolov4.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb56ae63aad4ba2c320cb9275f62a9a864f8cae680135821c1810fa560a00512
+ size 257676998
yolov4/yolov4x-mish.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d14a605672e45d2c36618705c7e27124ed4e4189ad8db9ec0bd71d3ab0a860e
+ size 399210397