ruudra committed
Commit 5bacc5b · 1 parent: e9e4592

Upload 93 files

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. CONTRIBUTING.md +98 -0
  2. LICENSE +674 -0
  3. README.md +294 -12
  4. data/Argoverse.yaml +67 -0
  5. data/GlobalWheat2020.yaml +54 -0
  6. data/Objects365.yaml +114 -0
  7. data/SKU-110K.yaml +53 -0
  8. data/VOC.yaml +81 -0
  9. data/VisDrone.yaml +61 -0
  10. data/coco.yaml +45 -0
  11. data/coco128.yaml +30 -0
  12. data/hyps/hyp.Objects365.yaml +34 -0
  13. data/hyps/hyp.VOC.yaml +40 -0
  14. data/hyps/hyp.scratch-high.yaml +34 -0
  15. data/hyps/hyp.scratch-low.yaml +34 -0
  16. data/hyps/hyp.scratch-med.yaml +34 -0
  17. data/images/bus.jpg +0 -0
  18. data/images/zidane.jpg +0 -0
  19. data/scripts/download_weights.sh +20 -0
  20. data/scripts/get_coco.sh +27 -0
  21. data/scripts/get_coco128.sh +17 -0
  22. data/xView.yaml +102 -0
  23. detect.py +256 -0
  24. export.py +610 -0
  25. hubconf.py +145 -0
  26. models/__init__.py +0 -0
  27. models/common.py +748 -0
  28. models/experimental.py +104 -0
  29. models/hub/anchors.yaml +59 -0
  30. models/hub/yolov3-spp.yaml +51 -0
  31. models/hub/yolov3-tiny.yaml +41 -0
  32. models/hub/yolov3.yaml +51 -0
  33. models/hub/yolov5-bifpn.yaml +48 -0
  34. models/hub/yolov5-fpn.yaml +42 -0
  35. models/hub/yolov5-p2.yaml +54 -0
  36. models/hub/yolov5-p34.yaml +41 -0
  37. models/hub/yolov5-p6.yaml +56 -0
  38. models/hub/yolov5-p7.yaml +67 -0
  39. models/hub/yolov5-panet.yaml +48 -0
  40. models/hub/yolov5l6.yaml +60 -0
  41. models/hub/yolov5m6.yaml +60 -0
  42. models/hub/yolov5n6.yaml +60 -0
  43. models/hub/yolov5s-ghost.yaml +48 -0
  44. models/hub/yolov5s-transformer.yaml +48 -0
  45. models/hub/yolov5s6.yaml +60 -0
  46. models/hub/yolov5x6.yaml +60 -0
  47. models/tf.py +574 -0
  48. models/yolo.py +338 -0
  49. models/yolov5l.yaml +48 -0
  50. models/yolov5m.yaml +48 -0
CONTRIBUTING.md ADDED
@@ -0,0 +1,98 @@
+ ## Contributing to YOLOv5 🚀
+
+ We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible, whether it's:
+
+ - Reporting a bug
+ - Discussing the current state of the code
+ - Submitting a fix
+ - Proposing a new feature
+ - Becoming a maintainer
+
+ YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute, you will be
+ helping push the frontiers of what's possible in AI 😃!
+
+ ## Submitting a Pull Request (PR) 🛠️
+
+ Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
+
+ ### 1. Select File to Update
+
+ Select `requirements.txt` to update by clicking on it in GitHub.
+
+ <p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
+
+ ### 2. Click 'Edit this file'
+
+ The button is in the top-right corner.
+
+ <p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
+
+ ### 3. Make Changes
+
+ Change the `matplotlib` version from `3.2.2` to `3.3`.
+
+ <p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
+
+ ### 4. Preview Changes and Submit PR
+
+ Click on the **Preview changes** tab to verify your updates. At the bottom of the screen, select 'Create a **new branch**
+ for this commit', assign your branch a descriptive name such as `fix/matplotlib_version`, and click the green **Propose
+ changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
+
+ <p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
+
+ ### PR recommendations
+
+ To allow your work to be integrated as seamlessly as possible, we advise you to:
+
+ - ✅ Verify your PR is **up-to-date with upstream/master.** If your PR is behind upstream/master, an
+ automatic [GitHub Actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) merge may
+ be attempted by writing `/rebase` in a new comment, or by running the following commands, replacing 'feature' with
+ the name of your local branch:
+
+ ```bash
+ git remote add upstream https://github.com/ultralytics/yolov5.git
+ git fetch upstream
+ # git checkout feature  # <--- replace 'feature' with local branch name
+ git merge upstream/master
+ git push -u origin -f
+ ```
+
+ - ✅ Verify all Continuous Integration (CI) **checks are passing**.
+ - ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
+ but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
+
+ ## Submitting a Bug Report 🐛
+
+ If you spot a problem with YOLOv5, please submit a Bug Report!
+
+ For us to start investigating a possible problem, we need to be able to reproduce it ourselves first. We've created a few
+ short guidelines below to help users provide what we need in order to get started.
+
+ When asking a question, people will be better able to provide help if you provide **code** that they can easily
+ understand and use to **reproduce** the problem. This is referred to by community members as creating
+ a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
+ the problem should be (a short sketch follows the list below):
+
+ - ✅ **Minimal** – Use as little code as possible that still produces the same problem
+ - ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
+ - ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
+
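+ For illustration only, a minimal reproducible example for an inference problem might look like the sketch below.
+ This is a hypothetical report, not part of the repository: swap in whichever model, image and code path actually
+ trigger your issue, and state the package versions you used.
+
+ ```python
+ # Minimal, complete, reproducible sketch (illustrative only)
+ import torch
+
+ model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained model from PyTorch Hub
+ results = model('https://ultralytics.com/images/zidane.jpg')  # sample image from the README
+ results.print()  # describe how this output differs from what you expected
+ ```
+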
+ In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance, your code
+ should be:
+
+ - ✅ **Current** – Verify that your code is up-to-date with the current
+ GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new
+ copy to ensure your problem has not already been resolved by previous commits.
+ - ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this
+ repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
+
+ If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛
+ **Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose), providing
+ a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
+ understand and diagnose your problem.
+
+ ## License
+
+ By contributing, you agree that your contributions will be licensed under
+ the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/).
LICENSE ADDED
@@ -0,0 +1,674 @@
1
+ GNU GENERAL PUBLIC LICENSE
2
+ Version 3, 29 June 2007
3
+
4
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
5
+ Everyone is permitted to copy and distribute verbatim copies
6
+ of this license document, but changing it is not allowed.
7
+
8
+ Preamble
9
+
10
+ The GNU General Public License is a free, copyleft license for
11
+ software and other kinds of works.
12
+
13
+ The licenses for most software and other practical works are designed
14
+ to take away your freedom to share and change the works. By contrast,
15
+ the GNU General Public License is intended to guarantee your freedom to
16
+ share and change all versions of a program--to make sure it remains free
17
+ software for all its users. We, the Free Software Foundation, use the
18
+ GNU General Public License for most of our software; it applies also to
19
+ any other work released this way by its authors. You can apply it to
20
+ your programs, too.
21
+
22
+ When we speak of free software, we are referring to freedom, not
23
+ price. Our General Public Licenses are designed to make sure that you
24
+ have the freedom to distribute copies of free software (and charge for
25
+ them if you wish), that you receive source code or can get it if you
26
+ want it, that you can change the software or use pieces of it in new
27
+ free programs, and that you know you can do these things.
28
+
29
+ To protect your rights, we need to prevent others from denying you
30
+ these rights or asking you to surrender the rights. Therefore, you have
31
+ certain responsibilities if you distribute copies of the software, or if
32
+ you modify it: responsibilities to respect the freedom of others.
33
+
34
+ For example, if you distribute copies of such a program, whether
35
+ gratis or for a fee, you must pass on to the recipients the same
36
+ freedoms that you received. You must make sure that they, too, receive
37
+ or can get the source code. And you must show them these terms so they
38
+ know their rights.
39
+
40
+ Developers that use the GNU GPL protect your rights with two steps:
41
+ (1) assert copyright on the software, and (2) offer you this License
42
+ giving you legal permission to copy, distribute and/or modify it.
43
+
44
+ For the developers' and authors' protection, the GPL clearly explains
45
+ that there is no warranty for this free software. For both users' and
46
+ authors' sake, the GPL requires that modified versions be marked as
47
+ changed, so that their problems will not be attributed erroneously to
48
+ authors of previous versions.
49
+
50
+ Some devices are designed to deny users access to install or run
51
+ modified versions of the software inside them, although the manufacturer
52
+ can do so. This is fundamentally incompatible with the aim of
53
+ protecting users' freedom to change the software. The systematic
54
+ pattern of such abuse occurs in the area of products for individuals to
55
+ use, which is precisely where it is most unacceptable. Therefore, we
56
+ have designed this version of the GPL to prohibit the practice for those
57
+ products. If such problems arise substantially in other domains, we
58
+ stand ready to extend this provision to those domains in future versions
59
+ of the GPL, as needed to protect the freedom of users.
60
+
61
+ Finally, every program is threatened constantly by software patents.
62
+ States should not allow patents to restrict development and use of
63
+ software on general-purpose computers, but in those that do, we wish to
64
+ avoid the special danger that patents applied to a free program could
65
+ make it effectively proprietary. To prevent this, the GPL assures that
66
+ patents cannot be used to render the program non-free.
67
+
68
+ The precise terms and conditions for copying, distribution and
69
+ modification follow.
70
+
71
+ TERMS AND CONDITIONS
72
+
73
+ 0. Definitions.
74
+
75
+ "This License" refers to version 3 of the GNU General Public License.
76
+
77
+ "Copyright" also means copyright-like laws that apply to other kinds of
78
+ works, such as semiconductor masks.
79
+
80
+ "The Program" refers to any copyrightable work licensed under this
81
+ License. Each licensee is addressed as "you". "Licensees" and
82
+ "recipients" may be individuals or organizations.
83
+
84
+ To "modify" a work means to copy from or adapt all or part of the work
85
+ in a fashion requiring copyright permission, other than the making of an
86
+ exact copy. The resulting work is called a "modified version" of the
87
+ earlier work or a work "based on" the earlier work.
88
+
89
+ A "covered work" means either the unmodified Program or a work based
90
+ on the Program.
91
+
92
+ To "propagate" a work means to do anything with it that, without
93
+ permission, would make you directly or secondarily liable for
94
+ infringement under applicable copyright law, except executing it on a
95
+ computer or modifying a private copy. Propagation includes copying,
96
+ distribution (with or without modification), making available to the
97
+ public, and in some countries other activities as well.
98
+
99
+ To "convey" a work means any kind of propagation that enables other
100
+ parties to make or receive copies. Mere interaction with a user through
101
+ a computer network, with no transfer of a copy, is not conveying.
102
+
103
+ An interactive user interface displays "Appropriate Legal Notices"
104
+ to the extent that it includes a convenient and prominently visible
105
+ feature that (1) displays an appropriate copyright notice, and (2)
106
+ tells the user that there is no warranty for the work (except to the
107
+ extent that warranties are provided), that licensees may convey the
108
+ work under this License, and how to view a copy of this License. If
109
+ the interface presents a list of user commands or options, such as a
110
+ menu, a prominent item in the list meets this criterion.
111
+
112
+ 1. Source Code.
113
+
114
+ The "source code" for a work means the preferred form of the work
115
+ for making modifications to it. "Object code" means any non-source
116
+ form of a work.
117
+
118
+ A "Standard Interface" means an interface that either is an official
119
+ standard defined by a recognized standards body, or, in the case of
120
+ interfaces specified for a particular programming language, one that
121
+ is widely used among developers working in that language.
122
+
123
+ The "System Libraries" of an executable work include anything, other
124
+ than the work as a whole, that (a) is included in the normal form of
125
+ packaging a Major Component, but which is not part of that Major
126
+ Component, and (b) serves only to enable use of the work with that
127
+ Major Component, or to implement a Standard Interface for which an
128
+ implementation is available to the public in source code form. A
129
+ "Major Component", in this context, means a major essential component
130
+ (kernel, window system, and so on) of the specific operating system
131
+ (if any) on which the executable work runs, or a compiler used to
132
+ produce the work, or an object code interpreter used to run it.
133
+
134
+ The "Corresponding Source" for a work in object code form means all
135
+ the source code needed to generate, install, and (for an executable
136
+ work) run the object code and to modify the work, including scripts to
137
+ control those activities. However, it does not include the work's
138
+ System Libraries, or general-purpose tools or generally available free
139
+ programs which are used unmodified in performing those activities but
140
+ which are not part of the work. For example, Corresponding Source
141
+ includes interface definition files associated with source files for
142
+ the work, and the source code for shared libraries and dynamically
143
+ linked subprograms that the work is specifically designed to require,
144
+ such as by intimate data communication or control flow between those
145
+ subprograms and other parts of the work.
146
+
147
+ The Corresponding Source need not include anything that users
148
+ can regenerate automatically from other parts of the Corresponding
149
+ Source.
150
+
151
+ The Corresponding Source for a work in source code form is that
152
+ same work.
153
+
154
+ 2. Basic Permissions.
155
+
156
+ All rights granted under this License are granted for the term of
157
+ copyright on the Program, and are irrevocable provided the stated
158
+ conditions are met. This License explicitly affirms your unlimited
159
+ permission to run the unmodified Program. The output from running a
160
+ covered work is covered by this License only if the output, given its
161
+ content, constitutes a covered work. This License acknowledges your
162
+ rights of fair use or other equivalent, as provided by copyright law.
163
+
164
+ You may make, run and propagate covered works that you do not
165
+ convey, without conditions so long as your license otherwise remains
166
+ in force. You may convey covered works to others for the sole purpose
167
+ of having them make modifications exclusively for you, or provide you
168
+ with facilities for running those works, provided that you comply with
169
+ the terms of this License in conveying all material for which you do
170
+ not control copyright. Those thus making or running the covered works
171
+ for you must do so exclusively on your behalf, under your direction
172
+ and control, on terms that prohibit them from making any copies of
173
+ your copyrighted material outside their relationship with you.
174
+
175
+ Conveying under any other circumstances is permitted solely under
176
+ the conditions stated below. Sublicensing is not allowed; section 10
177
+ makes it unnecessary.
178
+
179
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180
+
181
+ No covered work shall be deemed part of an effective technological
182
+ measure under any applicable law fulfilling obligations under article
183
+ 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184
+ similar laws prohibiting or restricting circumvention of such
185
+ measures.
186
+
187
+ When you convey a covered work, you waive any legal power to forbid
188
+ circumvention of technological measures to the extent such circumvention
189
+ is effected by exercising rights under this License with respect to
190
+ the covered work, and you disclaim any intention to limit operation or
191
+ modification of the work as a means of enforcing, against the work's
192
+ users, your or third parties' legal rights to forbid circumvention of
193
+ technological measures.
194
+
195
+ 4. Conveying Verbatim Copies.
196
+
197
+ You may convey verbatim copies of the Program's source code as you
198
+ receive it, in any medium, provided that you conspicuously and
199
+ appropriately publish on each copy an appropriate copyright notice;
200
+ keep intact all notices stating that this License and any
201
+ non-permissive terms added in accord with section 7 apply to the code;
202
+ keep intact all notices of the absence of any warranty; and give all
203
+ recipients a copy of this License along with the Program.
204
+
205
+ You may charge any price or no price for each copy that you convey,
206
+ and you may offer support or warranty protection for a fee.
207
+
208
+ 5. Conveying Modified Source Versions.
209
+
210
+ You may convey a work based on the Program, or the modifications to
211
+ produce it from the Program, in the form of source code under the
212
+ terms of section 4, provided that you also meet all of these conditions:
213
+
214
+ a) The work must carry prominent notices stating that you modified
215
+ it, and giving a relevant date.
216
+
217
+ b) The work must carry prominent notices stating that it is
218
+ released under this License and any conditions added under section
219
+ 7. This requirement modifies the requirement in section 4 to
220
+ "keep intact all notices".
221
+
222
+ c) You must license the entire work, as a whole, under this
223
+ License to anyone who comes into possession of a copy. This
224
+ License will therefore apply, along with any applicable section 7
225
+ additional terms, to the whole of the work, and all its parts,
226
+ regardless of how they are packaged. This License gives no
227
+ permission to license the work in any other way, but it does not
228
+ invalidate such permission if you have separately received it.
229
+
230
+ d) If the work has interactive user interfaces, each must display
231
+ Appropriate Legal Notices; however, if the Program has interactive
232
+ interfaces that do not display Appropriate Legal Notices, your
233
+ work need not make them do so.
234
+
235
+ A compilation of a covered work with other separate and independent
236
+ works, which are not by their nature extensions of the covered work,
237
+ and which are not combined with it such as to form a larger program,
238
+ in or on a volume of a storage or distribution medium, is called an
239
+ "aggregate" if the compilation and its resulting copyright are not
240
+ used to limit the access or legal rights of the compilation's users
241
+ beyond what the individual works permit. Inclusion of a covered work
242
+ in an aggregate does not cause this License to apply to the other
243
+ parts of the aggregate.
244
+
245
+ 6. Conveying Non-Source Forms.
246
+
247
+ You may convey a covered work in object code form under the terms
248
+ of sections 4 and 5, provided that you also convey the
249
+ machine-readable Corresponding Source under the terms of this License,
250
+ in one of these ways:
251
+
252
+ a) Convey the object code in, or embodied in, a physical product
253
+ (including a physical distribution medium), accompanied by the
254
+ Corresponding Source fixed on a durable physical medium
255
+ customarily used for software interchange.
256
+
257
+ b) Convey the object code in, or embodied in, a physical product
258
+ (including a physical distribution medium), accompanied by a
259
+ written offer, valid for at least three years and valid for as
260
+ long as you offer spare parts or customer support for that product
261
+ model, to give anyone who possesses the object code either (1) a
262
+ copy of the Corresponding Source for all the software in the
263
+ product that is covered by this License, on a durable physical
264
+ medium customarily used for software interchange, for a price no
265
+ more than your reasonable cost of physically performing this
266
+ conveying of source, or (2) access to copy the
267
+ Corresponding Source from a network server at no charge.
268
+
269
+ c) Convey individual copies of the object code with a copy of the
270
+ written offer to provide the Corresponding Source. This
271
+ alternative is allowed only occasionally and noncommercially, and
272
+ only if you received the object code with such an offer, in accord
273
+ with subsection 6b.
274
+
275
+ d) Convey the object code by offering access from a designated
276
+ place (gratis or for a charge), and offer equivalent access to the
277
+ Corresponding Source in the same way through the same place at no
278
+ further charge. You need not require recipients to copy the
279
+ Corresponding Source along with the object code. If the place to
280
+ copy the object code is a network server, the Corresponding Source
281
+ may be on a different server (operated by you or a third party)
282
+ that supports equivalent copying facilities, provided you maintain
283
+ clear directions next to the object code saying where to find the
284
+ Corresponding Source. Regardless of what server hosts the
285
+ Corresponding Source, you remain obligated to ensure that it is
286
+ available for as long as needed to satisfy these requirements.
287
+
288
+ e) Convey the object code using peer-to-peer transmission, provided
289
+ you inform other peers where the object code and Corresponding
290
+ Source of the work are being offered to the general public at no
291
+ charge under subsection 6d.
292
+
293
+ A separable portion of the object code, whose source code is excluded
294
+ from the Corresponding Source as a System Library, need not be
295
+ included in conveying the object code work.
296
+
297
+ A "User Product" is either (1) a "consumer product", which means any
298
+ tangible personal property which is normally used for personal, family,
299
+ or household purposes, or (2) anything designed or sold for incorporation
300
+ into a dwelling. In determining whether a product is a consumer product,
301
+ doubtful cases shall be resolved in favor of coverage. For a particular
302
+ product received by a particular user, "normally used" refers to a
303
+ typical or common use of that class of product, regardless of the status
304
+ of the particular user or of the way in which the particular user
305
+ actually uses, or expects or is expected to use, the product. A product
306
+ is a consumer product regardless of whether the product has substantial
307
+ commercial, industrial or non-consumer uses, unless such uses represent
308
+ the only significant mode of use of the product.
309
+
310
+ "Installation Information" for a User Product means any methods,
311
+ procedures, authorization keys, or other information required to install
312
+ and execute modified versions of a covered work in that User Product from
313
+ a modified version of its Corresponding Source. The information must
314
+ suffice to ensure that the continued functioning of the modified object
315
+ code is in no case prevented or interfered with solely because
316
+ modification has been made.
317
+
318
+ If you convey an object code work under this section in, or with, or
319
+ specifically for use in, a User Product, and the conveying occurs as
320
+ part of a transaction in which the right of possession and use of the
321
+ User Product is transferred to the recipient in perpetuity or for a
322
+ fixed term (regardless of how the transaction is characterized), the
323
+ Corresponding Source conveyed under this section must be accompanied
324
+ by the Installation Information. But this requirement does not apply
325
+ if neither you nor any third party retains the ability to install
326
+ modified object code on the User Product (for example, the work has
327
+ been installed in ROM).
328
+
329
+ The requirement to provide Installation Information does not include a
330
+ requirement to continue to provide support service, warranty, or updates
331
+ for a work that has been modified or installed by the recipient, or for
332
+ the User Product in which it has been modified or installed. Access to a
333
+ network may be denied when the modification itself materially and
334
+ adversely affects the operation of the network or violates the rules and
335
+ protocols for communication across the network.
336
+
337
+ Corresponding Source conveyed, and Installation Information provided,
338
+ in accord with this section must be in a format that is publicly
339
+ documented (and with an implementation available to the public in
340
+ source code form), and must require no special password or key for
341
+ unpacking, reading or copying.
342
+
343
+ 7. Additional Terms.
344
+
345
+ "Additional permissions" are terms that supplement the terms of this
346
+ License by making exceptions from one or more of its conditions.
347
+ Additional permissions that are applicable to the entire Program shall
348
+ be treated as though they were included in this License, to the extent
349
+ that they are valid under applicable law. If additional permissions
350
+ apply only to part of the Program, that part may be used separately
351
+ under those permissions, but the entire Program remains governed by
352
+ this License without regard to the additional permissions.
353
+
354
+ When you convey a copy of a covered work, you may at your option
355
+ remove any additional permissions from that copy, or from any part of
356
+ it. (Additional permissions may be written to require their own
357
+ removal in certain cases when you modify the work.) You may place
358
+ additional permissions on material, added by you to a covered work,
359
+ for which you have or can give appropriate copyright permission.
360
+
361
+ Notwithstanding any other provision of this License, for material you
362
+ add to a covered work, you may (if authorized by the copyright holders of
363
+ that material) supplement the terms of this License with terms:
364
+
365
+ a) Disclaiming warranty or limiting liability differently from the
366
+ terms of sections 15 and 16 of this License; or
367
+
368
+ b) Requiring preservation of specified reasonable legal notices or
369
+ author attributions in that material or in the Appropriate Legal
370
+ Notices displayed by works containing it; or
371
+
372
+ c) Prohibiting misrepresentation of the origin of that material, or
373
+ requiring that modified versions of such material be marked in
374
+ reasonable ways as different from the original version; or
375
+
376
+ d) Limiting the use for publicity purposes of names of licensors or
377
+ authors of the material; or
378
+
379
+ e) Declining to grant rights under trademark law for use of some
380
+ trade names, trademarks, or service marks; or
381
+
382
+ f) Requiring indemnification of licensors and authors of that
383
+ material by anyone who conveys the material (or modified versions of
384
+ it) with contractual assumptions of liability to the recipient, for
385
+ any liability that these contractual assumptions directly impose on
386
+ those licensors and authors.
387
+
388
+ All other non-permissive additional terms are considered "further
389
+ restrictions" within the meaning of section 10. If the Program as you
390
+ received it, or any part of it, contains a notice stating that it is
391
+ governed by this License along with a term that is a further
392
+ restriction, you may remove that term. If a license document contains
393
+ a further restriction but permits relicensing or conveying under this
394
+ License, you may add to a covered work material governed by the terms
395
+ of that license document, provided that the further restriction does
396
+ not survive such relicensing or conveying.
397
+
398
+ If you add terms to a covered work in accord with this section, you
399
+ must place, in the relevant source files, a statement of the
400
+ additional terms that apply to those files, or a notice indicating
401
+ where to find the applicable terms.
402
+
403
+ Additional terms, permissive or non-permissive, may be stated in the
404
+ form of a separately written license, or stated as exceptions;
405
+ the above requirements apply either way.
406
+
407
+ 8. Termination.
408
+
409
+ You may not propagate or modify a covered work except as expressly
410
+ provided under this License. Any attempt otherwise to propagate or
411
+ modify it is void, and will automatically terminate your rights under
412
+ this License (including any patent licenses granted under the third
413
+ paragraph of section 11).
414
+
415
+ However, if you cease all violation of this License, then your
416
+ license from a particular copyright holder is reinstated (a)
417
+ provisionally, unless and until the copyright holder explicitly and
418
+ finally terminates your license, and (b) permanently, if the copyright
419
+ holder fails to notify you of the violation by some reasonable means
420
+ prior to 60 days after the cessation.
421
+
422
+ Moreover, your license from a particular copyright holder is
423
+ reinstated permanently if the copyright holder notifies you of the
424
+ violation by some reasonable means, this is the first time you have
425
+ received notice of violation of this License (for any work) from that
426
+ copyright holder, and you cure the violation prior to 30 days after
427
+ your receipt of the notice.
428
+
429
+ Termination of your rights under this section does not terminate the
430
+ licenses of parties who have received copies or rights from you under
431
+ this License. If your rights have been terminated and not permanently
432
+ reinstated, you do not qualify to receive new licenses for the same
433
+ material under section 10.
434
+
435
+ 9. Acceptance Not Required for Having Copies.
436
+
437
+ You are not required to accept this License in order to receive or
438
+ run a copy of the Program. Ancillary propagation of a covered work
439
+ occurring solely as a consequence of using peer-to-peer transmission
440
+ to receive a copy likewise does not require acceptance. However,
441
+ nothing other than this License grants you permission to propagate or
442
+ modify any covered work. These actions infringe copyright if you do
443
+ not accept this License. Therefore, by modifying or propagating a
444
+ covered work, you indicate your acceptance of this License to do so.
445
+
446
+ 10. Automatic Licensing of Downstream Recipients.
447
+
448
+ Each time you convey a covered work, the recipient automatically
449
+ receives a license from the original licensors, to run, modify and
450
+ propagate that work, subject to this License. You are not responsible
451
+ for enforcing compliance by third parties with this License.
452
+
453
+ An "entity transaction" is a transaction transferring control of an
454
+ organization, or substantially all assets of one, or subdividing an
455
+ organization, or merging organizations. If propagation of a covered
456
+ work results from an entity transaction, each party to that
457
+ transaction who receives a copy of the work also receives whatever
458
+ licenses to the work the party's predecessor in interest had or could
459
+ give under the previous paragraph, plus a right to possession of the
460
+ Corresponding Source of the work from the predecessor in interest, if
461
+ the predecessor has it or can get it with reasonable efforts.
462
+
463
+ You may not impose any further restrictions on the exercise of the
464
+ rights granted or affirmed under this License. For example, you may
465
+ not impose a license fee, royalty, or other charge for exercise of
466
+ rights granted under this License, and you may not initiate litigation
467
+ (including a cross-claim or counterclaim in a lawsuit) alleging that
468
+ any patent claim is infringed by making, using, selling, offering for
469
+ sale, or importing the Program or any portion of it.
470
+
471
+ 11. Patents.
472
+
473
+ A "contributor" is a copyright holder who authorizes use under this
474
+ License of the Program or a work on which the Program is based. The
475
+ work thus licensed is called the contributor's "contributor version".
476
+
477
+ A contributor's "essential patent claims" are all patent claims
478
+ owned or controlled by the contributor, whether already acquired or
479
+ hereafter acquired, that would be infringed by some manner, permitted
480
+ by this License, of making, using, or selling its contributor version,
481
+ but do not include claims that would be infringed only as a
482
+ consequence of further modification of the contributor version. For
483
+ purposes of this definition, "control" includes the right to grant
484
+ patent sublicenses in a manner consistent with the requirements of
485
+ this License.
486
+
487
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
488
+ patent license under the contributor's essential patent claims, to
489
+ make, use, sell, offer for sale, import and otherwise run, modify and
490
+ propagate the contents of its contributor version.
491
+
492
+ In the following three paragraphs, a "patent license" is any express
493
+ agreement or commitment, however denominated, not to enforce a patent
494
+ (such as an express permission to practice a patent or covenant not to
495
+ sue for patent infringement). To "grant" such a patent license to a
496
+ party means to make such an agreement or commitment not to enforce a
497
+ patent against the party.
498
+
499
+ If you convey a covered work, knowingly relying on a patent license,
500
+ and the Corresponding Source of the work is not available for anyone
501
+ to copy, free of charge and under the terms of this License, through a
502
+ publicly available network server or other readily accessible means,
503
+ then you must either (1) cause the Corresponding Source to be so
504
+ available, or (2) arrange to deprive yourself of the benefit of the
505
+ patent license for this particular work, or (3) arrange, in a manner
506
+ consistent with the requirements of this License, to extend the patent
507
+ license to downstream recipients. "Knowingly relying" means you have
508
+ actual knowledge that, but for the patent license, your conveying the
509
+ covered work in a country, or your recipient's use of the covered work
510
+ in a country, would infringe one or more identifiable patents in that
511
+ country that you have reason to believe are valid.
512
+
513
+ If, pursuant to or in connection with a single transaction or
514
+ arrangement, you convey, or propagate by procuring conveyance of, a
515
+ covered work, and grant a patent license to some of the parties
516
+ receiving the covered work authorizing them to use, propagate, modify
517
+ or convey a specific copy of the covered work, then the patent license
518
+ you grant is automatically extended to all recipients of the covered
519
+ work and works based on it.
520
+
521
+ A patent license is "discriminatory" if it does not include within
522
+ the scope of its coverage, prohibits the exercise of, or is
523
+ conditioned on the non-exercise of one or more of the rights that are
524
+ specifically granted under this License. You may not convey a covered
525
+ work if you are a party to an arrangement with a third party that is
526
+ in the business of distributing software, under which you make payment
527
+ to the third party based on the extent of your activity of conveying
528
+ the work, and under which the third party grants, to any of the
529
+ parties who would receive the covered work from you, a discriminatory
530
+ patent license (a) in connection with copies of the covered work
531
+ conveyed by you (or copies made from those copies), or (b) primarily
532
+ for and in connection with specific products or compilations that
533
+ contain the covered work, unless you entered into that arrangement,
534
+ or that patent license was granted, prior to 28 March 2007.
535
+
536
+ Nothing in this License shall be construed as excluding or limiting
537
+ any implied license or other defenses to infringement that may
538
+ otherwise be available to you under applicable patent law.
539
+
540
+ 12. No Surrender of Others' Freedom.
541
+
542
+ If conditions are imposed on you (whether by court order, agreement or
543
+ otherwise) that contradict the conditions of this License, they do not
544
+ excuse you from the conditions of this License. If you cannot convey a
545
+ covered work so as to satisfy simultaneously your obligations under this
546
+ License and any other pertinent obligations, then as a consequence you may
547
+ not convey it at all. For example, if you agree to terms that obligate you
548
+ to collect a royalty for further conveying from those to whom you convey
549
+ the Program, the only way you could satisfy both those terms and this
550
+ License would be to refrain entirely from conveying the Program.
551
+
552
+ 13. Use with the GNU Affero General Public License.
553
+
554
+ Notwithstanding any other provision of this License, you have
555
+ permission to link or combine any covered work with a work licensed
556
+ under version 3 of the GNU Affero General Public License into a single
557
+ combined work, and to convey the resulting work. The terms of this
558
+ License will continue to apply to the part which is the covered work,
559
+ but the special requirements of the GNU Affero General Public License,
560
+ section 13, concerning interaction through a network will apply to the
561
+ combination as such.
562
+
563
+ 14. Revised Versions of this License.
564
+
565
+ The Free Software Foundation may publish revised and/or new versions of
566
+ the GNU General Public License from time to time. Such new versions will
567
+ be similar in spirit to the present version, but may differ in detail to
568
+ address new problems or concerns.
569
+
570
+ Each version is given a distinguishing version number. If the
571
+ Program specifies that a certain numbered version of the GNU General
572
+ Public License "or any later version" applies to it, you have the
573
+ option of following the terms and conditions either of that numbered
574
+ version or of any later version published by the Free Software
575
+ Foundation. If the Program does not specify a version number of the
576
+ GNU General Public License, you may choose any version ever published
577
+ by the Free Software Foundation.
578
+
579
+ If the Program specifies that a proxy can decide which future
580
+ versions of the GNU General Public License can be used, that proxy's
581
+ public statement of acceptance of a version permanently authorizes you
582
+ to choose that version for the Program.
583
+
584
+ Later license versions may give you additional or different
585
+ permissions. However, no additional obligations are imposed on any
586
+ author or copyright holder as a result of your choosing to follow a
587
+ later version.
588
+
589
+ 15. Disclaimer of Warranty.
590
+
591
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592
+ APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593
+ HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594
+ OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595
+ THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596
+ PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597
+ IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598
+ ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599
+
600
+ 16. Limitation of Liability.
601
+
602
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604
+ THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605
+ GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606
+ USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607
+ DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608
+ PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609
+ EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610
+ SUCH DAMAGES.
611
+
612
+ 17. Interpretation of Sections 15 and 16.
613
+
614
+ If the disclaimer of warranty and limitation of liability provided
615
+ above cannot be given local legal effect according to their terms,
616
+ reviewing courts shall apply local law that most closely approximates
617
+ an absolute waiver of all civil liability in connection with the
618
+ Program, unless a warranty or assumption of liability accompanies a
619
+ copy of the Program in return for a fee.
620
+
621
+ END OF TERMS AND CONDITIONS
622
+
623
+ How to Apply These Terms to Your New Programs
624
+
625
+ If you develop a new program, and you want it to be of the greatest
626
+ possible use to the public, the best way to achieve this is to make it
627
+ free software which everyone can redistribute and change under these terms.
628
+
629
+ To do so, attach the following notices to the program. It is safest
630
+ to attach them to the start of each source file to most effectively
631
+ state the exclusion of warranty; and each file should have at least
632
+ the "copyright" line and a pointer to where the full notice is found.
633
+
634
+ <one line to give the program's name and a brief idea of what it does.>
635
+ Copyright (C) <year> <name of author>
636
+
637
+ This program is free software: you can redistribute it and/or modify
638
+ it under the terms of the GNU General Public License as published by
639
+ the Free Software Foundation, either version 3 of the License, or
640
+ (at your option) any later version.
641
+
642
+ This program is distributed in the hope that it will be useful,
643
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
644
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645
+ GNU General Public License for more details.
646
+
647
+ You should have received a copy of the GNU General Public License
648
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
649
+
650
+ Also add information on how to contact you by electronic and paper mail.
651
+
652
+ If the program does terminal interaction, make it output a short
653
+ notice like this when it starts in an interactive mode:
654
+
655
+ <program> Copyright (C) <year> <name of author>
656
+ This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657
+ This is free software, and you are welcome to redistribute it
658
+ under certain conditions; type `show c' for details.
659
+
660
+ The hypothetical commands `show w' and `show c' should show the appropriate
661
+ parts of the General Public License. Of course, your program's commands
662
+ might be different; for a GUI interface, you would use an "about box".
663
+
664
+ You should also get your employer (if you work as a programmer) or school,
665
+ if any, to sign a "copyright disclaimer" for the program, if necessary.
666
+ For more information on this, and how to apply and follow the GNU GPL, see
667
+ <http://www.gnu.org/licenses/>.
668
+
669
+ The GNU General Public License does not permit incorporating your program
670
+ into proprietary programs. If your program is a subroutine library, you
671
+ may consider it more useful to permit linking proprietary applications with
672
+ the library. If this is what you want to do, use the GNU Lesser General
673
+ Public License instead of this License. But first, please read
674
+ <http://www.gnu.org/philosophy/why-not-lgpl.html>.
README.md CHANGED
@@ -1,20 +1,302 @@
- ---
- pipeline_tag: object-detection
- ---

- ### How to use

- Here is how to use this model:

  ```python
- %cd trial-obj-det/yolov5
- !pip install -qr requirements.txt # install dependencies (ignore errors)
  import torch
- from IPython.display import Image, clear_output # to display images
- from utils.downloads import attempt_download # to download models/datasets

- # use the best weights!
- %cd trial-obj-det/yolov5
- !python detect.py --weights ../best.pt --img 416 --conf 0.4 --source img.png
  ```

+ <div align="center">
+ <p>
+ <a align="left" href="https://ultralytics.com/yolov5" target="_blank">
+ <img width="850" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/splash.jpg"></a>
+ </p>

+ English | [简体中文](.github/README_cn.md)
+ <br>
+ <div>
+ <a href="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml"><img src="https://github.com/ultralytics/yolov5/actions/workflows/ci-testing.yml/badge.svg" alt="CI CPU testing"></a>
+ <a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv5 Citation"></a>
+ <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
+ <br>
+ <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
+ <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
+ <a href="https://join.slack.com/t/ultralytics/shared_invite/zt-w29ei8bp-jczz7QYUmDtgo6r6KcMIAg"><img src="https://img.shields.io/badge/Slack-Join_Forum-blue.svg?logo=slack" alt="Join Forum"></a>
+ </div>

+ <br>
+ <p>
+ YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents <a href="https://ultralytics.com">Ultralytics</a>
+ open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
+ </p>
+
+ <div align="center">
+ <a href="https://github.com/ultralytics">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="2%"/>
+ </a>
+ <img width="2%" />
+ <a href="https://www.linkedin.com/company/ultralytics">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="2%"/>
+ </a>
+ <img width="2%" />
+ <a href="https://twitter.com/ultralytics">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="2%"/>
+ </a>
+ <img width="2%" />
+ <a href="https://www.producthunt.com/@glenn_jocher">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-producthunt.png" width="2%"/>
+ </a>
+ <img width="2%" />
+ <a href="https://youtube.com/ultralytics">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="2%"/>
+ </a>
+ <img width="2%" />
+ <a href="https://www.facebook.com/ultralytics">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="2%"/>
+ </a>
+ <img width="2%" />
+ <a href="https://www.instagram.com/ultralytics/">
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="2%"/>
+ </a>
+ </div>
+
+ <!--
+ <a align="center" href="https://ultralytics.com/yolov5" target="_blank">
+ <img width="800" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-api.png"></a>
+ -->
+
+ </div>
+
+ ## <div align="center">Documentation</div>
+
+ See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.
+
+ ## <div align="center">Quick Start Examples</div>
+
+ <details open>
+ <summary>Install</summary>
+
+ Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a
+ [**Python>=3.7.0**](https://www.python.org/) environment, including
+ [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
+
+ ```bash
+ git clone https://github.com/ultralytics/yolov5  # clone
+ cd yolov5
+ pip install -r requirements.txt  # install
+ ```
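+
+ As a quick sanity check (an illustrative sketch, not part of the install instructions), you can confirm that the
+ Python and PyTorch requirements above are met:
+
+ ```python
+ # verify the environment meets the Python>=3.7.0 / PyTorch>=1.7 requirements
+ import sys
+ import torch
+
+ print(sys.version)                 # Python version
+ print(torch.__version__)           # PyTorch version
+ print(torch.cuda.is_available())   # True if a CUDA GPU is usable (optional for inference)
+ ```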
+
+ </details>
+
+ <details open>
+ <summary>Inference</summary>
+
+ YOLOv5 [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) inference. [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest
+ YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).

  ```python
  import torch

+ # Model
+ model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # or yolov5n - yolov5x6, custom
+
+ # Images
+ img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list
+
+ # Inference
+ results = model(img)
+
+ # Results
+ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
+ ```
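+
+ The `results` object can also be post-processed in code, for example as a pandas DataFrame. The sketch below is
+ based on the `.pandas()` helper mentioned in the comment above and assumes the default detection columns:
+
+ ```python
+ df = results.pandas().xyxy[0]  # detections for the first image as a DataFrame
+ # expected columns: xmin, ymin, xmax, ymax, confidence, class, name
+ print(df)
+ ```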
104
+
105
+ </details>
106
+
107
+ <details>
108
+ <summary>Inference with detect.py</summary>
109
+
110
+ `detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from
111
+ the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
112
+
113
+ ```bash
114
+ python detect.py --source 0 # webcam
115
+ img.jpg # image
116
+ vid.mp4 # video
117
+ path/ # directory
118
+ path/*.jpg # glob
119
+ 'https://youtu.be/Zgi9g1ksQHc' # YouTube
120
+ 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
121
+ ```
122
+
123
+ </details>
124
+
125
+ <details>
126
+ <summary>Training</summary>
127
+
128
+ The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh)
129
+ results. [Models](https://github.com/ultralytics/yolov5/tree/master/models)
130
+ and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest
131
+ YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are
132
+ 1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) times faster). Use the
133
+ largest `--batch-size` possible, or pass `--batch-size -1` for
134
+ YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.
135
+
136
+ ```bash
137
+ python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
138
+ yolov5s 64
139
+ yolov5m 40
140
+ yolov5l 24
141
+ yolov5x 16
142
  ```
143
 
144
+ <img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
145
+
146
+ </details>
147
+
148
+ <details open>
149
+ <summary>Tutorials</summary>
150
+
151
+ - [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)  🚀 RECOMMENDED
152
+ - [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)  ☘️
153
+ RECOMMENDED
154
+ - [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)  🌟 NEW
155
+ - [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975)  🌟 NEW
156
+ - [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
157
+ - [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)  ⭐ NEW
158
+ - [TFLite, ONNX, CoreML, TensorRT Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
159
+ - [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
160
+ - [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
161
+ - [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
162
+ - [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
163
+ - [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314)  ⭐ NEW
164
+ - [Architecture Summary](https://github.com/ultralytics/yolov5/issues/6998)  ⭐ NEW
165
+
166
+ </details>
167
+
168
+ ## <div align="center">Environments</div>
169
+
170
+ Get started in seconds with our verified environments. Click each icon below for details.
171
+
172
+ <div align="center">
173
+ <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">
174
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-colab-small.png" width="15%"/>
175
+ </a>
176
+ <a href="https://www.kaggle.com/ultralytics/yolov5">
177
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-kaggle-small.png" width="15%"/>
178
+ </a>
179
+ <a href="https://hub.docker.com/r/ultralytics/yolov5">
180
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-docker-small.png" width="15%"/>
181
+ </a>
182
+ <a href="https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart">
183
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-aws-small.png" width="15%"/>
184
+ </a>
185
+ <a href="https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart">
186
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-gcp-small.png" width="15%"/>
187
+ </a>
188
+ </div>
189
+
190
+ ## <div align="center">Integrations</div>
191
+
192
+ <div align="center">
193
+ <a href="https://wandb.ai/site?utm_campaign=repo_yolo_readme">
194
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-wb-long.png" width="49%"/>
195
+ </a>
196
+ <a href="https://roboflow.com/?ref=ultralytics">
197
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-roboflow-long.png" width="49%"/>
198
+ </a>
199
+ </div>
200
+
201
+ |Weights &amp; Biases|Roboflow ⭐ NEW|
202
+ |:-:|:-:|
203
+ |Automatically track and visualize all your YOLOv5 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)|Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) |
204
+
205
+ <!-- ## <div align="center">Compete and Win</div>
206
+
207
+ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
208
+
209
+ <p align="center">
210
+ <a href="https://github.com/ultralytics/yolov5/discussions/3213">
211
+ <img width="850" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-export-competition.png"></a>
212
+ </p> -->
213
+
214
+ ## <div align="center">Why YOLOv5</div>
215
+
216
+ <p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png"></p>
217
+ <details>
218
+ <summary>YOLOv5-P5 640 Figure (click to expand)</summary>
219
+
220
+ <p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png"></p>
221
+ </details>
222
+ <details>
223
+ <summary>Figure Notes (click to expand)</summary>
224
+
225
+ - **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
226
+ - **GPU Speed** measures average inference time per image on the [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
227
+ - **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
228
+ - **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
229
+
230
+ </details>
231
+
232
+ ### Pretrained Checkpoints
233
+
234
+ |Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)
235
+ |--- |--- |--- |--- |--- |--- |--- |--- |---
236
+ |[YOLOv5n][assets] |640 |28.0 |45.7 |**45** |**6.3**|**0.6**|**1.9**|**4.5**
237
+ |[YOLOv5s][assets] |640 |37.4 |56.8 |98 |6.4 |0.9 |7.2 |16.5
238
+ |[YOLOv5m][assets] |640 |45.4 |64.1 |224 |8.2 |1.7 |21.2 |49.0
239
+ |[YOLOv5l][assets] |640 |49.0 |67.3 |430 |10.1 |2.7 |46.5 |109.1
240
+ |[YOLOv5x][assets] |640 |50.7 |68.9 |766 |12.1 |4.8 |86.7 |205.7
241
+ | | | | | | | | |
242
+ |[YOLOv5n6][assets] |1280 |36.0 |54.4 |153 |8.1 |2.1 |3.2 |4.6
243
+ |[YOLOv5s6][assets] |1280 |44.8 |63.7 |385 |8.2 |3.6 |12.6 |16.8
244
+ |[YOLOv5m6][assets] |1280 |51.3 |69.3 |887 |11.1 |6.8 |35.7 |50.0
245
+ |[YOLOv5l6][assets] |1280 |53.7 |71.3 |1784 |15.8 |10.5 |76.8 |111.4
246
+ |[YOLOv5x6][assets]<br>+ [TTA][TTA]|1280<br>1536 |55.0<br>**55.8** |72.7<br>**72.7** |3136<br>- |26.2<br>- |19.4<br>- |140.7<br>- |209.8<br>-
247
+
248
+ <details>
249
+ <summary>Table Notes (click to expand)</summary>
250
+
251
+ - All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
252
+ - **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
253
+ - **Speed** averaged over COCO val images using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
254
+ - **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
255
+
256
+ </details>
257
+
258
+ ## <div align="center">Contribute</div>
259
+
260
+ We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out the [YOLOv5 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experiences. Thank you to all our contributors!
261
+
262
+ <a href="https://github.com/ultralytics/yolov5/graphs/contributors"><img src="https://opencollective.com/ultralytics/contributors.svg?width=990" /></a>
263
+
264
+ ## <div align="center">Contact</div>
265
+
266
+ For YOLOv5 bugs and feature requests, please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business inquiries or
267
+ professional support requests, please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
268
+
269
+ <br>
270
+
271
+ <div align="center">
272
+ <a href="https://github.com/ultralytics">
273
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-github.png" width="3%"/>
274
+ </a>
275
+ <img width="3%" />
276
+ <a href="https://www.linkedin.com/company/ultralytics">
277
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-linkedin.png" width="3%"/>
278
+ </a>
279
+ <img width="3%" />
280
+ <a href="https://twitter.com/ultralytics">
281
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-twitter.png" width="3%"/>
282
+ </a>
283
+ <img width="3%" />
284
+ <a href="https://www.producthunt.com/@glenn_jocher">
285
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-producthunt.png" width="3%"/>
286
+ </a>
287
+ <img width="3%" />
288
+ <a href="https://youtube.com/ultralytics">
289
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-youtube.png" width="3%"/>
290
+ </a>
291
+ <img width="3%" />
292
+ <a href="https://www.facebook.com/ultralytics">
293
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-facebook.png" width="3%"/>
294
+ </a>
295
+ <img width="3%" />
296
+ <a href="https://www.instagram.com/ultralytics/">
297
+ <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="3%"/>
298
+ </a>
299
+ </div>
300
+
301
+ [assets]: https://github.com/ultralytics/yolov5/releases
302
+ [tta]: https://github.com/ultralytics/yolov5/issues/303
data/Argoverse.yaml ADDED
@@ -0,0 +1,67 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Argoverse-HD dataset (ring-front-center camera) http://www.cs.cmu.edu/~mengtial/proj/streaming/ by Argo AI
3
+ # Example usage: python train.py --data Argoverse.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── Argoverse ← downloads here (31.3 GB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/Argoverse # dataset root dir
12
+ train: Argoverse-1.1/images/train/ # train images (relative to 'path') 39384 images
13
+ val: Argoverse-1.1/images/val/ # val images (relative to 'path') 15062 images
14
+ test: Argoverse-1.1/images/test/ # test images (optional) https://eval.ai/web/challenges/challenge-page/800/overview
15
+
16
+ # Classes
17
+ nc: 8 # number of classes
18
+ names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign'] # class names
19
+
20
+
21
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
22
+ download: |
23
+ import json
24
+
25
+ from tqdm import tqdm
26
+ from utils.general import download, Path
27
+
28
+
29
+ def argoverse2yolo(set):
30
+ labels = {}
31
+ a = json.load(open(set, "rb"))
32
+ for annot in tqdm(a['annotations'], desc=f"Converting {set} to YOLOv5 format..."):
33
+ img_id = annot['image_id']
34
+ img_name = a['images'][img_id]['name']
35
+ img_label_name = f'{img_name[:-3]}txt'
36
+
37
+ cls = annot['category_id'] # instance class id
38
+ x_center, y_center, width, height = annot['bbox']
39
+ x_center = (x_center + width / 2) / 1920.0 # offset and scale
40
+ y_center = (y_center + height / 2) / 1200.0 # offset and scale
41
+ width /= 1920.0 # scale
42
+ height /= 1200.0 # scale
43
+
44
+ img_dir = set.parents[2] / 'Argoverse-1.1' / 'labels' / a['seq_dirs'][a['images'][annot['image_id']]['sid']]
45
+ if not img_dir.exists():
46
+ img_dir.mkdir(parents=True, exist_ok=True)
47
+
48
+ k = str(img_dir / img_label_name)
49
+ if k not in labels:
50
+ labels[k] = []
51
+ labels[k].append(f"{cls} {x_center} {y_center} {width} {height}\n")
52
+
53
+ for k in labels:
54
+ with open(k, "w") as f:
55
+ f.writelines(labels[k])
56
+
57
+
58
+ # Download
59
+ dir = Path('../datasets/Argoverse') # dataset root dir
60
+ urls = ['https://argoverse-hd.s3.us-east-2.amazonaws.com/Argoverse-HD-Full.zip']
61
+ download(urls, dir=dir, delete=False)
62
+
63
+ # Convert
64
+ annotations_dir = 'Argoverse-HD/annotations/'
65
+ (dir / 'Argoverse-1.1' / 'tracking').rename(dir / 'Argoverse-1.1' / 'images') # rename 'tracking' to 'images'
66
+ for d in "train.json", "val.json":
67
+ argoverse2yolo(dir / annotations_dir / d)  # convert Argoverse annotations to YOLO labels
data/GlobalWheat2020.yaml ADDED
@@ -0,0 +1,54 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Global Wheat 2020 dataset http://www.global-wheat.com/ by University of Saskatchewan
3
+ # Example usage: python train.py --data GlobalWheat2020.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── GlobalWheat2020 ← downloads here (7.0 GB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/GlobalWheat2020 # dataset root dir
12
+ train: # train images (relative to 'path') 3422 images
13
+ - images/arvalis_1
14
+ - images/arvalis_2
15
+ - images/arvalis_3
16
+ - images/ethz_1
17
+ - images/rres_1
18
+ - images/inrae_1
19
+ - images/usask_1
20
+ val: # val images (relative to 'path') 748 images (WARNING: train set contains ethz_1)
21
+ - images/ethz_1
22
+ test: # test images (optional) 1276 images
23
+ - images/utokyo_1
24
+ - images/utokyo_2
25
+ - images/nau_1
26
+ - images/uq_1
27
+
28
+ # Classes
29
+ nc: 1 # number of classes
30
+ names: ['wheat_head'] # class names
31
+
32
+
33
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
34
+ download: |
35
+ from utils.general import download, Path
36
+
37
+
38
+ # Download
39
+ dir = Path(yaml['path']) # dataset root dir
40
+ urls = ['https://zenodo.org/record/4298502/files/global-wheat-codalab-official.zip',
41
+ 'https://github.com/ultralytics/yolov5/releases/download/v1.0/GlobalWheat2020_labels.zip']
42
+ download(urls, dir=dir)
43
+
44
+ # Make Directories
45
+ for p in 'annotations', 'images', 'labels':
46
+ (dir / p).mkdir(parents=True, exist_ok=True)
47
+
48
+ # Move
49
+ for p in 'arvalis_1', 'arvalis_2', 'arvalis_3', 'ethz_1', 'rres_1', 'inrae_1', 'usask_1', \
50
+ 'utokyo_1', 'utokyo_2', 'nau_1', 'uq_1':
51
+ (dir / p).rename(dir / 'images' / p) # move to /images
52
+ f = (dir / p).with_suffix('.json') # json file
53
+ if f.exists():
54
+ f.rename((dir / 'annotations' / p).with_suffix('.json')) # move to /annotations
data/Objects365.yaml ADDED
@@ -0,0 +1,114 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Objects365 dataset https://www.objects365.org/ by Megvii
3
+ # Example usage: python train.py --data Objects365.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── Objects365 ← downloads here (712 GB = 367G data + 345G zips)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/Objects365 # dataset root dir
12
+ train: images/train # train images (relative to 'path') 1742289 images
13
+ val: images/val # val images (relative to 'path') 80000 images
14
+ test: # test images (optional)
15
+
16
+ # Classes
17
+ nc: 365 # number of classes
18
+ names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
19
+ 'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
20
+ 'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
21
+ 'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
22
+ 'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
23
+ 'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
24
+ 'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
25
+ 'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
26
+ 'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
27
+ 'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
28
+ 'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
29
+ 'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
30
+ 'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
31
+ 'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
32
+ 'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
33
+ 'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
34
+ 'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
35
+ 'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
36
+ 'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
37
+ 'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
38
+ 'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
39
+ 'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
40
+ 'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
41
+ 'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
42
+ 'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
43
+ 'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
44
+ 'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
45
+ 'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
46
+ 'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
47
+ 'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
48
+ 'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
49
+ 'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
50
+ 'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
51
+ 'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
52
+ 'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
53
+ 'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
54
+ 'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
55
+ 'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
56
+ 'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
57
+ 'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
58
+ 'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']
59
+
60
+
61
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
62
+ download: |
63
+ from tqdm import tqdm
64
+
65
+ from utils.general import Path, check_requirements, download, np, xyxy2xywhn
66
+
67
+ check_requirements(('pycocotools>=2.0',))
68
+ from pycocotools.coco import COCO
69
+
70
+ # Make Directories
71
+ dir = Path(yaml['path']) # dataset root dir
72
+ for p in 'images', 'labels':
73
+ (dir / p).mkdir(parents=True, exist_ok=True)
74
+ for q in 'train', 'val':
75
+ (dir / p / q).mkdir(parents=True, exist_ok=True)
76
+
77
+ # Train, Val Splits
78
+ for split, patches in [('train', 50 + 1), ('val', 43 + 1)]:
79
+ print(f"Processing {split} in {patches} patches ...")
80
+ images, labels = dir / 'images' / split, dir / 'labels' / split
81
+
82
+ # Download
83
+ url = f"https://dorc.ks3-cn-beijing.ksyun.com/data-set/2020Objects365%E6%95%B0%E6%8D%AE%E9%9B%86/{split}/"
84
+ if split == 'train':
85
+ download([f'{url}zhiyuan_objv2_{split}.tar.gz'], dir=dir, delete=False) # annotations json
86
+ download([f'{url}patch{i}.tar.gz' for i in range(patches)], dir=images, curl=True, delete=False, threads=8)
87
+ elif split == 'val':
88
+ download([f'{url}zhiyuan_objv2_{split}.json'], dir=dir, delete=False) # annotations json
89
+ download([f'{url}images/v1/patch{i}.tar.gz' for i in range(15 + 1)], dir=images, curl=True, delete=False, threads=8)
90
+ download([f'{url}images/v2/patch{i}.tar.gz' for i in range(16, patches)], dir=images, curl=True, delete=False, threads=8)
91
+
92
+ # Move
93
+ for f in tqdm(images.rglob('*.jpg'), desc=f'Moving {split} images'):
94
+ f.rename(images / f.name) # move to /images/{split}
95
+
96
+ # Labels
97
+ coco = COCO(dir / f'zhiyuan_objv2_{split}.json')
98
+ names = [x["name"] for x in coco.loadCats(coco.getCatIds())]
99
+ for cid, cat in enumerate(names):
100
+ catIds = coco.getCatIds(catNms=[cat])
101
+ imgIds = coco.getImgIds(catIds=catIds)
102
+ for im in tqdm(coco.loadImgs(imgIds), desc=f'Class {cid + 1}/{len(names)} {cat}'):
103
+ width, height = im["width"], im["height"]
104
+ path = Path(im["file_name"]) # image filename
105
+ try:
106
+ with open(labels / path.with_suffix('.txt').name, 'a') as file:
107
+ annIds = coco.getAnnIds(imgIds=im["id"], catIds=catIds, iscrowd=None)
108
+ for a in coco.loadAnns(annIds):
109
+ x, y, w, h = a['bbox'] # bounding box in xywh (xy top-left corner)
110
+ xyxy = np.array([x, y, x + w, y + h])[None] # pixels(1,4)
111
+ x, y, w, h = xyxy2xywhn(xyxy, w=width, h=height, clip=True)[0] # normalized and clipped
112
+ file.write(f"{cid} {x:.5f} {y:.5f} {w:.5f} {h:.5f}\n")
113
+ except Exception as e:
114
+ print(e)
data/SKU-110K.yaml ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # SKU-110K retail items dataset https://github.com/eg4000/SKU110K_CVPR19 by Trax Retail
3
+ # Example usage: python train.py --data SKU-110K.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── SKU-110K ← downloads here (13.6 GB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/SKU-110K # dataset root dir
12
+ train: train.txt # train images (relative to 'path') 8219 images
13
+ val: val.txt # val images (relative to 'path') 588 images
14
+ test: test.txt # test images (optional) 2936 images
15
+
16
+ # Classes
17
+ nc: 1 # number of classes
18
+ names: ['object'] # class names
19
+
20
+
21
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
22
+ download: |
23
+ import shutil
24
+ from tqdm import tqdm
25
+ from utils.general import np, pd, Path, download, xyxy2xywh
26
+
27
+
28
+ # Download
29
+ dir = Path(yaml['path']) # dataset root dir
30
+ parent = Path(dir.parent) # download dir
31
+ urls = ['http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz']
32
+ download(urls, dir=parent, delete=False)
33
+
34
+ # Rename directories
35
+ if dir.exists():
36
+ shutil.rmtree(dir)
37
+ (parent / 'SKU110K_fixed').rename(dir) # rename dir
38
+ (dir / 'labels').mkdir(parents=True, exist_ok=True) # create labels dir
39
+
40
+ # Convert labels
41
+ names = 'image', 'x1', 'y1', 'x2', 'y2', 'class', 'image_width', 'image_height' # column names
42
+ for d in 'annotations_train.csv', 'annotations_val.csv', 'annotations_test.csv':
43
+ x = pd.read_csv(dir / 'annotations' / d, names=names).values # annotations
44
+ images, unique_images = x[:, 0], np.unique(x[:, 0])
45
+ with open((dir / d).with_suffix('.txt').__str__().replace('annotations_', ''), 'w') as f:
46
+ f.writelines(f'./images/{s}\n' for s in unique_images)
47
+ for im in tqdm(unique_images, desc=f'Converting {dir / d}'):
48
+ cls = 0 # single-class dataset
49
+ with open((dir / 'labels' / im).with_suffix('.txt'), 'a') as f:
50
+ for r in x[images == im]:
51
+ w, h = r[6], r[7] # image width, height
52
+ xywh = xyxy2xywh(np.array([[r[1] / w, r[2] / h, r[3] / w, r[4] / h]]))[0] # instance
53
+ f.write(f"{cls} {xywh[0]:.5f} {xywh[1]:.5f} {xywh[2]:.5f} {xywh[3]:.5f}\n") # write label
data/VOC.yaml ADDED
@@ -0,0 +1,81 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC by University of Oxford
3
+ # Example usage: python train.py --data VOC.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── VOC ← downloads here (2.8 GB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/VOC
12
+ train: # train images (relative to 'path') 16551 images
13
+ - images/train2012
14
+ - images/train2007
15
+ - images/val2012
16
+ - images/val2007
17
+ val: # val images (relative to 'path') 4952 images
18
+ - images/test2007
19
+ test: # test images (optional)
20
+ - images/test2007
21
+
22
+ # Classes
23
+ nc: 20 # number of classes
24
+ names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
25
+ 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
26
+
27
+
28
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
29
+ download: |
30
+ import xml.etree.ElementTree as ET
31
+
32
+ from tqdm import tqdm
33
+ from utils.general import download, Path
34
+
35
+
36
+ def convert_label(path, lb_path, year, image_id):
37
+ def convert_box(size, box):
38
+ dw, dh = 1. / size[0], 1. / size[1]
39
+ x, y, w, h = (box[0] + box[1]) / 2.0 - 1, (box[2] + box[3]) / 2.0 - 1, box[1] - box[0], box[3] - box[2]
40
+ return x * dw, y * dh, w * dw, h * dh
41
+
42
+ in_file = open(path / f'VOC{year}/Annotations/{image_id}.xml')
43
+ out_file = open(lb_path, 'w')
44
+ tree = ET.parse(in_file)
45
+ root = tree.getroot()
46
+ size = root.find('size')
47
+ w = int(size.find('width').text)
48
+ h = int(size.find('height').text)
49
+
50
+ for obj in root.iter('object'):
51
+ cls = obj.find('name').text
52
+ if cls in yaml['names'] and not int(obj.find('difficult').text) == 1:
53
+ xmlbox = obj.find('bndbox')
54
+ bb = convert_box((w, h), [float(xmlbox.find(x).text) for x in ('xmin', 'xmax', 'ymin', 'ymax')])
55
+ cls_id = yaml['names'].index(cls) # class id
56
+ out_file.write(" ".join([str(a) for a in (cls_id, *bb)]) + '\n')
57
+
58
+
59
+ # Download
60
+ dir = Path(yaml['path']) # dataset root dir
61
+ url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
62
+ urls = [f'{url}VOCtrainval_06-Nov-2007.zip', # 446MB, 5012 images
63
+ f'{url}VOCtest_06-Nov-2007.zip', # 438MB, 4953 images
64
+ f'{url}VOCtrainval_11-May-2012.zip'] # 1.95GB, 17126 images
65
+ download(urls, dir=dir / 'images', delete=False, curl=True, threads=3)
66
+
67
+ # Convert
68
+ path = dir / 'images/VOCdevkit'
69
+ for year, image_set in ('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test'):
70
+ imgs_path = dir / 'images' / f'{image_set}{year}'
71
+ lbs_path = dir / 'labels' / f'{image_set}{year}'
72
+ imgs_path.mkdir(exist_ok=True, parents=True)
73
+ lbs_path.mkdir(exist_ok=True, parents=True)
74
+
75
+ with open(path / f'VOC{year}/ImageSets/Main/{image_set}.txt') as f:
76
+ image_ids = f.read().strip().split()
77
+ for id in tqdm(image_ids, desc=f'{image_set}{year}'):
78
+ f = path / f'VOC{year}/JPEGImages/{id}.jpg' # old img path
79
+ lb_path = (lbs_path / f.name).with_suffix('.txt') # new label path
80
+ f.rename(imgs_path / f.name) # move image
81
+ convert_label(path, lb_path, year, id) # convert labels to YOLO format
data/VisDrone.yaml ADDED
@@ -0,0 +1,61 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # VisDrone2019-DET dataset https://github.com/VisDrone/VisDrone-Dataset by Tianjin University
3
+ # Example usage: python train.py --data VisDrone.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── VisDrone ← downloads here (2.3 GB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/VisDrone # dataset root dir
12
+ train: VisDrone2019-DET-train/images # train images (relative to 'path') 6471 images
13
+ val: VisDrone2019-DET-val/images # val images (relative to 'path') 548 images
14
+ test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images
15
+
16
+ # Classes
17
+ nc: 10 # number of classes
18
+ names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']
19
+
20
+
21
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
22
+ download: |
23
+ from utils.general import download, os, Path
24
+
25
+ def visdrone2yolo(dir):
26
+ from PIL import Image
27
+ from tqdm import tqdm
28
+
29
+ def convert_box(size, box):
30
+ # Convert VisDrone box to YOLO xywh box
31
+ dw = 1. / size[0]
32
+ dh = 1. / size[1]
33
+ return (box[0] + box[2] / 2) * dw, (box[1] + box[3] / 2) * dh, box[2] * dw, box[3] * dh
34
+
35
+ (dir / 'labels').mkdir(parents=True, exist_ok=True) # make labels directory
36
+ pbar = tqdm((dir / 'annotations').glob('*.txt'), desc=f'Converting {dir}')
37
+ for f in pbar:
38
+ img_size = Image.open((dir / 'images' / f.name).with_suffix('.jpg')).size
39
+ lines = []
40
+ with open(f, 'r') as file: # read annotation.txt
41
+ for row in [x.split(',') for x in file.read().strip().splitlines()]:
42
+ if row[4] == '0': # VisDrone 'ignored regions' class 0
43
+ continue
44
+ cls = int(row[5]) - 1
45
+ box = convert_box(img_size, tuple(map(int, row[:4])))
46
+ lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
47
+ with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
48
+ fl.writelines(lines) # write label.txt
49
+
50
+
51
+ # Download
52
+ dir = Path(yaml['path']) # dataset root dir
53
+ urls = ['https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-train.zip',
54
+ 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-val.zip',
55
+ 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-dev.zip',
56
+ 'https://github.com/ultralytics/yolov5/releases/download/v1.0/VisDrone2019-DET-test-challenge.zip']
57
+ download(urls, dir=dir, curl=True, threads=4)
58
+
59
+ # Convert
60
+ for d in 'VisDrone2019-DET-train', 'VisDrone2019-DET-val', 'VisDrone2019-DET-test-dev':
61
+ visdrone2yolo(dir / d) # convert VisDrone annotations to YOLO labels
data/coco.yaml ADDED
@@ -0,0 +1,45 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # COCO 2017 dataset http://cocodataset.org by Microsoft
3
+ # Example usage: python train.py --data coco.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── coco ← downloads here (20.1 GB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/coco # dataset root dir
12
+ train: train2017.txt # train images (relative to 'path') 118287 images
13
+ val: val2017.txt # val images (relative to 'path') 5000 images
14
+ test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794
15
+
16
+ # Classes
17
+ nc: 80 # number of classes
18
+ names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
19
+ 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
20
+ 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
21
+ 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
22
+ 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
23
+ 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
24
+ 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
25
+ 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
26
+ 'hair drier', 'toothbrush'] # class names
27
+
28
+
29
+ # Download script/URL (optional)
30
+ download: |
31
+ from utils.general import download, Path
32
+
33
+
34
+ # Download labels
35
+ segments = False # segment or box labels
36
+ dir = Path(yaml['path']) # dataset root dir
37
+ url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
38
+ urls = [url + ('coco2017labels-segments.zip' if segments else 'coco2017labels.zip')] # labels
39
+ download(urls, dir=dir.parent)
40
+
41
+ # Download data
42
+ urls = ['http://images.cocodataset.org/zips/train2017.zip', # 19G, 118k images
43
+ 'http://images.cocodataset.org/zips/val2017.zip', # 1G, 5k images
44
+ 'http://images.cocodataset.org/zips/test2017.zip'] # 7G, 41k images (optional)
45
+ download(urls, dir=dir / 'images', threads=3)
data/coco128.yaml ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
3
+ # Example usage: python train.py --data coco128.yaml
4
+ # parent
5
+ # ├── yolov5
6
+ # └── datasets
7
+ # └── coco128 ← downloads here (7 MB)
8
+
9
+
10
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
11
+ path: ../datasets/coco128 # dataset root dir
12
+ train: images/train2017 # train images (relative to 'path') 128 images
13
+ val: images/train2017 # val images (relative to 'path') 128 images
14
+ test: # test images (optional)
15
+
16
+ # Classes
17
+ nc: 80 # number of classes
18
+ names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
19
+ 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
20
+ 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
21
+ 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
22
+ 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
23
+ 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
24
+ 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
25
+ 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
26
+ 'hair drier', 'toothbrush'] # class names
27
+
28
+
29
+ # Download script/URL (optional)
30
+ download: https://ultralytics.com/assets/coco128.zip
data/hyps/hyp.Objects365.yaml ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Hyperparameters for Objects365 training
3
+ # python train.py --weights yolov5m.pt --data Objects365.yaml --evolve
4
+ # See Hyperparameter Evolution tutorial for details https://github.com/ultralytics/yolov5#tutorials
5
+
6
+ lr0: 0.00258
7
+ lrf: 0.17
8
+ momentum: 0.779
9
+ weight_decay: 0.00058
10
+ warmup_epochs: 1.33
11
+ warmup_momentum: 0.86
12
+ warmup_bias_lr: 0.0711
13
+ box: 0.0539
14
+ cls: 0.299
15
+ cls_pw: 0.825
16
+ obj: 0.632
17
+ obj_pw: 1.0
18
+ iou_t: 0.2
19
+ anchor_t: 3.44
20
+ anchors: 3.2
21
+ fl_gamma: 0.0
22
+ hsv_h: 0.0188
23
+ hsv_s: 0.704
24
+ hsv_v: 0.36
25
+ degrees: 0.0
26
+ translate: 0.0902
27
+ scale: 0.491
28
+ shear: 0.0
29
+ perspective: 0.0
30
+ flipud: 0.0
31
+ fliplr: 0.5
32
+ mosaic: 1.0
33
+ mixup: 0.0
34
+ copy_paste: 0.0
data/hyps/hyp.VOC.yaml ADDED
@@ -0,0 +1,40 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Hyperparameters for VOC training
3
+ # python train.py --batch 128 --weights yolov5m6.pt --data VOC.yaml --epochs 50 --img 512 --hyp hyp.scratch-med.yaml --evolve
4
+ # See Hyperparameter Evolution tutorial for details https://github.com/ultralytics/yolov5#tutorials
5
+
6
+ # YOLOv5 Hyperparameter Evolution Results
7
+ # Best generation: 467
8
+ # Last generation: 996
9
+ # metrics/precision, metrics/recall, metrics/mAP_0.5, metrics/mAP_0.5:0.95, val/box_loss, val/obj_loss, val/cls_loss
10
+ # 0.87729, 0.85125, 0.91286, 0.72664, 0.0076739, 0.0042529, 0.0013865
11
+
12
+ lr0: 0.00334
13
+ lrf: 0.15135
14
+ momentum: 0.74832
15
+ weight_decay: 0.00025
16
+ warmup_epochs: 3.3835
17
+ warmup_momentum: 0.59462
18
+ warmup_bias_lr: 0.18657
19
+ box: 0.02
20
+ cls: 0.21638
21
+ cls_pw: 0.5
22
+ obj: 0.51728
23
+ obj_pw: 0.67198
24
+ iou_t: 0.2
25
+ anchor_t: 3.3744
26
+ fl_gamma: 0.0
27
+ hsv_h: 0.01041
28
+ hsv_s: 0.54703
29
+ hsv_v: 0.27739
30
+ degrees: 0.0
31
+ translate: 0.04591
32
+ scale: 0.75544
33
+ shear: 0.0
34
+ perspective: 0.0
35
+ flipud: 0.0
36
+ fliplr: 0.5
37
+ mosaic: 0.85834
38
+ mixup: 0.04266
39
+ copy_paste: 0.0
40
+ anchors: 3.412
data/hyps/hyp.scratch-high.yaml ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Hyperparameters for high-augmentation COCO training from scratch
3
+ # python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
4
+ # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
5
+
6
+ lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
7
+ lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
8
+ momentum: 0.937 # SGD momentum/Adam beta1
9
+ weight_decay: 0.0005 # optimizer weight decay 5e-4
10
+ warmup_epochs: 3.0 # warmup epochs (fractions ok)
11
+ warmup_momentum: 0.8 # warmup initial momentum
12
+ warmup_bias_lr: 0.1 # warmup initial bias lr
13
+ box: 0.05 # box loss gain
14
+ cls: 0.3 # cls loss gain
15
+ cls_pw: 1.0 # cls BCELoss positive_weight
16
+ obj: 0.7 # obj loss gain (scale with pixels)
17
+ obj_pw: 1.0 # obj BCELoss positive_weight
18
+ iou_t: 0.20 # IoU training threshold
19
+ anchor_t: 4.0 # anchor-multiple threshold
20
+ # anchors: 3 # anchors per output layer (0 to ignore)
21
+ fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
22
+ hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
23
+ hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
24
+ hsv_v: 0.4 # image HSV-Value augmentation (fraction)
25
+ degrees: 0.0 # image rotation (+/- deg)
26
+ translate: 0.1 # image translation (+/- fraction)
27
+ scale: 0.9 # image scale (+/- gain)
28
+ shear: 0.0 # image shear (+/- deg)
29
+ perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
30
+ flipud: 0.0 # image flip up-down (probability)
31
+ fliplr: 0.5 # image flip left-right (probability)
32
+ mosaic: 1.0 # image mosaic (probability)
33
+ mixup: 0.1 # image mixup (probability)
34
+ copy_paste: 0.1 # segment copy-paste (probability)
data/hyps/hyp.scratch-low.yaml ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Hyperparameters for low-augmentation COCO training from scratch
3
+ # python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear
4
+ # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
5
+
6
+ lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
7
+ lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
8
+ momentum: 0.937 # SGD momentum/Adam beta1
9
+ weight_decay: 0.0005 # optimizer weight decay 5e-4
10
+ warmup_epochs: 3.0 # warmup epochs (fractions ok)
11
+ warmup_momentum: 0.8 # warmup initial momentum
12
+ warmup_bias_lr: 0.1 # warmup initial bias lr
13
+ box: 0.05 # box loss gain
14
+ cls: 0.5 # cls loss gain
15
+ cls_pw: 1.0 # cls BCELoss positive_weight
16
+ obj: 1.0 # obj loss gain (scale with pixels)
17
+ obj_pw: 1.0 # obj BCELoss positive_weight
18
+ iou_t: 0.20 # IoU training threshold
19
+ anchor_t: 4.0 # anchor-multiple threshold
20
+ # anchors: 3 # anchors per output layer (0 to ignore)
21
+ fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
22
+ hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
23
+ hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
24
+ hsv_v: 0.4 # image HSV-Value augmentation (fraction)
25
+ degrees: 0.0 # image rotation (+/- deg)
26
+ translate: 0.1 # image translation (+/- fraction)
27
+ scale: 0.5 # image scale (+/- gain)
28
+ shear: 0.0 # image shear (+/- deg)
29
+ perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
30
+ flipud: 0.0 # image flip up-down (probability)
31
+ fliplr: 0.5 # image flip left-right (probability)
32
+ mosaic: 1.0 # image mosaic (probability)
33
+ mixup: 0.0 # image mixup (probability)
34
+ copy_paste: 0.0 # segment copy-paste (probability)
data/hyps/hyp.scratch-med.yaml ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Hyperparameters for medium-augmentation COCO training from scratch
3
+ # python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
4
+ # See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials
5
+
6
+ lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
7
+ lrf: 0.1 # final OneCycleLR learning rate (lr0 * lrf)
8
+ momentum: 0.937 # SGD momentum/Adam beta1
9
+ weight_decay: 0.0005 # optimizer weight decay 5e-4
10
+ warmup_epochs: 3.0 # warmup epochs (fractions ok)
11
+ warmup_momentum: 0.8 # warmup initial momentum
12
+ warmup_bias_lr: 0.1 # warmup initial bias lr
13
+ box: 0.05 # box loss gain
14
+ cls: 0.3 # cls loss gain
15
+ cls_pw: 1.0 # cls BCELoss positive_weight
16
+ obj: 0.7 # obj loss gain (scale with pixels)
17
+ obj_pw: 1.0 # obj BCELoss positive_weight
18
+ iou_t: 0.20 # IoU training threshold
19
+ anchor_t: 4.0 # anchor-multiple threshold
20
+ # anchors: 3 # anchors per output layer (0 to ignore)
21
+ fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
22
+ hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
23
+ hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
24
+ hsv_v: 0.4 # image HSV-Value augmentation (fraction)
25
+ degrees: 0.0 # image rotation (+/- deg)
26
+ translate: 0.1 # image translation (+/- fraction)
27
+ scale: 0.9 # image scale (+/- gain)
28
+ shear: 0.0 # image shear (+/- deg)
29
+ perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
30
+ flipud: 0.0 # image flip up-down (probability)
31
+ fliplr: 0.5 # image flip left-right (probability)
32
+ mosaic: 1.0 # image mosaic (probability)
33
+ mixup: 0.1 # image mixup (probability)
34
+ copy_paste: 0.0 # segment copy-paste (probability)
data/images/bus.jpg ADDED
data/images/zidane.jpg ADDED
data/scripts/download_weights.sh ADDED
@@ -0,0 +1,20 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/bin/bash
2
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
3
+ # Download latest models from https://github.com/ultralytics/yolov5/releases
4
+ # Example usage: bash path/to/download_weights.sh
5
+ # parent
6
+ # └── yolov5
7
+ # ├── yolov5s.pt ← downloads here
8
+ # ├── yolov5m.pt
9
+ # └── ...
10
+
11
+ python - <<EOF
12
+ from utils.downloads import attempt_download
13
+
14
+ models = ['n', 's', 'm', 'l', 'x']
15
+ models.extend([x + '6' for x in models]) # add P6 models
16
+
17
+ for x in models:
18
+ attempt_download(f'yolov5{x}.pt')
19
+
20
+ EOF
data/scripts/get_coco.sh ADDED
@@ -0,0 +1,27 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/bin/bash
2
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
3
+ # Download COCO 2017 dataset http://cocodataset.org
4
+ # Example usage: bash data/scripts/get_coco.sh
5
+ # parent
6
+ # ├── yolov5
7
+ # └── datasets
8
+ # └── coco ← downloads here
9
+
10
+ # Download/unzip labels
11
+ d='../datasets' # unzip directory
12
+ url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
13
+ f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
14
+ echo 'Downloading' $url$f ' ...'
15
+ curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
16
+
17
+ # Download/unzip images
18
+ d='../datasets/coco/images' # unzip directory
19
+ url=http://images.cocodataset.org/zips/
20
+ f1='train2017.zip' # 19G, 118k images
21
+ f2='val2017.zip' # 1G, 5k images
22
+ f3='test2017.zip' # 7G, 41k images (optional)
23
+ for f in $f1 $f2; do
24
+ echo 'Downloading' $url$f '...'
25
+ curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
26
+ done
27
+ wait # finish background tasks
data/scripts/get_coco128.sh ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/bin/bash
2
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
3
+ # Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017)
4
+ # Example usage: bash data/scripts/get_coco128.sh
5
+ # parent
6
+ # ├── yolov5
7
+ # └── datasets
8
+ # └── coco128 ← downloads here
9
+
10
+ # Download/unzip images and labels
11
+ d='../datasets' # unzip directory
12
+ url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
13
+ f='coco128.zip' # or 'coco128-segments.zip', 68 MB
14
+ echo 'Downloading' $url$f ' ...'
15
+ curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
16
+
17
+ wait # finish background tasks
data/xView.yaml ADDED
@@ -0,0 +1,102 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA)
3
+ # -------- DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command! --------
4
+ # Example usage: python train.py --data xView.yaml
5
+ # parent
6
+ # ├── yolov5
7
+ # └── datasets
8
+ # └── xView ← downloads here (20.7 GB)
9
+
10
+
11
+ # Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
12
+ path: ../datasets/xView # dataset root dir
13
+ train: images/autosplit_train.txt # train images (relative to 'path') 90% of 847 train images
14
+ val: images/autosplit_val.txt  # val images (relative to 'path') 10% of 847 train images
15
+
16
+ # Classes
17
+ nc: 60 # number of classes
18
+ names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
19
+ 'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
20
+ 'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
21
+ 'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
22
+ 'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
23
+ 'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
24
+ 'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
25
+ 'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
26
+ 'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower'] # class names
27
+
28
+
29
+ # Download script/URL (optional) ---------------------------------------------------------------------------------------
30
+ download: |
31
+ import json
32
+ import os
33
+ from pathlib import Path
34
+
35
+ import numpy as np
36
+ from PIL import Image
37
+ from tqdm import tqdm
38
+
39
+ from utils.datasets import autosplit
40
+ from utils.general import download, xyxy2xywhn
41
+
42
+
43
+ def convert_labels(fname=Path('xView/xView_train.geojson')):
44
+ # Convert xView geoJSON labels to YOLO format
45
+ path = fname.parent
46
+ with open(fname) as f:
47
+ print(f'Loading {fname}...')
48
+ data = json.load(f)
49
+
50
+ # Make dirs
51
+ labels = Path(path / 'labels' / 'train')
52
+ os.system(f'rm -rf {labels}')
53
+ labels.mkdir(parents=True, exist_ok=True)
54
+
55
+ # xView classes 11-94 to 0-59
56
+ xview_class2index = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11,
57
+ 12, 13, 14, 15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1,
58
+ 29, 30, 31, 32, 33, 34, 35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46,
59
+ 47, 48, 49, -1, 50, 51, -1, 52, -1, -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]
60
+
61
+ shapes = {}
62
+ for feature in tqdm(data['features'], desc=f'Converting {fname}'):
63
+ p = feature['properties']
64
+ if p['bounds_imcoords']:
65
+ id = p['image_id']
66
+ file = path / 'train_images' / id
67
+ if file.exists(): # 1395.tif missing
68
+ try:
69
+ box = np.array([int(num) for num in p['bounds_imcoords'].split(",")])
70
+ assert box.shape[0] == 4, f'incorrect box shape {box.shape[0]}'
71
+ cls = p['type_id']
72
+ cls = xview_class2index[int(cls)]  # xView class to 0-59
73
+ assert 59 >= cls >= 0, f'incorrect class index {cls}'
74
+
75
+ # Write YOLO label
76
+ if id not in shapes:
77
+ shapes[id] = Image.open(file).size
78
+ box = xyxy2xywhn(box[None].astype(np.float64), w=shapes[id][0], h=shapes[id][1], clip=True)  # np.float64 avoids the deprecated np.float alias
79
+ with open((labels / id).with_suffix('.txt'), 'a') as f:
80
+ f.write(f"{cls} {' '.join(f'{x:.6f}' for x in box[0])}\n") # write label.txt
81
+ except Exception as e:
82
+ print(f'WARNING: skipping one label for {file}: {e}')
83
+
84
+
85
+ # Download manually from https://challenge.xviewdataset.org
86
+ dir = Path(yaml['path']) # dataset root dir
87
+ # urls = ['https://d307kc0mrhucc3.cloudfront.net/train_labels.zip', # train labels
88
+ # 'https://d307kc0mrhucc3.cloudfront.net/train_images.zip', # 15G, 847 train images
89
+ # 'https://d307kc0mrhucc3.cloudfront.net/val_images.zip'] # 5G, 282 val images (no labels)
90
+ # download(urls, dir=dir, delete=False)
91
+
92
+ # Convert labels
93
+ convert_labels(dir / 'xView_train.geojson')
94
+
95
+ # Move images
96
+ images = Path(dir / 'images')
97
+ images.mkdir(parents=True, exist_ok=True)
98
+ Path(dir / 'train_images').rename(dir / 'images' / 'train')
99
+ Path(dir / 'val_images').rename(dir / 'images' / 'val')
100
+
101
+ # Split
102
+ autosplit(dir / 'images' / 'train')
detect.py ADDED
@@ -0,0 +1,256 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ Run inference on images, videos, directories, streams, etc.
4
+
5
+ Usage - sources:
6
+ $ python path/to/detect.py --weights yolov5s.pt --source 0 # webcam
7
+ img.jpg # image
8
+ vid.mp4 # video
9
+ path/ # directory
10
+ path/*.jpg # glob
11
+ 'https://youtu.be/Zgi9g1ksQHc' # YouTube
12
+ 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
13
+
14
+ Usage - formats:
15
+ $ python path/to/detect.py --weights yolov5s.pt # PyTorch
16
+ yolov5s.torchscript # TorchScript
17
+ yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
18
+ yolov5s.xml # OpenVINO
19
+ yolov5s.engine # TensorRT
20
+ yolov5s.mlmodel # CoreML (macOS-only)
21
+ yolov5s_saved_model # TensorFlow SavedModel
22
+ yolov5s.pb # TensorFlow GraphDef
23
+ yolov5s.tflite # TensorFlow Lite
24
+ yolov5s_edgetpu.tflite # TensorFlow Edge TPU
25
+ """
26
+
27
+ import argparse
28
+ import os
29
+ import sys
30
+ from pathlib import Path
31
+
32
+ import torch
33
+ import torch.backends.cudnn as cudnn
34
+
35
+ FILE = Path(__file__).resolve()
36
+ ROOT = FILE.parents[0] # YOLOv5 root directory
37
+ if str(ROOT) not in sys.path:
38
+ sys.path.append(str(ROOT)) # add ROOT to PATH
39
+ ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
40
+
41
+ from models.common import DetectMultiBackend
42
+ from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadStreams
43
+ from utils.general import (LOGGER, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2,
44
+ increment_path, non_max_suppression, print_args, scale_coords, strip_optimizer, xyxy2xywh)
45
+ from utils.plots import Annotator, colors, save_one_box
46
+ from utils.torch_utils import select_device, time_sync
47
+
48
+
49
+ @torch.no_grad()
50
+ def run(
51
+ weights=ROOT / 'yolov5s.pt', # model.pt path(s)
52
+ source=ROOT / 'data/images', # file/dir/URL/glob, 0 for webcam
53
+ data=ROOT / 'data/coco128.yaml', # dataset.yaml path
54
+ imgsz=(640, 640), # inference size (height, width)
55
+ conf_thres=0.25, # confidence threshold
56
+ iou_thres=0.45, # NMS IOU threshold
57
+ max_det=1000, # maximum detections per image
58
+ device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu
59
+ view_img=False, # show results
60
+ save_txt=False, # save results to *.txt
61
+ save_conf=False, # save confidences in --save-txt labels
62
+ save_crop=False, # save cropped prediction boxes
63
+ nosave=False, # do not save images/videos
64
+ classes=None, # filter by class: --class 0, or --class 0 2 3
65
+ agnostic_nms=False, # class-agnostic NMS
66
+ augment=False, # augmented inference
67
+ visualize=False, # visualize features
68
+ update=False, # update all models
69
+ project=ROOT / 'runs/detect', # save results to project/name
70
+ name='exp', # save results to project/name
71
+ exist_ok=False, # existing project/name ok, do not increment
72
+ line_thickness=3, # bounding box thickness (pixels)
73
+ hide_labels=False, # hide labels
74
+ hide_conf=False, # hide confidences
75
+ half=False, # use FP16 half-precision inference
76
+ dnn=False, # use OpenCV DNN for ONNX inference
77
+ ):
78
+ source = str(source)
79
+ save_img = not nosave and not source.endswith('.txt') # save inference images
80
+ is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
81
+ is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://'))
82
+ webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file)
83
+ if is_url and is_file:
84
+ source = check_file(source) # download
85
+
86
+ # Directories
87
+ save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run
88
+ (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
89
+
90
+ # Load model
91
+ device = select_device(device)
92
+ model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
93
+ stride, names, pt = model.stride, model.names, model.pt
94
+ imgsz = check_img_size(imgsz, s=stride) # check image size
95
+
96
+ # Dataloader
97
+ if webcam:
98
+ view_img = check_imshow()
99
+ cudnn.benchmark = True # set True to speed up constant image size inference
100
+ dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt)
101
+ bs = len(dataset) # batch_size
102
+ else:
103
+ dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt)
104
+ bs = 1 # batch_size
105
+ vid_path, vid_writer = [None] * bs, [None] * bs
106
+
107
+ # Run inference
108
+ model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup
109
+ seen, windows, dt = 0, [], [0.0, 0.0, 0.0]
110
+ for path, im, im0s, vid_cap, s in dataset:
111
+ t1 = time_sync()
112
+ im = torch.from_numpy(im).to(device)
113
+ im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
114
+ im /= 255 # 0 - 255 to 0.0 - 1.0
115
+ if len(im.shape) == 3:
116
+ im = im[None] # expand for batch dim
117
+ t2 = time_sync()
118
+ dt[0] += t2 - t1
119
+
120
+ # Inference
121
+ visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
122
+ pred = model(im, augment=augment, visualize=visualize)
123
+ t3 = time_sync()
124
+ dt[1] += t3 - t2
125
+
126
+ # NMS
127
+ pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
128
+ dt[2] += time_sync() - t3
129
+
130
+ # Second-stage classifier (optional)
131
+ # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
132
+
133
+ # Process predictions
134
+ for i, det in enumerate(pred): # per image
135
+ seen += 1
136
+ if webcam: # batch_size >= 1
137
+ p, im0, frame = path[i], im0s[i].copy(), dataset.count
138
+ s += f'{i}: '
139
+ else:
140
+ p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0)
141
+
142
+ p = Path(p) # to Path
143
+ save_path = str(save_dir / p.name) # im.jpg
144
+ txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # im.txt
145
+ s += '%gx%g ' % im.shape[2:] # print string
146
+ gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
147
+ imc = im0.copy() if save_crop else im0 # for save_crop
148
+ annotator = Annotator(im0, line_width=line_thickness, example=str(names))
149
+ if len(det):
150
+ # Rescale boxes from img_size to im0 size
151
+ det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
152
+
153
+ # Print results
154
+ for c in det[:, -1].unique():
155
+ n = (det[:, -1] == c).sum() # detections per class
156
+ s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
157
+
158
+ # Write results
159
+ for *xyxy, conf, cls in reversed(det):
160
+ if save_txt: # Write to file
161
+ xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
162
+ line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
163
+ with open(f'{txt_path}.txt', 'a') as f:
164
+ f.write(('%g ' * len(line)).rstrip() % line + '\n')
165
+
166
+ if save_img or save_crop or view_img: # Add bbox to image
167
+ c = int(cls) # integer class
168
+ label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
169
+ annotator.box_label(xyxy, label, color=colors(c, True))
170
+ if save_crop:
171
+ save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)
172
+
173
+ # Stream results
174
+ im0 = annotator.result()
175
+ if view_img:
176
+ if p not in windows:
177
+ windows.append(p)
178
+ cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux)
179
+ cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
180
+ cv2.imshow(str(p), im0)
181
+ cv2.waitKey(1) # 1 millisecond
182
+
183
+ # Save results (image with detections)
184
+ if save_img:
185
+ if dataset.mode == 'image':
186
+ cv2.imwrite(save_path, im0)
187
+ else: # 'video' or 'stream'
188
+ if vid_path[i] != save_path: # new video
189
+ vid_path[i] = save_path
190
+ if isinstance(vid_writer[i], cv2.VideoWriter):
191
+ vid_writer[i].release() # release previous video writer
192
+ if vid_cap: # video
193
+ fps = vid_cap.get(cv2.CAP_PROP_FPS)
194
+ w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
195
+ h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
196
+ else: # stream
197
+ fps, w, h = 30, im0.shape[1], im0.shape[0]
198
+ save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos
199
+ vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
200
+ vid_writer[i].write(im0)
201
+
202
+ # Print time (inference-only)
203
+ LOGGER.info(f'{s}Done. ({t3 - t2:.3f}s)')
204
+
205
+ # Print results
206
+ t = tuple(x / seen * 1E3 for x in dt) # speeds per image
207
+ LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t)
208
+ if save_txt or save_img:
209
+ s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
210
+ LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
211
+ if update:
212
+ strip_optimizer(weights) # update model (to fix SourceChangeWarning)
213
+
214
+
215
+ def parse_opt():
216
+ parser = argparse.ArgumentParser()
217
+ parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)')
218
+ parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob, 0 for webcam')
219
+ parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path')
220
+ parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
221
+ parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')
222
+ parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')
223
+ parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image')
224
+ parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
225
+ parser.add_argument('--view-img', action='store_true', help='show results')
226
+ parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
227
+ parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
228
+ parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')
229
+ parser.add_argument('--nosave', action='store_true', help='do not save images/videos')
230
+ parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
231
+ parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
232
+ parser.add_argument('--augment', action='store_true', help='augmented inference')
233
+ parser.add_argument('--visualize', action='store_true', help='visualize features')
234
+ parser.add_argument('--update', action='store_true', help='update all models')
235
+ parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name')
236
+ parser.add_argument('--name', default='exp', help='save results to project/name')
237
+ parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
238
+ parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')
239
+ parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')
240
+ parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')
241
+ parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
242
+ parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
243
+ opt = parser.parse_args()
244
+ opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
245
+ print_args(vars(opt))
246
+ return opt
247
+
248
+
249
+ def main(opt):
250
+ check_requirements(exclude=('tensorboard', 'thop'))
251
+ run(**vars(opt))
252
+
253
+
254
+ if __name__ == "__main__":
255
+ opt = parse_opt()
256
+ main(opt)
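
For reference, `detect.py` can also be driven programmatically: the keyword arguments of `run()` above match the CLI flags defined in `parse_opt()`. A minimal sketch, assuming the repository root is on the Python path and a `yolov5s.pt` checkpoint is available:

```python
# Minimal sketch, not part of the diff above; argument names mirror the run() signature in detect.py.
import detect

detect.run(
    weights='yolov5s.pt',    # model weights
    source='data/images',    # file/dir/URL/glob, 0 for webcam
    imgsz=(640, 640),        # inference size (height, width)
    conf_thres=0.25,         # confidence threshold
    save_txt=True,           # also write *.txt label files under runs/detect/exp*/labels
)
```
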
export.py ADDED
@@ -0,0 +1,610 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit
4
+
5
+ Format | `export.py --include` | Model
6
+ --- | --- | ---
7
+ PyTorch | - | yolov5s.pt
8
+ TorchScript | `torchscript` | yolov5s.torchscript
9
+ ONNX | `onnx` | yolov5s.onnx
10
+ OpenVINO | `openvino` | yolov5s_openvino_model/
11
+ TensorRT | `engine` | yolov5s.engine
12
+ CoreML | `coreml` | yolov5s.mlmodel
13
+ TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/
14
+ TensorFlow GraphDef | `pb` | yolov5s.pb
15
+ TensorFlow Lite | `tflite` | yolov5s.tflite
16
+ TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite
17
+ TensorFlow.js | `tfjs` | yolov5s_web_model/
18
+
19
+ Requirements:
20
+ $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU
21
+ $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU
22
+
23
+ Usage:
24
+ $ python path/to/export.py --weights yolov5s.pt --include torchscript onnx openvino engine coreml tflite ...
25
+
26
+ Inference:
27
+ $ python path/to/detect.py --weights yolov5s.pt # PyTorch
28
+ yolov5s.torchscript # TorchScript
29
+ yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
30
+ yolov5s.xml # OpenVINO
31
+ yolov5s.engine # TensorRT
32
+ yolov5s.mlmodel # CoreML (macOS-only)
33
+ yolov5s_saved_model # TensorFlow SavedModel
34
+ yolov5s.pb # TensorFlow GraphDef
35
+ yolov5s.tflite # TensorFlow Lite
36
+ yolov5s_edgetpu.tflite # TensorFlow Edge TPU
37
+
38
+ TensorFlow.js:
39
+ $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
40
+ $ npm install
41
+ $ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model
42
+ $ npm start
43
+ """
44
+
45
+ import argparse
46
+ import json
47
+ import os
48
+ import platform
49
+ import subprocess
50
+ import sys
51
+ import time
52
+ import warnings
53
+ from pathlib import Path
54
+
55
+ import pandas as pd
56
+ import torch
57
+ import yaml
58
+ from torch.utils.mobile_optimizer import optimize_for_mobile
59
+
60
+ FILE = Path(__file__).resolve()
61
+ ROOT = FILE.parents[0] # YOLOv5 root directory
62
+ if str(ROOT) not in sys.path:
63
+ sys.path.append(str(ROOT)) # add ROOT to PATH
64
+ if platform.system() != 'Windows':
65
+ ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
66
+
67
+ from models.experimental import attempt_load
68
+ from models.yolo import Detect
69
+ from utils.dataloaders import LoadImages
70
+ from utils.general import (LOGGER, check_dataset, check_img_size, check_requirements, check_version, colorstr,
71
+ file_size, print_args, url2file)
72
+ from utils.torch_utils import select_device
73
+
74
+
75
+ def export_formats():
76
+ # YOLOv5 export formats
77
+ x = [
78
+ ['PyTorch', '-', '.pt', True, True],
79
+ ['TorchScript', 'torchscript', '.torchscript', True, True],
80
+ ['ONNX', 'onnx', '.onnx', True, True],
81
+ ['OpenVINO', 'openvino', '_openvino_model', True, False],
82
+ ['TensorRT', 'engine', '.engine', False, True],
83
+ ['CoreML', 'coreml', '.mlmodel', True, False],
84
+ ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True],
85
+ ['TensorFlow GraphDef', 'pb', '.pb', True, True],
86
+ ['TensorFlow Lite', 'tflite', '.tflite', True, False],
87
+ ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False],
88
+ ['TensorFlow.js', 'tfjs', '_web_model', False, False],]
89
+ return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])
90
+
91
+
92
+ def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):
93
+ # YOLOv5 TorchScript model export
94
+ try:
95
+ LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
96
+ f = file.with_suffix('.torchscript')
97
+
98
+ ts = torch.jit.trace(model, im, strict=False)
99
+ d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names}
100
+ extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap()
101
+ if optimize: # https://pytorch.org/tutorials/recipes/mobile_interpreter.html
102
+ optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files)
103
+ else:
104
+ ts.save(str(f), _extra_files=extra_files)
105
+
106
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
107
+ return f
108
+ except Exception as e:
109
+ LOGGER.info(f'{prefix} export failure: {e}')
110
+
111
+
112
+ def export_onnx(model, im, file, opset, train, dynamic, simplify, prefix=colorstr('ONNX:')):
113
+ # YOLOv5 ONNX export
114
+ try:
115
+ check_requirements(('onnx',))
116
+ import onnx
117
+
118
+ LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
119
+ f = file.with_suffix('.onnx')
120
+
121
+ torch.onnx.export(
122
+ model.cpu() if dynamic else model, # --dynamic only compatible with cpu
123
+ im.cpu() if dynamic else im,
124
+ f,
125
+ verbose=False,
126
+ opset_version=opset,
127
+ training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
128
+ do_constant_folding=not train,
129
+ input_names=['images'],
130
+ output_names=['output'],
131
+ dynamic_axes={
132
+ 'images': {
133
+ 0: 'batch',
134
+ 2: 'height',
135
+ 3: 'width'}, # shape(1,3,640,640)
136
+ 'output': {
137
+ 0: 'batch',
138
+ 1: 'anchors'} # shape(1,25200,85)
139
+ } if dynamic else None)
140
+
141
+ # Checks
142
+ model_onnx = onnx.load(f) # load onnx model
143
+ onnx.checker.check_model(model_onnx) # check onnx model
144
+
145
+ # Metadata
146
+ d = {'stride': int(max(model.stride)), 'names': model.names}
147
+ for k, v in d.items():
148
+ meta = model_onnx.metadata_props.add()
149
+ meta.key, meta.value = k, str(v)
150
+ onnx.save(model_onnx, f)
151
+
152
+ # Simplify
153
+ if simplify:
154
+ try:
155
+ check_requirements(('onnx-simplifier',))
156
+ import onnxsim
157
+
158
+ LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
159
+ model_onnx, check = onnxsim.simplify(model_onnx,
160
+ dynamic_input_shape=dynamic,
161
+ input_shapes={'images': list(im.shape)} if dynamic else None)
162
+ assert check, 'assert check failed'
163
+ onnx.save(model_onnx, f)
164
+ except Exception as e:
165
+ LOGGER.info(f'{prefix} simplifier failure: {e}')
166
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
167
+ return f
168
+ except Exception as e:
169
+ LOGGER.info(f'{prefix} export failure: {e}')
170
+
171
+
172
+ def export_openvino(model, file, half, prefix=colorstr('OpenVINO:')):
173
+ # YOLOv5 OpenVINO export
174
+ try:
175
+ check_requirements(('openvino-dev',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/
176
+ import openvino.inference_engine as ie
177
+
178
+ LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...')
179
+ f = str(file).replace('.pt', f'_openvino_model{os.sep}')
180
+
181
+ cmd = f"mo --input_model {file.with_suffix('.onnx')} --output_dir {f} --data_type {'FP16' if half else 'FP32'}"
182
+ subprocess.check_output(cmd.split()) # export
183
+ with open(Path(f) / file.with_suffix('.yaml').name, 'w') as g:
184
+ yaml.dump({'stride': int(max(model.stride)), 'names': model.names}, g) # add metadata.yaml
185
+
186
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
187
+ return f
188
+ except Exception as e:
189
+ LOGGER.info(f'\n{prefix} export failure: {e}')
190
+
191
+
192
+ def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')):
193
+ # YOLOv5 CoreML export
194
+ try:
195
+ check_requirements(('coremltools',))
196
+ import coremltools as ct
197
+
198
+ LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
199
+ f = file.with_suffix('.mlmodel')
200
+
201
+ ts = torch.jit.trace(model, im, strict=False) # TorchScript model
202
+ ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
203
+ bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None)
204
+ if bits < 32:
205
+ if platform.system() == 'Darwin': # quantization only supported on macOS
206
+ with warnings.catch_warnings():
207
+ warnings.filterwarnings("ignore", category=DeprecationWarning) # suppress numpy==1.20 float warning
208
+ ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode)
209
+ else:
210
+ print(f'{prefix} quantization only supported on macOS, skipping...')
211
+ ct_model.save(f)
212
+
213
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
214
+ return ct_model, f
215
+ except Exception as e:
216
+ LOGGER.info(f'\n{prefix} export failure: {e}')
217
+ return None, None
218
+
219
+
220
+ def export_engine(model, im, file, train, half, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):
221
+ # YOLOv5 TensorRT export https://developer.nvidia.com/tensorrt
222
+ try:
223
+ assert im.device.type != 'cpu', 'export running on CPU but must be on GPU, i.e. `python export.py --device 0`'
224
+ try:
225
+ import tensorrt as trt
226
+ except Exception:
227
+ if platform.system() == 'Linux':
228
+ check_requirements(('nvidia-tensorrt',), cmds=('-U --index-url https://pypi.ngc.nvidia.com',))
229
+ import tensorrt as trt
230
+
231
+ if trt.__version__[0] == '7': # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012
232
+ grid = model.model[-1].anchor_grid
233
+ model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid]
234
+ export_onnx(model, im, file, 12, train, False, simplify) # opset 12
235
+ model.model[-1].anchor_grid = grid
236
+ else: # TensorRT >= 8
237
+ check_version(trt.__version__, '8.0.0', hard=True) # require tensorrt>=8.0.0
238
+ export_onnx(model, im, file, 13, train, False, simplify) # opset 13
239
+ onnx = file.with_suffix('.onnx')
240
+
241
+ LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...')
242
+ assert onnx.exists(), f'failed to export ONNX file: {onnx}'
243
+ f = file.with_suffix('.engine') # TensorRT engine file
244
+ logger = trt.Logger(trt.Logger.INFO)
245
+ if verbose:
246
+ logger.min_severity = trt.Logger.Severity.VERBOSE
247
+
248
+ builder = trt.Builder(logger)
249
+ config = builder.create_builder_config()
250
+ config.max_workspace_size = workspace * 1 << 30
251
+ # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace << 30) # fix TRT 8.4 deprecation notice
252
+
253
+ flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
254
+ network = builder.create_network(flag)
255
+ parser = trt.OnnxParser(network, logger)
256
+ if not parser.parse_from_file(str(onnx)):
257
+ raise RuntimeError(f'failed to load ONNX file: {onnx}')
258
+
259
+ inputs = [network.get_input(i) for i in range(network.num_inputs)]
260
+ outputs = [network.get_output(i) for i in range(network.num_outputs)]
261
+ LOGGER.info(f'{prefix} Network Description:')
262
+ for inp in inputs:
263
+ LOGGER.info(f'{prefix}\tinput "{inp.name}" with shape {inp.shape} and dtype {inp.dtype}')
264
+ for out in outputs:
265
+ LOGGER.info(f'{prefix}\toutput "{out.name}" with shape {out.shape} and dtype {out.dtype}')
266
+
267
+ LOGGER.info(f'{prefix} building FP{16 if builder.platform_has_fast_fp16 and half else 32} engine in {f}')
268
+ if builder.platform_has_fast_fp16 and half:
269
+ config.set_flag(trt.BuilderFlag.FP16)
270
+ with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
271
+ t.write(engine.serialize())
272
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
273
+ return f
274
+ except Exception as e:
275
+ LOGGER.info(f'\n{prefix} export failure: {e}')
276
+
277
+
278
+ def export_saved_model(model,
279
+ im,
280
+ file,
281
+ dynamic,
282
+ tf_nms=False,
283
+ agnostic_nms=False,
284
+ topk_per_class=100,
285
+ topk_all=100,
286
+ iou_thres=0.45,
287
+ conf_thres=0.25,
288
+ keras=False,
289
+ prefix=colorstr('TensorFlow SavedModel:')):
290
+ # YOLOv5 TensorFlow SavedModel export
291
+ try:
292
+ import tensorflow as tf
293
+ from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
294
+
295
+ from models.tf import TFDetect, TFModel
296
+
297
+ LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
298
+ f = str(file).replace('.pt', '_saved_model')
299
+ batch_size, ch, *imgsz = list(im.shape) # BCHW
300
+
301
+ tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
302
+ im = tf.zeros((batch_size, *imgsz, ch)) # BHWC order for TensorFlow
303
+ _ = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
304
+ inputs = tf.keras.Input(shape=(*imgsz, ch), batch_size=None if dynamic else batch_size)
305
+ outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
306
+ keras_model = tf.keras.Model(inputs=inputs, outputs=outputs)
307
+ keras_model.trainable = False
308
+ keras_model.summary()
309
+ if keras:
310
+ keras_model.save(f, save_format='tf')
311
+ else:
312
+ spec = tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)
313
+ m = tf.function(lambda x: keras_model(x)) # full model
314
+ m = m.get_concrete_function(spec)
315
+ frozen_func = convert_variables_to_constants_v2(m)
316
+ tfm = tf.Module()
317
+ tfm.__call__ = tf.function(lambda x: frozen_func(x)[:4] if tf_nms else frozen_func(x)[0], [spec])
318
+ tfm.__call__(im)
319
+ tf.saved_model.save(tfm,
320
+ f,
321
+ options=tf.saved_model.SaveOptions(experimental_custom_gradients=False)
322
+ if check_version(tf.__version__, '2.6') else tf.saved_model.SaveOptions())
323
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
324
+ return keras_model, f
325
+ except Exception as e:
326
+ LOGGER.info(f'\n{prefix} export failure: {e}')
327
+ return None, None
328
+
329
+
330
+ def export_pb(keras_model, file, prefix=colorstr('TensorFlow GraphDef:')):
331
+ # YOLOv5 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
332
+ try:
333
+ import tensorflow as tf
334
+ from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
335
+
336
+ LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
337
+ f = file.with_suffix('.pb')
338
+
339
+ m = tf.function(lambda x: keras_model(x)) # full model
340
+ m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
341
+ frozen_func = convert_variables_to_constants_v2(m)
342
+ frozen_func.graph.as_graph_def()
343
+ tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)
344
+
345
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
346
+ return f
347
+ except Exception as e:
348
+ LOGGER.info(f'\n{prefix} export failure: {e}')
349
+
350
+
351
+ def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
352
+ # YOLOv5 TensorFlow Lite export
353
+ try:
354
+ import tensorflow as tf
355
+
356
+ LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
357
+ batch_size, ch, *imgsz = list(im.shape) # BCHW
358
+ f = str(file).replace('.pt', '-fp16.tflite')
359
+
360
+ converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
361
+ converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
362
+ converter.target_spec.supported_types = [tf.float16]
363
+ converter.optimizations = [tf.lite.Optimize.DEFAULT]
364
+ if int8:
365
+ from models.tf import representative_dataset_gen
366
+ dataset = LoadImages(check_dataset(data)['train'], img_size=imgsz, auto=False) # representative data
367
+ converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
368
+ converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
369
+ converter.target_spec.supported_types = []
370
+ converter.inference_input_type = tf.uint8 # or tf.int8
371
+ converter.inference_output_type = tf.uint8 # or tf.int8
372
+ converter.experimental_new_quantizer = True
373
+ f = str(file).replace('.pt', '-int8.tflite')
374
+ if nms or agnostic_nms:
375
+ converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)
376
+
377
+ tflite_model = converter.convert()
378
+ open(f, "wb").write(tflite_model)
379
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
380
+ return f
381
+ except Exception as e:
382
+ LOGGER.info(f'\n{prefix} export failure: {e}')
383
+
384
+
385
+ def export_edgetpu(file, prefix=colorstr('Edge TPU:')):
386
+ # YOLOv5 Edge TPU export https://coral.ai/docs/edgetpu/models-intro/
387
+ try:
388
+ cmd = 'edgetpu_compiler --version'
389
+ help_url = 'https://coral.ai/docs/edgetpu/compiler/'
390
+ assert platform.system() == 'Linux', f'export only supported on Linux. See {help_url}'
391
+ if subprocess.run(f'{cmd} >/dev/null', shell=True).returncode != 0:
392
+ LOGGER.info(f'\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}')
393
+ sudo = subprocess.run('sudo --version >/dev/null', shell=True).returncode == 0 # sudo installed on system
394
+ for c in (
395
+ 'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -',
396
+ 'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list',
397
+ 'sudo apt-get update', 'sudo apt-get install edgetpu-compiler'):
398
+ subprocess.run(c if sudo else c.replace('sudo ', ''), shell=True, check=True)
399
+ ver = subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1]
400
+
401
+ LOGGER.info(f'\n{prefix} starting export with Edge TPU compiler {ver}...')
402
+ f = str(file).replace('.pt', '-int8_edgetpu.tflite') # Edge TPU model
403
+ f_tfl = str(file).replace('.pt', '-int8.tflite') # TFLite model
404
+
405
+ cmd = f"edgetpu_compiler -s -o {file.parent} {f_tfl}"
406
+ subprocess.run(cmd.split(), check=True)
407
+
408
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
409
+ return f
410
+ except Exception as e:
411
+ LOGGER.info(f'\n{prefix} export failure: {e}')
412
+
413
+
414
+ def export_tfjs(file, prefix=colorstr('TensorFlow.js:')):
415
+ # YOLOv5 TensorFlow.js export
416
+ try:
417
+ check_requirements(('tensorflowjs',))
418
+ import re
419
+
420
+ import tensorflowjs as tfjs
421
+
422
+ LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
423
+ f = str(file).replace('.pt', '_web_model') # js dir
424
+ f_pb = file.with_suffix('.pb') # *.pb path
425
+ f_json = f'{f}/model.json' # *.json path
426
+
427
+ cmd = f'tensorflowjs_converter --input_format=tf_frozen_model ' \
428
+ f'--output_node_names=Identity,Identity_1,Identity_2,Identity_3 {f_pb} {f}'
429
+ subprocess.run(cmd.split())
430
+
431
+ with open(f_json) as j:
432
+ json = j.read()
433
+ with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order
434
+ subst = re.sub(
435
+ r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
436
+ r'"Identity.?.?": {"name": "Identity.?.?"}, '
437
+ r'"Identity.?.?": {"name": "Identity.?.?"}, '
438
+ r'"Identity.?.?": {"name": "Identity.?.?"}}}', r'{"outputs": {"Identity": {"name": "Identity"}, '
439
+ r'"Identity_1": {"name": "Identity_1"}, '
440
+ r'"Identity_2": {"name": "Identity_2"}, '
441
+ r'"Identity_3": {"name": "Identity_3"}}}', json)
442
+ j.write(subst)
443
+
444
+ LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)')
445
+ return f
446
+ except Exception as e:
447
+ LOGGER.info(f'\n{prefix} export failure: {e}')
448
+
449
+
450
+ @torch.no_grad()
451
+ def run(
452
+ data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
453
+ weights=ROOT / 'yolov5s.pt', # weights path
454
+ imgsz=(640, 640), # image (height, width)
455
+ batch_size=1, # batch size
456
+ device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu
457
+ include=('torchscript', 'onnx'), # include formats
458
+ half=False, # FP16 half-precision export
459
+ inplace=False, # set YOLOv5 Detect() inplace=True
460
+ train=False, # model.train() mode
461
+ keras=False, # use Keras
462
+ optimize=False, # TorchScript: optimize for mobile
463
+ int8=False, # CoreML/TF INT8 quantization
464
+ dynamic=False, # ONNX/TF: dynamic axes
465
+ simplify=False, # ONNX: simplify model
466
+ opset=12, # ONNX: opset version
467
+ verbose=False, # TensorRT: verbose log
468
+ workspace=4, # TensorRT: workspace size (GB)
469
+ nms=False, # TF: add NMS to model
470
+ agnostic_nms=False, # TF: add agnostic NMS to model
471
+ topk_per_class=100, # TF.js NMS: topk per class to keep
472
+ topk_all=100, # TF.js NMS: topk for all classes to keep
473
+ iou_thres=0.45, # TF.js NMS: IoU threshold
474
+ conf_thres=0.25, # TF.js NMS: confidence threshold
475
+ ):
476
+ t = time.time()
477
+ include = [x.lower() for x in include] # to lowercase
478
+ fmts = tuple(export_formats()['Argument'][1:]) # --include arguments
479
+ flags = [x in include for x in fmts]
480
+ assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}'
481
+ jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs = flags # export booleans
482
+ file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights) # PyTorch weights
483
+
484
+ # Load PyTorch model
485
+ device = select_device(device)
486
+ if half:
487
+ assert device.type != 'cpu' or coreml, '--half only compatible with GPU export, i.e. use --device 0'
488
+ assert not dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both'
489
+ model = attempt_load(weights, device=device, inplace=True, fuse=True) # load FP32 model
490
+ nc, names = model.nc, model.names # number of classes, class names
491
+
492
+ # Checks
493
+ imgsz *= 2 if len(imgsz) == 1 else 1 # expand
494
+ assert nc == len(names), f'Model class count {nc} != len(names) {len(names)}'
495
+ if optimize:
496
+ assert device.type != 'cuda', '--optimize not compatible with cuda devices, i.e. use --device cpu'
497
+
498
+ # Input
499
+ gs = int(max(model.stride)) # grid size (max stride)
500
+ imgsz = [check_img_size(x, gs) for x in imgsz] # verify img_size are gs-multiples
501
+ im = torch.zeros(batch_size, 3, *imgsz).to(device) # image size(1,3,320,192) BCHW iDetection
502
+
503
+ # Update model
504
+ model.train() if train else model.eval() # training mode = no Detect() layer grid construction
505
+ for k, m in model.named_modules():
506
+ if isinstance(m, Detect):
507
+ m.inplace = inplace
508
+ m.onnx_dynamic = dynamic
509
+ m.export = True
510
+
511
+ for _ in range(2):
512
+ y = model(im) # dry runs
513
+ if half and not coreml:
514
+ im, model = im.half(), model.half() # to FP16
515
+ shape = tuple(y[0].shape) # model output shape
516
+ LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)")
517
+
518
+ # Exports
519
+ f = [''] * 10 # exported filenames
520
+ warnings.filterwarnings(action='ignore', category=torch.jit.TracerWarning) # suppress TracerWarning
521
+ if jit:
522
+ f[0] = export_torchscript(model, im, file, optimize)
523
+ if engine: # TensorRT required before ONNX
524
+ f[1] = export_engine(model, im, file, train, half, simplify, workspace, verbose)
525
+ if onnx or xml: # OpenVINO requires ONNX
526
+ f[2] = export_onnx(model, im, file, opset, train, dynamic, simplify)
527
+ if xml: # OpenVINO
528
+ f[3] = export_openvino(model, file, half)
529
+ if coreml:
530
+ _, f[4] = export_coreml(model, im, file, int8, half)
531
+
532
+ # TensorFlow Exports
533
+ if any((saved_model, pb, tflite, edgetpu, tfjs)):
534
+ if int8 or edgetpu: # TFLite --int8 bug https://github.com/ultralytics/yolov5/issues/5707
535
+ check_requirements(('flatbuffers==1.12',)) # required before `import tensorflow`
536
+ assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.'
537
+ model, f[5] = export_saved_model(model.cpu(),
538
+ im,
539
+ file,
540
+ dynamic,
541
+ tf_nms=nms or agnostic_nms or tfjs,
542
+ agnostic_nms=agnostic_nms or tfjs,
543
+ topk_per_class=topk_per_class,
544
+ topk_all=topk_all,
545
+ iou_thres=iou_thres,
546
+ conf_thres=conf_thres,
547
+ keras=keras)
548
+ if pb or tfjs: # pb prerequisite to tfjs
549
+ f[6] = export_pb(model, file)
550
+ if tflite or edgetpu:
551
+ f[7] = export_tflite(model, im, file, int8=int8 or edgetpu, data=data, nms=nms, agnostic_nms=agnostic_nms)
552
+ if edgetpu:
553
+ f[8] = export_edgetpu(file)
554
+ if tfjs:
555
+ f[9] = export_tfjs(file)
556
+
557
+ # Finish
558
+ f = [str(x) for x in f if x] # filter out '' and None
559
+ if any(f):
560
+ h = '--half' if half else '' # --half FP16 inference arg
561
+ LOGGER.info(f'\nExport complete ({time.time() - t:.2f}s)'
562
+ f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
563
+ f"\nDetect: python detect.py --weights {f[-1]} {h}"
564
+ f"\nValidate: python val.py --weights {f[-1]} {h}"
565
+ f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}')"
566
+ f"\nVisualize: https://netron.app")
567
+ return f # return list of exported files/dirs
568
+
569
+
570
+ def parse_opt():
571
+ parser = argparse.ArgumentParser()
572
+ parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
573
+ parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')
574
+ parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
575
+ parser.add_argument('--batch-size', type=int, default=1, help='batch size')
576
+ parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
577
+ parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
578
+ parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
579
+ parser.add_argument('--train', action='store_true', help='model.train() mode')
580
+ parser.add_argument('--keras', action='store_true', help='TF: use Keras')
581
+ parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
582
+ parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
583
+ parser.add_argument('--dynamic', action='store_true', help='ONNX/TF: dynamic axes')
584
+ parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
585
+ parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version')
586
+ parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')
587
+ parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
588
+ parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')
589
+ parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')
590
+ parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
591
+ parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
592
+ parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
593
+ parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
594
+ parser.add_argument('--include',
595
+ nargs='+',
596
+ default=['torchscript', 'onnx'],
597
+ help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs')
598
+ opt = parser.parse_args()
599
+ print_args(vars(opt))
600
+ return opt
601
+
602
+
603
+ def main(opt):
604
+ for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]):
605
+ run(**vars(opt))
606
+
607
+
608
+ if __name__ == "__main__":
609
+ opt = parse_opt()
610
+ main(opt)
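
As with `detect.py`, the exporter can be invoked from Python: the keyword arguments of `run()` mirror the CLI flags, and the `include` values correspond to the `Argument` column of `export_formats()`. A minimal sketch, assuming `export.py` is importable and its requirements are installed:

```python
# Minimal sketch, not part of the diff above; mirrors the run() signature in export.py.
import export

files = export.run(
    weights='yolov5s.pt',
    imgsz=(640, 640),
    include=('torchscript', 'onnx'),  # formats from export_formats()
    opset=12,                         # ONNX opset version
)
print(files)  # list of exported file/dir paths
```
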
hubconf.py ADDED
@@ -0,0 +1,145 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5/
4
+
5
+ Usage:
6
+ import torch
7
+ model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
8
+ model = torch.hub.load('ultralytics/yolov5:master', 'custom', 'path/to/yolov5s.onnx') # file from branch
9
+ """
10
+
11
+ import torch
12
+
13
+
14
+ def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None):
15
+ """Creates or loads a YOLOv5 model
16
+
17
+ Arguments:
18
+ name (str): model name 'yolov5s' or path 'path/to/best.pt'
19
+ pretrained (bool): load pretrained weights into the model
20
+ channels (int): number of input channels
21
+ classes (int): number of model classes
22
+ autoshape (bool): apply YOLOv5 .autoshape() wrapper to model
23
+ verbose (bool): print all information to screen
24
+ device (str, torch.device, None): device to use for model parameters
25
+
26
+ Returns:
27
+ YOLOv5 model
28
+ """
29
+ from pathlib import Path
30
+
31
+ from models.common import AutoShape, DetectMultiBackend
32
+ from models.yolo import Model
33
+ from utils.downloads import attempt_download
34
+ from utils.general import LOGGER, check_requirements, intersect_dicts, logging
35
+ from utils.torch_utils import select_device
36
+
37
+ if not verbose:
38
+ LOGGER.setLevel(logging.WARNING)
39
+ check_requirements(exclude=('tensorboard', 'thop', 'opencv-python'))
40
+ name = Path(name)
41
+ path = name.with_suffix('.pt') if name.suffix == '' and not name.is_dir() else name # checkpoint path
42
+ try:
43
+ device = select_device(device)
44
+
45
+ if pretrained and channels == 3 and classes == 80:
46
+ model = DetectMultiBackend(path, device=device, fuse=autoshape) # download/load FP32 model
47
+ # model = models.experimental.attempt_load(path, map_location=device) # download/load FP32 model
48
+ else:
49
+ cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0] # model.yaml path
50
+ model = Model(cfg, channels, classes) # create model
51
+ if pretrained:
52
+ ckpt = torch.load(attempt_download(path), map_location=device) # load
53
+ csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32
54
+ csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors']) # intersect
55
+ model.load_state_dict(csd, strict=False) # load
56
+ if len(ckpt['model'].names) == classes:
57
+ model.names = ckpt['model'].names # set class names attribute
58
+ if autoshape:
59
+ model = AutoShape(model) # for file/URI/PIL/cv2/np inputs and NMS
60
+ return model.to(device)
61
+
62
+ except Exception as e:
63
+ help_url = 'https://github.com/ultralytics/yolov5/issues/36'
64
+ s = f'{e}. Cache may be out of date, try `force_reload=True` or see {help_url} for help.'
65
+ raise Exception(s) from e
66
+
67
+
68
+ def custom(path='path/to/model.pt', autoshape=True, _verbose=True, device=None):
69
+ # YOLOv5 custom or local model
70
+ return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
71
+
72
+
73
+ def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
74
+ # YOLOv5-nano model https://github.com/ultralytics/yolov5
75
+ return _create('yolov5n', pretrained, channels, classes, autoshape, _verbose, device)
76
+
77
+
78
+ def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
79
+ # YOLOv5-small model https://github.com/ultralytics/yolov5
80
+ return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device)
81
+
82
+
83
+ def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
84
+ # YOLOv5-medium model https://github.com/ultralytics/yolov5
85
+ return _create('yolov5m', pretrained, channels, classes, autoshape, _verbose, device)
86
+
87
+
88
+ def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
89
+ # YOLOv5-large model https://github.com/ultralytics/yolov5
90
+ return _create('yolov5l', pretrained, channels, classes, autoshape, _verbose, device)
91
+
92
+
93
+ def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
94
+ # YOLOv5-xlarge model https://github.com/ultralytics/yolov5
95
+ return _create('yolov5x', pretrained, channels, classes, autoshape, _verbose, device)
96
+
97
+
98
+ def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
99
+ # YOLOv5-nano-P6 model https://github.com/ultralytics/yolov5
100
+ return _create('yolov5n6', pretrained, channels, classes, autoshape, _verbose, device)
101
+
102
+
103
+ def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
104
+ # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5
105
+ return _create('yolov5s6', pretrained, channels, classes, autoshape, _verbose, device)
106
+
107
+
108
+ def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
109
+ # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5
110
+ return _create('yolov5m6', pretrained, channels, classes, autoshape, _verbose, device)
111
+
112
+
113
+ def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
114
+ # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5
115
+ return _create('yolov5l6', pretrained, channels, classes, autoshape, _verbose, device)
116
+
117
+
118
+ def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None):
119
+ # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5
120
+ return _create('yolov5x6', pretrained, channels, classes, autoshape, _verbose, device)
121
+
122
+
123
+ if __name__ == '__main__':
124
+ model = _create(name='yolov5s', pretrained=True, channels=3, classes=80, autoshape=True, verbose=True)
125
+ # model = custom(path='path/to/model.pt') # custom
126
+
127
+ # Verify inference
128
+ from pathlib import Path
129
+
130
+ import numpy as np
131
+ from PIL import Image
132
+
133
+ from utils.general import cv2
134
+
135
+ imgs = [
136
+ 'data/images/zidane.jpg', # filename
137
+ Path('data/images/zidane.jpg'), # Path
138
+ 'https://ultralytics.com/images/zidane.jpg', # URI
139
+ cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV
140
+ Image.open('data/images/bus.jpg'), # PIL
141
+ np.zeros((320, 640, 3))] # numpy
142
+
143
+ results = model(imgs, size=320) # batched inference
144
+ results.print()
145
+ results.save()
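
The functions above are the entry points PyTorch Hub resolves when loading this repository. A minimal usage sketch based on the module docstring and the `__main__` block:

```python
# Minimal sketch based on hubconf.py above; requires internet access for the hub download.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained yolov5s
# model = torch.hub.load('ultralytics/yolov5', 'custom', 'path/to/best.pt')  # custom weights

results = model('https://ultralytics.com/images/zidane.jpg')  # batched inference also accepts lists
results.print()  # print results summary
results.save()   # save annotated images to runs/detect/exp*
```
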
models/__init__.py ADDED
File without changes
models/common.py ADDED
@@ -0,0 +1,748 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ Common modules
4
+ """
5
+
6
+ import json
7
+ import math
8
+ import platform
9
+ import warnings
10
+ from collections import OrderedDict, namedtuple
11
+ from copy import copy
12
+ from pathlib import Path
13
+
14
+ import cv2
15
+ import numpy as np
16
+ import pandas as pd
17
+ import requests
18
+ import torch
19
+ import torch.nn as nn
20
+ import yaml
21
+ from PIL import Image
22
+ from torch.cuda import amp
23
+
24
+ from utils.dataloaders import exif_transpose, letterbox
25
+ from utils.general import (LOGGER, check_requirements, check_suffix, check_version, colorstr, increment_path,
26
+ make_divisible, non_max_suppression, scale_coords, xywh2xyxy, xyxy2xywh)
27
+ from utils.plots import Annotator, colors, save_one_box
28
+ from utils.torch_utils import copy_attr, time_sync
29
+
30
+
31
+ def autopad(k, p=None): # kernel, padding
32
+ # Pad to 'same'
33
+ if p is None:
34
+ p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
35
+ return p
36
+
37
+
38
+ class Conv(nn.Module):
39
+ # Standard convolution
40
+ def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
41
+ super().__init__()
42
+ self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
43
+ self.bn = nn.BatchNorm2d(c2)
44
+ self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
45
+
46
+ def forward(self, x):
47
+ return self.act(self.bn(self.conv(x)))
48
+
49
+ def forward_fuse(self, x):
50
+ return self.act(self.conv(x))
51
+
52
+
53
+ class DWConv(Conv):
54
+ # Depth-wise convolution class
55
+ def __init__(self, c1, c2, k=1, s=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
56
+ super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)
57
+
58
+
59
+ class DWConvTranspose2d(nn.ConvTranspose2d):
60
+ # Depth-wise transpose convolution class
61
+ def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0): # ch_in, ch_out, kernel, stride, padding, padding_out
62
+ super().__init__(c1, c2, k, s, p1, p2, groups=math.gcd(c1, c2))
63
+
64
+
65
+ class TransformerLayer(nn.Module):
66
+ # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance)
67
+ def __init__(self, c, num_heads):
68
+ super().__init__()
69
+ self.q = nn.Linear(c, c, bias=False)
70
+ self.k = nn.Linear(c, c, bias=False)
71
+ self.v = nn.Linear(c, c, bias=False)
72
+ self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads)
73
+ self.fc1 = nn.Linear(c, c, bias=False)
74
+ self.fc2 = nn.Linear(c, c, bias=False)
75
+
76
+ def forward(self, x):
77
+ x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x
78
+ x = self.fc2(self.fc1(x)) + x
79
+ return x
80
+
81
+
82
+ class TransformerBlock(nn.Module):
83
+ # Vision Transformer https://arxiv.org/abs/2010.11929
84
+ def __init__(self, c1, c2, num_heads, num_layers):
85
+ super().__init__()
86
+ self.conv = None
87
+ if c1 != c2:
88
+ self.conv = Conv(c1, c2)
89
+ self.linear = nn.Linear(c2, c2) # learnable position embedding
90
+ self.tr = nn.Sequential(*(TransformerLayer(c2, num_heads) for _ in range(num_layers)))
91
+ self.c2 = c2
92
+
93
+ def forward(self, x):
94
+ if self.conv is not None:
95
+ x = self.conv(x)
96
+ b, _, w, h = x.shape
97
+ p = x.flatten(2).permute(2, 0, 1)
98
+ return self.tr(p + self.linear(p)).permute(1, 2, 0).reshape(b, self.c2, w, h)
99
+
100
+
101
+ class Bottleneck(nn.Module):
102
+ # Standard bottleneck
103
+ def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion
104
+ super().__init__()
105
+ c_ = int(c2 * e) # hidden channels
106
+ self.cv1 = Conv(c1, c_, 1, 1)
107
+ self.cv2 = Conv(c_, c2, 3, 1, g=g)
108
+ self.add = shortcut and c1 == c2
109
+
110
+ def forward(self, x):
111
+ return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
112
+
113
+
114
+ class BottleneckCSP(nn.Module):
115
+ # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
116
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
117
+ super().__init__()
118
+ c_ = int(c2 * e) # hidden channels
119
+ self.cv1 = Conv(c1, c_, 1, 1)
120
+ self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
121
+ self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
122
+ self.cv4 = Conv(2 * c_, c2, 1, 1)
123
+ self.bn = nn.BatchNorm2d(2 * c_) # applied to cat(cv2, cv3)
124
+ self.act = nn.SiLU()
125
+ self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
126
+
127
+ def forward(self, x):
128
+ y1 = self.cv3(self.m(self.cv1(x)))
129
+ y2 = self.cv2(x)
130
+ return self.cv4(self.act(self.bn(torch.cat((y1, y2), 1))))
131
+
132
+
133
+ class CrossConv(nn.Module):
134
+ # Cross Convolution Downsample
135
+ def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False):
136
+ # ch_in, ch_out, kernel, stride, groups, expansion, shortcut
137
+ super().__init__()
138
+ c_ = int(c2 * e) # hidden channels
139
+ self.cv1 = Conv(c1, c_, (1, k), (1, s))
140
+ self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g)
141
+ self.add = shortcut and c1 == c2
142
+
143
+ def forward(self, x):
144
+ return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))
145
+
146
+
147
+ class C3(nn.Module):
148
+ # CSP Bottleneck with 3 convolutions
149
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion
150
+ super().__init__()
151
+ c_ = int(c2 * e) # hidden channels
152
+ self.cv1 = Conv(c1, c_, 1, 1)
153
+ self.cv2 = Conv(c1, c_, 1, 1)
154
+ self.cv3 = Conv(2 * c_, c2, 1) # optional act=FReLU(c2)
155
+ self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)))
156
+
157
+ def forward(self, x):
158
+ return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
159
+
160
+
161
+ class C3x(C3):
162
+ # C3 module with cross-convolutions
163
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
164
+ super().__init__(c1, c2, n, shortcut, g, e)
165
+ c_ = int(c2 * e)
166
+ self.m = nn.Sequential(*(CrossConv(c_, c_, 3, 1, g, 1.0, shortcut) for _ in range(n)))
167
+
168
+
169
+ class C3TR(C3):
170
+ # C3 module with TransformerBlock()
171
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
172
+ super().__init__(c1, c2, n, shortcut, g, e)
173
+ c_ = int(c2 * e)
174
+ self.m = TransformerBlock(c_, c_, 4, n)
175
+
176
+
177
+ class C3SPP(C3):
178
+ # C3 module with SPP()
179
+ def __init__(self, c1, c2, k=(5, 9, 13), n=1, shortcut=True, g=1, e=0.5):
180
+ super().__init__(c1, c2, n, shortcut, g, e)
181
+ c_ = int(c2 * e)
182
+ self.m = SPP(c_, c_, k)
183
+
184
+
185
+ class C3Ghost(C3):
186
+ # C3 module with GhostBottleneck()
187
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
188
+ super().__init__(c1, c2, n, shortcut, g, e)
189
+ c_ = int(c2 * e) # hidden channels
190
+ self.m = nn.Sequential(*(GhostBottleneck(c_, c_) for _ in range(n)))
191
+
192
+
193
+ class SPP(nn.Module):
194
+ # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
195
+ def __init__(self, c1, c2, k=(5, 9, 13)):
196
+ super().__init__()
197
+ c_ = c1 // 2 # hidden channels
198
+ self.cv1 = Conv(c1, c_, 1, 1)
199
+ self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
200
+ self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
201
+
202
+ def forward(self, x):
203
+ x = self.cv1(x)
204
+ with warnings.catch_warnings():
205
+ warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
206
+ return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
207
+
208
+
209
+ class SPPF(nn.Module):
210
+ # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
211
+ def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13))
212
+ super().__init__()
213
+ c_ = c1 // 2 # hidden channels
214
+ self.cv1 = Conv(c1, c_, 1, 1)
215
+ self.cv2 = Conv(c_ * 4, c2, 1, 1)
216
+ self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
217
+
218
+ def forward(self, x):
219
+ x = self.cv1(x)
220
+ with warnings.catch_warnings():
221
+ warnings.simplefilter('ignore') # suppress torch 1.9.0 max_pool2d() warning
222
+ y1 = self.m(x)
223
+ y2 = self.m(y1)
224
+ return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
225
+
226
+
227
+ class Focus(nn.Module):
228
+ # Focus wh information into c-space
229
+ def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
230
+ super().__init__()
231
+ self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
232
+ # self.contract = Contract(gain=2)
233
+
234
+ def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2)
235
+ return self.conv(torch.cat((x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]), 1))
236
+ # return self.conv(self.contract(x))
237
+
238
+
239
+ class GhostConv(nn.Module):
240
+ # Ghost Convolution https://github.com/huawei-noah/ghostnet
241
+ def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups
242
+ super().__init__()
243
+ c_ = c2 // 2 # hidden channels
244
+ self.cv1 = Conv(c1, c_, k, s, None, g, act)
245
+ self.cv2 = Conv(c_, c_, 5, 1, None, c_, act)
246
+
247
+ def forward(self, x):
248
+ y = self.cv1(x)
249
+ return torch.cat((y, self.cv2(y)), 1)
250
+
251
+
252
+ class GhostBottleneck(nn.Module):
253
+ # Ghost Bottleneck https://github.com/huawei-noah/ghostnet
254
+ def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride
255
+ super().__init__()
256
+ c_ = c2 // 2
257
+ self.conv = nn.Sequential(
258
+ GhostConv(c1, c_, 1, 1), # pw
259
+ DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw
260
+ GhostConv(c_, c2, 1, 1, act=False)) # pw-linear
261
+ self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), Conv(c1, c2, 1, 1,
262
+ act=False)) if s == 2 else nn.Identity()
263
+
264
+ def forward(self, x):
265
+ return self.conv(x) + self.shortcut(x)
266
+
267
+
268
+ class Contract(nn.Module):
269
+ # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40)
270
+ def __init__(self, gain=2):
271
+ super().__init__()
272
+ self.gain = gain
273
+
274
+ def forward(self, x):
275
+ b, c, h, w = x.size() # assert (h / s == 0) and (W / s == 0), 'Indivisible gain'
276
+ s = self.gain
277
+ x = x.view(b, c, h // s, s, w // s, s) # x(1,64,40,2,40,2)
278
+ x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40)
279
+ return x.view(b, c * s * s, h // s, w // s) # x(1,256,40,40)
280
+
281
+
282
+ class Expand(nn.Module):
283
+ # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160)
284
+ def __init__(self, gain=2):
285
+ super().__init__()
286
+ self.gain = gain
287
+
288
+ def forward(self, x):
289
+ b, c, h, w = x.size() # assert C / s ** 2 == 0, 'Indivisible gain'
290
+ s = self.gain
291
+ x = x.view(b, s, s, c // s ** 2, h, w) # x(1,2,2,16,80,80)
292
+ x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2)
293
+ return x.view(b, c // s ** 2, h * s, w * s) # x(1,16,160,160)
294
+
295
+
296
+ class Concat(nn.Module):
297
+ # Concatenate a list of tensors along dimension
298
+ def __init__(self, dimension=1):
299
+ super().__init__()
300
+ self.d = dimension
301
+
302
+ def forward(self, x):
303
+ return torch.cat(x, self.d)
304
+
305
+
306
+ class DetectMultiBackend(nn.Module):
307
+ # YOLOv5 MultiBackend class for python inference on various backends
308
+ def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False, data=None, fp16=False, fuse=True):
309
+ # Usage:
310
+ # PyTorch: weights = *.pt
311
+ # TorchScript: *.torchscript
312
+ # ONNX Runtime: *.onnx
313
+ # ONNX OpenCV DNN: *.onnx with --dnn
314
+ # OpenVINO: *.xml
315
+ # CoreML: *.mlmodel
316
+ # TensorRT: *.engine
317
+ # TensorFlow SavedModel: *_saved_model
318
+ # TensorFlow GraphDef: *.pb
319
+ # TensorFlow Lite: *.tflite
320
+ # TensorFlow Edge TPU: *_edgetpu.tflite
321
+ from models.experimental import attempt_download, attempt_load # scoped to avoid circular import
322
+
323
+ super().__init__()
324
+ w = str(weights[0] if isinstance(weights, list) else weights)
325
+ pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs = self.model_type(w) # get backend
326
+ w = attempt_download(w) # download if not local
327
+ fp16 &= (pt or jit or onnx or engine) and device.type != 'cpu' # FP16
328
+ stride, names = 32, [f'class{i}' for i in range(1000)] # assign defaults
329
+ if data: # assign class names (optional)
330
+ with open(data, errors='ignore') as f:
331
+ names = yaml.safe_load(f)['names']
332
+
333
+ if pt: # PyTorch
334
+ model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
335
+ stride = max(int(model.stride.max()), 32) # model stride
336
+ names = model.module.names if hasattr(model, 'module') else model.names # get class names
337
+ model.half() if fp16 else model.float()
338
+ self.model = model # explicitly assign for to(), cpu(), cuda(), half()
339
+ elif jit: # TorchScript
340
+ LOGGER.info(f'Loading {w} for TorchScript inference...')
341
+ extra_files = {'config.txt': ''} # model metadata
342
+ model = torch.jit.load(w, _extra_files=extra_files)
343
+ model.half() if fp16 else model.float()
344
+ if extra_files['config.txt']:
345
+ d = json.loads(extra_files['config.txt']) # extra_files dict
346
+ stride, names = int(d['stride']), d['names']
347
+ elif dnn: # ONNX OpenCV DNN
348
+ LOGGER.info(f'Loading {w} for ONNX OpenCV DNN inference...')
349
+ check_requirements(('opencv-python>=4.5.4',))
350
+ net = cv2.dnn.readNetFromONNX(w)
351
+ elif onnx: # ONNX Runtime
352
+ LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
353
+ cuda = torch.cuda.is_available()
354
+ check_requirements(('onnx', 'onnxruntime-gpu' if cuda else 'onnxruntime'))
355
+ import onnxruntime
356
+ providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if cuda else ['CPUExecutionProvider']
357
+ session = onnxruntime.InferenceSession(w, providers=providers)
358
+ meta = session.get_modelmeta().custom_metadata_map # metadata
359
+ if 'stride' in meta:
360
+ stride, names = int(meta['stride']), eval(meta['names'])
361
+ elif xml: # OpenVINO
362
+ LOGGER.info(f'Loading {w} for OpenVINO inference...')
363
+ check_requirements(('openvino',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/
364
+ from openvino.runtime import Core, Layout, get_batch
365
+ ie = Core()
366
+ if not Path(w).is_file(): # if not *.xml
367
+ w = next(Path(w).glob('*.xml')) # get *.xml file from *_openvino_model dir
368
+ network = ie.read_model(model=w, weights=Path(w).with_suffix('.bin'))
369
+ if network.get_parameters()[0].get_layout().empty:
370
+ network.get_parameters()[0].set_layout(Layout("NCHW"))
371
+ batch_dim = get_batch(network)
372
+ if batch_dim.is_static:
373
+ batch_size = batch_dim.get_length()
374
+ executable_network = ie.compile_model(network, device_name="CPU") # device_name="MYRIAD" for Intel NCS2
375
+ output_layer = next(iter(executable_network.outputs))
376
+ meta = Path(w).with_suffix('.yaml')
377
+ if meta.exists():
378
+ stride, names = self._load_metadata(meta) # load metadata
379
+ elif engine: # TensorRT
380
+ LOGGER.info(f'Loading {w} for TensorRT inference...')
381
+ import tensorrt as trt # https://developer.nvidia.com/nvidia-tensorrt-download
382
+ check_version(trt.__version__, '7.0.0', hard=True) # require tensorrt>=7.0.0
383
+ Binding = namedtuple('Binding', ('name', 'dtype', 'shape', 'data', 'ptr'))
384
+ logger = trt.Logger(trt.Logger.INFO)
385
+ with open(w, 'rb') as f, trt.Runtime(logger) as runtime:
386
+ model = runtime.deserialize_cuda_engine(f.read())
387
+ bindings = OrderedDict()
388
+ fp16 = False # default updated below
389
+ for index in range(model.num_bindings):
390
+ name = model.get_binding_name(index)
391
+ dtype = trt.nptype(model.get_binding_dtype(index))
392
+ shape = tuple(model.get_binding_shape(index))
393
+ data = torch.from_numpy(np.empty(shape, dtype=np.dtype(dtype))).to(device)
394
+ bindings[name] = Binding(name, dtype, shape, data, int(data.data_ptr()))
395
+ if model.binding_is_input(index) and dtype == np.float16:
396
+ fp16 = True
397
+ binding_addrs = OrderedDict((n, d.ptr) for n, d in bindings.items())
398
+ context = model.create_execution_context()
399
+ batch_size = bindings['images'].shape[0]
400
+ elif coreml: # CoreML
401
+ LOGGER.info(f'Loading {w} for CoreML inference...')
402
+ import coremltools as ct
403
+ model = ct.models.MLModel(w)
404
+ else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
405
+ if saved_model: # SavedModel
406
+ LOGGER.info(f'Loading {w} for TensorFlow SavedModel inference...')
407
+ import tensorflow as tf
408
+ keras = False # assume TF1 saved_model
409
+ model = tf.keras.models.load_model(w) if keras else tf.saved_model.load(w)
410
+ elif pb: # GraphDef https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt
411
+ LOGGER.info(f'Loading {w} for TensorFlow GraphDef inference...')
412
+ import tensorflow as tf
413
+
414
+ def wrap_frozen_graph(gd, inputs, outputs):
415
+ x = tf.compat.v1.wrap_function(lambda: tf.compat.v1.import_graph_def(gd, name=""), []) # wrapped
416
+ ge = x.graph.as_graph_element
417
+ return x.prune(tf.nest.map_structure(ge, inputs), tf.nest.map_structure(ge, outputs))
418
+
419
+ gd = tf.Graph().as_graph_def() # graph_def
420
+ with open(w, 'rb') as f:
421
+ gd.ParseFromString(f.read())
422
+ frozen_func = wrap_frozen_graph(gd, inputs="x:0", outputs="Identity:0")
423
+ elif tflite or edgetpu: # https://www.tensorflow.org/lite/guide/python#install_tensorflow_lite_for_python
424
+ try: # https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
425
+ from tflite_runtime.interpreter import Interpreter, load_delegate
426
+ except ImportError:
427
+ import tensorflow as tf
428
+ Interpreter, load_delegate = tf.lite.Interpreter, tf.lite.experimental.load_delegate,
429
+ if edgetpu: # Edge TPU https://coral.ai/software/#edgetpu-runtime
430
+ LOGGER.info(f'Loading {w} for TensorFlow Lite Edge TPU inference...')
431
+ delegate = {
432
+ 'Linux': 'libedgetpu.so.1',
433
+ 'Darwin': 'libedgetpu.1.dylib',
434
+ 'Windows': 'edgetpu.dll'}[platform.system()]
435
+ interpreter = Interpreter(model_path=w, experimental_delegates=[load_delegate(delegate)])
436
+ else: # Lite
437
+ LOGGER.info(f'Loading {w} for TensorFlow Lite inference...')
438
+ interpreter = Interpreter(model_path=w) # load TFLite model
439
+ interpreter.allocate_tensors() # allocate
440
+ input_details = interpreter.get_input_details() # inputs
441
+ output_details = interpreter.get_output_details() # outputs
442
+ elif tfjs:
443
+ raise Exception('ERROR: YOLOv5 TF.js inference is not supported')
444
+ else:
445
+ raise Exception(f'ERROR: {w} is not a supported format')
446
+ self.__dict__.update(locals()) # assign all variables to self
447
+
448
+ def forward(self, im, augment=False, visualize=False, val=False):
449
+ # YOLOv5 MultiBackend inference
450
+ b, ch, h, w = im.shape # batch, channel, height, width
451
+ if self.fp16 and im.dtype != torch.float16:
452
+ im = im.half() # to FP16
453
+
454
+ if self.pt: # PyTorch
455
+ y = self.model(im, augment=augment, visualize=visualize)[0]
456
+ elif self.jit: # TorchScript
457
+ y = self.model(im)[0]
458
+ elif self.dnn: # ONNX OpenCV DNN
459
+ im = im.cpu().numpy() # torch to numpy
460
+ self.net.setInput(im)
461
+ y = self.net.forward()
462
+ elif self.onnx: # ONNX Runtime
463
+ im = im.cpu().numpy() # torch to numpy
464
+ y = self.session.run([self.session.get_outputs()[0].name], {self.session.get_inputs()[0].name: im})[0]
465
+ elif self.xml: # OpenVINO
466
+ im = im.cpu().numpy() # FP32
467
+ y = self.executable_network([im])[self.output_layer]
468
+ elif self.engine: # TensorRT
469
+ assert im.shape == self.bindings['images'].shape, (im.shape, self.bindings['images'].shape)
470
+ self.binding_addrs['images'] = int(im.data_ptr())
471
+ self.context.execute_v2(list(self.binding_addrs.values()))
472
+ y = self.bindings['output'].data
473
+ elif self.coreml: # CoreML
474
+ im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
475
+ im = Image.fromarray((im[0] * 255).astype('uint8'))
476
+ # im = im.resize((192, 320), Image.ANTIALIAS)
477
+ y = self.model.predict({'image': im}) # coordinates are xywh normalized
478
+ if 'confidence' in y:
479
+ box = xywh2xyxy(y['coordinates'] * [[w, h, w, h]]) # xyxy pixels
480
+ conf, cls = y['confidence'].max(1), y['confidence'].argmax(1).astype(np.float32) # np.float removed in NumPy>=1.24
481
+ y = np.concatenate((box, conf.reshape(-1, 1), cls.reshape(-1, 1)), 1)
482
+ else:
483
+ k = 'var_' + str(sorted(int(k.replace('var_', '')) for k in y)[-1]) # output key
484
+ y = y[k] # output
485
+ else: # TensorFlow (SavedModel, GraphDef, Lite, Edge TPU)
486
+ im = im.permute(0, 2, 3, 1).cpu().numpy() # torch BCHW to numpy BHWC shape(1,320,192,3)
487
+ if self.saved_model: # SavedModel
488
+ y = (self.model(im, training=False) if self.keras else self.model(im)).numpy()
489
+ elif self.pb: # GraphDef
490
+ y = self.frozen_func(x=self.tf.constant(im)).numpy()
491
+ else: # Lite or Edge TPU
492
+ input, output = self.input_details[0], self.output_details[0]
493
+ int8 = input['dtype'] == np.uint8 # is TFLite quantized uint8 model
494
+ if int8:
495
+ scale, zero_point = input['quantization']
496
+ im = (im / scale + zero_point).astype(np.uint8) # de-scale
497
+ self.interpreter.set_tensor(input['index'], im)
498
+ self.interpreter.invoke()
499
+ y = self.interpreter.get_tensor(output['index'])
500
+ if int8:
501
+ scale, zero_point = output['quantization']
502
+ y = (y.astype(np.float32) - zero_point) * scale # re-scale
503
+ y[..., :4] *= [w, h, w, h] # xywh normalized to pixels
504
+
505
+ if isinstance(y, np.ndarray):
506
+ y = torch.tensor(y, device=self.device)
507
+ return (y, []) if val else y
508
+
509
+ def warmup(self, imgsz=(1, 3, 640, 640)):
510
+ # Warmup model by running inference once
511
+ warmup_types = self.pt, self.jit, self.onnx, self.engine, self.saved_model, self.pb
512
+ if any(warmup_types) and self.device.type != 'cpu':
513
+ im = torch.zeros(*imgsz, dtype=torch.half if self.fp16 else torch.float, device=self.device) # input
514
+ for _ in range(2 if self.jit else 1): #
515
+ self.forward(im) # warmup
516
+
517
+ @staticmethod
518
+ def model_type(p='path/to/model.pt'):
519
+ # Return model type from model path, i.e. path='path/to/model.onnx' -> type=onnx
520
+ from export import export_formats
521
+ suffixes = list(export_formats().Suffix) + ['.xml'] # export suffixes
522
+ check_suffix(p, suffixes) # checks
523
+ p = Path(p).name # eliminate trailing separators
524
+ pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, xml2 = (s in p for s in suffixes)
525
+ xml |= xml2 # *_openvino_model or *.xml
526
+ tflite &= not edgetpu # *.tflite
527
+ return pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs
528
+
529
+ @staticmethod
530
+ def _load_metadata(f='path/to/meta.yaml'):
531
+ # Load metadata from meta.yaml if it exists
532
+ with open(f, errors='ignore') as f:
533
+ d = yaml.safe_load(f)
534
+ return d['stride'], d['names'] # assign stride, names
535
+
536
+
537
+ class AutoShape(nn.Module):
538
+ # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
539
+ conf = 0.25 # NMS confidence threshold
540
+ iou = 0.45 # NMS IoU threshold
541
+ agnostic = False # NMS class-agnostic
542
+ multi_label = False # NMS multiple labels per box
543
+ classes = None # (optional list) filter by class, i.e. = [0, 15, 16] for COCO persons, cats and dogs
544
+ max_det = 1000 # maximum number of detections per image
545
+ amp = False # Automatic Mixed Precision (AMP) inference
546
+
547
+ def __init__(self, model, verbose=True):
548
+ super().__init__()
549
+ if verbose:
550
+ LOGGER.info('Adding AutoShape... ')
551
+ copy_attr(self, model, include=('yaml', 'nc', 'hyp', 'names', 'stride', 'abc'), exclude=()) # copy attributes
552
+ self.dmb = isinstance(model, DetectMultiBackend) # DetectMultiBackend() instance
553
+ self.pt = not self.dmb or model.pt # PyTorch model
554
+ self.model = model.eval()
555
+
556
+ def _apply(self, fn):
557
+ # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
558
+ self = super()._apply(fn)
559
+ if self.pt:
560
+ m = self.model.model.model[-1] if self.dmb else self.model.model[-1] # Detect()
561
+ m.stride = fn(m.stride)
562
+ m.grid = list(map(fn, m.grid))
563
+ if isinstance(m.anchor_grid, list):
564
+ m.anchor_grid = list(map(fn, m.anchor_grid))
565
+ return self
566
+
567
+ @torch.no_grad()
568
+ def forward(self, imgs, size=640, augment=False, profile=False):
569
+ # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
570
+ # file: imgs = 'data/images/zidane.jpg' # str or PosixPath
571
+ # URI: = 'https://ultralytics.com/images/zidane.jpg'
572
+ # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3)
573
+ # PIL: = Image.open('image.jpg') or ImageGrab.grab() # HWC x(640,1280,3)
574
+ # numpy: = np.zeros((640,1280,3)) # HWC
575
+ # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values)
576
+ # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] # list of images
577
+
578
+ t = [time_sync()]
579
+ p = next(self.model.parameters()) if self.pt else torch.zeros(1, device=self.model.device) # for device, type
580
+ autocast = self.amp and (p.device.type != 'cpu') # Automatic Mixed Precision (AMP) inference
581
+ if isinstance(imgs, torch.Tensor): # torch
582
+ with amp.autocast(autocast):
583
+ return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference
584
+
585
+ # Pre-process
586
+ n, imgs = (len(imgs), list(imgs)) if isinstance(imgs, (list, tuple)) else (1, [imgs]) # number, list of images
587
+ shape0, shape1, files = [], [], [] # image and inference shapes, filenames
588
+ for i, im in enumerate(imgs):
589
+ f = f'image{i}' # filename
590
+ if isinstance(im, (str, Path)): # filename or uri
591
+ im, f = Image.open(requests.get(im, stream=True).raw if str(im).startswith('http') else im), im
592
+ im = np.asarray(exif_transpose(im))
593
+ elif isinstance(im, Image.Image): # PIL Image
594
+ im, f = np.asarray(exif_transpose(im)), getattr(im, 'filename', f) or f
595
+ files.append(Path(f).with_suffix('.jpg').name)
596
+ if im.shape[0] < 5: # image in CHW
597
+ im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1)
598
+ im = im[..., :3] if im.ndim == 3 else np.tile(im[..., None], 3) # enforce 3ch input
599
+ s = im.shape[:2] # HWC
600
+ shape0.append(s) # image shape
601
+ g = (size / max(s)) # gain
602
+ shape1.append([y * g for y in s])
603
+ imgs[i] = im if im.data.contiguous else np.ascontiguousarray(im) # update
604
+ shape1 = [make_divisible(x, self.stride) if self.pt else size for x in np.array(shape1).max(0)] # inf shape
605
+ x = [letterbox(im, shape1, auto=False)[0] for im in imgs] # pad
606
+ x = np.ascontiguousarray(np.array(x).transpose((0, 3, 1, 2))) # stack and BHWC to BCHW
607
+ x = torch.from_numpy(x).to(p.device).type_as(p) / 255 # uint8 to fp16/32
608
+ t.append(time_sync())
609
+
610
+ with amp.autocast(autocast):
611
+ # Inference
612
+ y = self.model(x, augment, profile) # forward
613
+ t.append(time_sync())
614
+
615
+ # Post-process
616
+ y = non_max_suppression(y if self.dmb else y[0],
617
+ self.conf,
618
+ self.iou,
619
+ self.classes,
620
+ self.agnostic,
621
+ self.multi_label,
622
+ max_det=self.max_det) # NMS
623
+ for i in range(n):
624
+ scale_coords(shape1, y[i][:, :4], shape0[i])
625
+
626
+ t.append(time_sync())
627
+ return Detections(imgs, y, files, t, self.names, x.shape)
628
+
629
+
630
+ class Detections:
631
+ # YOLOv5 detections class for inference results
632
+ def __init__(self, imgs, pred, files, times=(0, 0, 0, 0), names=None, shape=None):
633
+ super().__init__()
634
+ d = pred[0].device # device
635
+ gn = [torch.tensor([*(im.shape[i] for i in [1, 0, 1, 0]), 1, 1], device=d) for im in imgs] # normalizations
636
+ self.imgs = imgs # list of images as numpy arrays
637
+ self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls)
638
+ self.names = names # class names
639
+ self.files = files # image filenames
640
+ self.times = times # profiling times
641
+ self.xyxy = pred # xyxy pixels
642
+ self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels
643
+ self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized
644
+ self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized
645
+ self.n = len(self.pred) # number of images (batch size)
646
+ self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms)
647
+ self.s = shape # inference BCHW shape
648
+
649
+ def display(self, pprint=False, show=False, save=False, crop=False, render=False, labels=True, save_dir=Path('')):
650
+ crops = []
651
+ for i, (im, pred) in enumerate(zip(self.imgs, self.pred)):
652
+ s = f'image {i + 1}/{len(self.pred)}: {im.shape[0]}x{im.shape[1]} ' # string
653
+ if pred.shape[0]:
654
+ for c in pred[:, -1].unique():
655
+ n = (pred[:, -1] == c).sum() # detections per class
656
+ s += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string
657
+ if show or save or render or crop:
658
+ annotator = Annotator(im, example=str(self.names))
659
+ for *box, conf, cls in reversed(pred): # xyxy, confidence, class
660
+ label = f'{self.names[int(cls)]} {conf:.2f}'
661
+ if crop:
662
+ file = save_dir / 'crops' / self.names[int(cls)] / self.files[i] if save else None
663
+ crops.append({
664
+ 'box': box,
665
+ 'conf': conf,
666
+ 'cls': cls,
667
+ 'label': label,
668
+ 'im': save_one_box(box, im, file=file, save=save)})
669
+ else: # all others
670
+ annotator.box_label(box, label if labels else '', color=colors(cls))
671
+ im = annotator.im
672
+ else:
673
+ s += '(no detections)'
674
+
675
+ im = Image.fromarray(im.astype(np.uint8)) if isinstance(im, np.ndarray) else im # from np
676
+ if pprint:
677
+ print(s.rstrip(', '))
678
+ if show:
679
+ im.show(self.files[i]) # show
680
+ if save:
681
+ f = self.files[i]
682
+ im.save(save_dir / f) # save
683
+ if i == self.n - 1:
684
+ LOGGER.info(f"Saved {self.n} image{'s' * (self.n > 1)} to {colorstr('bold', save_dir)}")
685
+ if render:
686
+ self.imgs[i] = np.asarray(im)
687
+ if crop:
688
+ if save:
689
+ LOGGER.info(f'Saved results to {save_dir}\n')
690
+ return crops
691
+
692
+ def print(self):
693
+ self.display(pprint=True) # print results
694
+ print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t)
695
+
696
+ def show(self, labels=True):
697
+ self.display(show=True, labels=labels) # show results
698
+
699
+ def save(self, labels=True, save_dir='runs/detect/exp'):
700
+ save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir
701
+ self.display(save=True, labels=labels, save_dir=save_dir) # save results
702
+
703
+ def crop(self, save=True, save_dir='runs/detect/exp'):
704
+ save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) if save else None
705
+ return self.display(crop=True, save=save, save_dir=save_dir) # crop results
706
+
707
+ def render(self, labels=True):
708
+ self.display(render=True, labels=labels) # render results
709
+ return self.imgs
710
+
711
+ def pandas(self):
712
+ # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0])
713
+ new = copy(self) # return copy
714
+ ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns
715
+ cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns
716
+ for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]):
717
+ a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update
718
+ setattr(new, k, [pd.DataFrame(x, columns=c) for x in a])
719
+ return new
720
+
721
+ def tolist(self):
722
+ # return a list of Detections objects, i.e. 'for result in results.tolist():'
723
+ r = range(self.n) # iterable
724
+ x = [Detections([self.imgs[i]], [self.pred[i]], [self.files[i]], self.times, self.names, self.s) for i in r]
725
+ # for d in x:
726
+ # for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']:
727
+ # setattr(d, k, getattr(d, k)[0]) # pop out of list
728
+ return x
729
+
730
+ def __len__(self):
731
+ return self.n # override len(results)
732
+
733
+ def __str__(self):
734
+ self.print() # override print(results)
735
+ return ''
736
+
737
+
738
+ class Classify(nn.Module):
739
+ # Classification head, i.e. x(b,c1,20,20) to x(b,c2)
740
+ def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups
741
+ super().__init__()
742
+ self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1)
743
+ self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1)
744
+ self.flat = nn.Flatten()
745
+
746
+ def forward(self, x):
747
+ z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list
748
+ return self.flat(self.conv(z)) # flatten to x(b,c2)
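
The two wrapper classes above, `DetectMultiBackend` and `AutoShape`, are the intended entry points for inference. A minimal sketch of how they compose, assuming a local or downloadable `yolov5s.pt` checkpoint and the bundled `data/images/zidane.jpg` sample (illustrative only, not part of this diff):

```python
# Minimal inference sketch (illustrative, not part of this commit).
# Assumes yolov5s.pt is present locally or downloadable by attempt_download().
import torch

from models.common import AutoShape, DetectMultiBackend

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
backend = DetectMultiBackend('yolov5s.pt', device=device)  # backend chosen from the weight suffix
model = AutoShape(backend)                                 # adds pre-processing, inference and NMS

results = model('data/images/zidane.jpg', size=640)        # Detections object
results.print()                                            # per-image summary and timings
print(results.pandas().xyxy[0])                            # image 0 boxes as a DataFrame
```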
models/experimental.py ADDED
@@ -0,0 +1,104 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ Experimental modules
4
+ """
5
+ import math
6
+
7
+ import numpy as np
8
+ import torch
9
+ import torch.nn as nn
10
+
11
+ from models.common import Conv
12
+ from utils.downloads import attempt_download
13
+
14
+
15
+ class Sum(nn.Module):
16
+ # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070
17
+ def __init__(self, n, weight=False): # n: number of inputs
18
+ super().__init__()
19
+ self.weight = weight # apply weights boolean
20
+ self.iter = range(n - 1) # iter object
21
+ if weight:
22
+ self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True) # layer weights
23
+
24
+ def forward(self, x):
25
+ y = x[0] # no weight
26
+ if self.weight:
27
+ w = torch.sigmoid(self.w) * 2
28
+ for i in self.iter:
29
+ y = y + x[i + 1] * w[i]
30
+ else:
31
+ for i in self.iter:
32
+ y = y + x[i + 1]
33
+ return y
34
+
35
+
36
+ class MixConv2d(nn.Module):
37
+ # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595
38
+ def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): # ch_in, ch_out, kernel, stride, ch_strategy
39
+ super().__init__()
40
+ n = len(k) # number of convolutions
41
+ if equal_ch: # equal c_ per group
42
+ i = torch.linspace(0, n - 1E-6, c2).floor() # c2 indices
43
+ c_ = [(i == g).sum() for g in range(n)] # intermediate channels
44
+ else: # equal weight.numel() per group
45
+ b = [c2] + [0] * n
46
+ a = np.eye(n + 1, n, k=-1)
47
+ a -= np.roll(a, 1, axis=1)
48
+ a *= np.array(k) ** 2
49
+ a[0] = 1
50
+ c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b
51
+
52
+ self.m = nn.ModuleList([
53
+ nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)])
54
+ self.bn = nn.BatchNorm2d(c2)
55
+ self.act = nn.SiLU()
56
+
57
+ def forward(self, x):
58
+ return self.act(self.bn(torch.cat([m(x) for m in self.m], 1)))
59
+
60
+
61
+ class Ensemble(nn.ModuleList):
62
+ # Ensemble of models
63
+ def __init__(self):
64
+ super().__init__()
65
+
66
+ def forward(self, x, augment=False, profile=False, visualize=False):
67
+ y = [module(x, augment, profile, visualize)[0] for module in self]
68
+ # y = torch.stack(y).max(0)[0] # max ensemble
69
+ # y = torch.stack(y).mean(0) # mean ensemble
70
+ y = torch.cat(y, 1) # nms ensemble
71
+ return y, None # inference, train output
72
+
73
+
74
+ def attempt_load(weights, device=None, inplace=True, fuse=True):
75
+ # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
76
+ from models.yolo import Detect, Model
77
+
78
+ model = Ensemble()
79
+ for w in weights if isinstance(weights, list) else [weights]:
80
+ ckpt = torch.load(attempt_download(w), map_location='cpu') # load
81
+ ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model
82
+ model.append(ckpt.fuse().eval() if fuse else ckpt.eval()) # fused or un-fused model in eval mode
83
+
84
+ # Compatibility updates
85
+ for m in model.modules():
86
+ t = type(m)
87
+ if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model):
88
+ m.inplace = inplace # torch 1.7.0 compatibility
89
+ if t is Detect and not isinstance(m.anchor_grid, list):
90
+ delattr(m, 'anchor_grid')
91
+ setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl)
92
+ elif t is Conv:
93
+ m._non_persistent_buffers_set = set() # torch 1.6.0 compatibility
94
+ elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'):
95
+ m.recompute_scale_factor = None # torch 1.11.0 compatibility
96
+
97
+ if len(model) == 1:
98
+ return model[-1] # return model
99
+ print(f'Ensemble created with {weights}\n')
100
+ for k in 'names', 'nc', 'yaml':
101
+ setattr(model, k, getattr(model[0], k))
102
+ model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride
103
+ assert all(model[0].nc == m.nc for m in model), f'Models have different class counts: {[m.nc for m in model]}'
104
+ return model # return ensemble
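
`attempt_load()` is the loader used throughout the repo; passing a list of weights returns an `Ensemble` whose per-model outputs are concatenated before NMS. A hedged sketch follows (the weight filenames are placeholders that must resolve locally or via `attempt_download`):

```python
# Ensemble loading sketch (illustrative, not part of this commit).
import torch

from models.experimental import attempt_load

device = torch.device('cpu')
single = attempt_load('yolov5s.pt', device=device)                    # one fused FP32 model in eval mode
ensemble = attempt_load(['yolov5s.pt', 'yolov5m.pt'], device=device)  # Ensemble of two models

x = torch.zeros(1, 3, 640, 640)  # dummy BCHW input
with torch.no_grad():
    pred = ensemble(x)[0]        # forward returns (inference, None); boxes from both models are concatenated
print(pred.shape)
```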
models/hub/anchors.yaml ADDED
@@ -0,0 +1,59 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ # Default anchors for COCO data
3
+
4
+
5
+ # P5 -------------------------------------------------------------------------------------------------------------------
6
+ # P5-640:
7
+ anchors_p5_640:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+
13
+ # P6 -------------------------------------------------------------------------------------------------------------------
14
+ # P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
15
+ anchors_p6_640:
16
+ - [9,11, 21,19, 17,41] # P3/8
17
+ - [43,32, 39,70, 86,64] # P4/16
18
+ - [65,131, 134,130, 120,265] # P5/32
19
+ - [282,180, 247,354, 512,387] # P6/64
20
+
21
+ # P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
22
+ anchors_p6_1280:
23
+ - [19,27, 44,40, 38,94] # P3/8
24
+ - [96,68, 86,152, 180,137] # P4/16
25
+ - [140,301, 303,264, 238,542] # P5/32
26
+ - [436,615, 739,380, 925,792] # P6/64
27
+
28
+ # P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
29
+ anchors_p6_1920:
30
+ - [28,41, 67,59, 57,141] # P3/8
31
+ - [144,103, 129,227, 270,205] # P4/16
32
+ - [209,452, 455,396, 358,812] # P5/32
33
+ - [653,922, 1109,570, 1387,1187] # P6/64
34
+
35
+
36
+ # P7 -------------------------------------------------------------------------------------------------------------------
37
+ # P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
38
+ anchors_p7_640:
39
+ - [11,11, 13,30, 29,20] # P3/8
40
+ - [30,46, 61,38, 39,92] # P4/16
41
+ - [78,80, 146,66, 79,163] # P5/32
42
+ - [149,150, 321,143, 157,303] # P6/64
43
+ - [257,402, 359,290, 524,372] # P7/128
44
+
45
+ # P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
46
+ anchors_p7_1280:
47
+ - [19,22, 54,36, 32,77] # P3/8
48
+ - [70,83, 138,71, 75,173] # P4/16
49
+ - [165,159, 148,334, 375,151] # P5/32
50
+ - [334,317, 251,626, 499,474] # P6/64
51
+ - [750,326, 534,814, 1079,818] # P7/128
52
+
53
+ # P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
54
+ anchors_p7_1920:
55
+ - [29,34, 81,55, 47,115] # P3/8
56
+ - [105,124, 207,107, 113,259] # P4/16
57
+ - [247,238, 222,500, 563,227] # P5/32
58
+ - [501,476, 376,939, 749,711] # P6/64
59
+ - [1126,489, 801,1222, 1618,1227] # P7/128
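
These named presets are plain YAML data; each key maps to one list of flattened `(w, h)` anchor pairs per output layer. A small inspection sketch (assumes it is run from the repository root):

```python
# Anchor inspection sketch (illustrative, not part of this commit).
import yaml

with open('models/hub/anchors.yaml', errors='ignore') as f:
    anchor_sets = yaml.safe_load(f)  # comments are ignored by the YAML parser

for name, layers in anchor_sets.items():
    per_layer = [len(layer) // 2 for layer in layers]  # each layer holds flattened w,h pairs
    print(f'{name}: {len(layers)} output layers, anchors per layer: {per_layer}')
```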
models/hub/yolov3-spp.yaml ADDED
@@ -0,0 +1,51 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # darknet53 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [32, 3, 1]], # 0
16
+ [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
17
+ [-1, 1, Bottleneck, [64]],
18
+ [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
19
+ [-1, 2, Bottleneck, [128]],
20
+ [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
21
+ [-1, 8, Bottleneck, [256]],
22
+ [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
23
+ [-1, 8, Bottleneck, [512]],
24
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
25
+ [-1, 4, Bottleneck, [1024]], # 10
26
+ ]
27
+
28
+ # YOLOv3-SPP head
29
+ head:
30
+ [[-1, 1, Bottleneck, [1024, False]],
31
+ [-1, 1, SPP, [512, [5, 9, 13]]],
32
+ [-1, 1, Conv, [1024, 3, 1]],
33
+ [-1, 1, Conv, [512, 1, 1]],
34
+ [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
35
+
36
+ [-2, 1, Conv, [256, 1, 1]],
37
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
38
+ [[-1, 8], 1, Concat, [1]], # cat backbone P4
39
+ [-1, 1, Bottleneck, [512, False]],
40
+ [-1, 1, Bottleneck, [512, False]],
41
+ [-1, 1, Conv, [256, 1, 1]],
42
+ [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
43
+
44
+ [-2, 1, Conv, [128, 1, 1]],
45
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
46
+ [[-1, 6], 1, Concat, [1]], # cat backbone P3
47
+ [-1, 1, Bottleneck, [256, False]],
48
+ [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
49
+
50
+ [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
51
+ ]
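
Each of the `models/hub/*.yaml` files in this commit follows the same schema (parameters, `backbone`, `head`) and is parsed into a network by `models/yolo.py`. A hedged build sketch, using this yolov3-spp config as an example and assuming the repository root is on `PYTHONPATH`:

```python
# Model-from-yaml sketch (illustrative, not part of this commit).
import torch

from models.yolo import Model

model = Model('models/hub/yolov3-spp.yaml', ch=3, nc=80)  # build backbone + head from the lists above
model.eval()

x = torch.zeros(1, 3, 640, 640)      # dummy input
with torch.no_grad():
    pred = model(x)[0]               # eval-mode forward returns (inference, raw); take the inference tensor
print(pred.shape)                    # (1, num_predictions, 5 + nc)
```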
models/hub/yolov3-tiny.yaml ADDED
@@ -0,0 +1,41 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,14, 23,27, 37,58] # P4/16
9
+ - [81,82, 135,169, 344,319] # P5/32
10
+
11
+ # YOLOv3-tiny backbone
12
+ backbone:
13
+ # [from, number, module, args]
14
+ [[-1, 1, Conv, [16, 3, 1]], # 0
15
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 1-P1/2
16
+ [-1, 1, Conv, [32, 3, 1]],
17
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 3-P2/4
18
+ [-1, 1, Conv, [64, 3, 1]],
19
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 5-P3/8
20
+ [-1, 1, Conv, [128, 3, 1]],
21
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 7-P4/16
22
+ [-1, 1, Conv, [256, 3, 1]],
23
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 9-P5/32
24
+ [-1, 1, Conv, [512, 3, 1]],
25
+ [-1, 1, nn.ZeroPad2d, [[0, 1, 0, 1]]], # 11
26
+ [-1, 1, nn.MaxPool2d, [2, 1, 0]], # 12
27
+ ]
28
+
29
+ # YOLOv3-tiny head
30
+ head:
31
+ [[-1, 1, Conv, [1024, 3, 1]],
32
+ [-1, 1, Conv, [256, 1, 1]],
33
+ [-1, 1, Conv, [512, 3, 1]], # 15 (P5/32-large)
34
+
35
+ [-2, 1, Conv, [128, 1, 1]],
36
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
37
+ [[-1, 8], 1, Concat, [1]], # cat backbone P4
38
+ [-1, 1, Conv, [256, 3, 1]], # 19 (P4/16-medium)
39
+
40
+ [[19, 15], 1, Detect, [nc, anchors]], # Detect(P4, P5)
41
+ ]
models/hub/yolov3.yaml ADDED
@@ -0,0 +1,51 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # darknet53 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [32, 3, 1]], # 0
16
+ [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
17
+ [-1, 1, Bottleneck, [64]],
18
+ [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
19
+ [-1, 2, Bottleneck, [128]],
20
+ [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
21
+ [-1, 8, Bottleneck, [256]],
22
+ [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
23
+ [-1, 8, Bottleneck, [512]],
24
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
25
+ [-1, 4, Bottleneck, [1024]], # 10
26
+ ]
27
+
28
+ # YOLOv3 head
29
+ head:
30
+ [[-1, 1, Bottleneck, [1024, False]],
31
+ [-1, 1, Conv, [512, 1, 1]],
32
+ [-1, 1, Conv, [1024, 3, 1]],
33
+ [-1, 1, Conv, [512, 1, 1]],
34
+ [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)
35
+
36
+ [-2, 1, Conv, [256, 1, 1]],
37
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
38
+ [[-1, 8], 1, Concat, [1]], # cat backbone P4
39
+ [-1, 1, Bottleneck, [512, False]],
40
+ [-1, 1, Bottleneck, [512, False]],
41
+ [-1, 1, Conv, [256, 1, 1]],
42
+ [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)
43
+
44
+ [-2, 1, Conv, [128, 1, 1]],
45
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
46
+ [[-1, 6], 1, Concat, [1]], # cat backbone P3
47
+ [-1, 1, Bottleneck, [256, False]],
48
+ [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)
49
+
50
+ [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
51
+ ]
models/hub/yolov5-bifpn.yaml ADDED
@@ -0,0 +1,48 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3, [128]],
18
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3, [256]],
20
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3, [512]],
22
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3, [1024]],
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 BiFPN head
28
+ head:
29
+ [[-1, 1, Conv, [512, 1, 1]],
30
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
31
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
32
+ [-1, 3, C3, [512, False]], # 13
33
+
34
+ [-1, 1, Conv, [256, 1, 1]],
35
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
36
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
37
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
38
+
39
+ [-1, 1, Conv, [256, 3, 2]],
40
+ [[-1, 14, 6], 1, Concat, [1]], # cat P4 <--- BiFPN change
41
+ [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
42
+
43
+ [-1, 1, Conv, [512, 3, 2]],
44
+ [[-1, 10], 1, Concat, [1]], # cat head P5
45
+ [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
46
+
47
+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
48
+ ]
models/hub/yolov5-fpn.yaml ADDED
@@ -0,0 +1,42 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3, [128]],
18
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3, [256]],
20
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3, [512]],
22
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3, [1024]],
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 FPN head
28
+ head:
29
+ [[-1, 3, C3, [1024, False]], # 10 (P5/32-large)
30
+
31
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
32
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
33
+ [-1, 1, Conv, [512, 1, 1]],
34
+ [-1, 3, C3, [512, False]], # 14 (P4/16-medium)
35
+
36
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
37
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
38
+ [-1, 1, Conv, [256, 1, 1]],
39
+ [-1, 3, C3, [256, False]], # 18 (P3/8-small)
40
+
41
+ [[18, 14, 10], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
42
+ ]
models/hub/yolov5-p2.yaml ADDED
@@ -0,0 +1,54 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
8
+
9
+ # YOLOv5 v6.0 backbone
10
+ backbone:
11
+ # [from, number, module, args]
12
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
13
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
14
+ [-1, 3, C3, [128]],
15
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
16
+ [-1, 6, C3, [256]],
17
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
18
+ [-1, 9, C3, [512]],
19
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
20
+ [-1, 3, C3, [1024]],
21
+ [-1, 1, SPPF, [1024, 5]], # 9
22
+ ]
23
+
24
+ # YOLOv5 v6.0 head with (P2, P3, P4, P5) outputs
25
+ head:
26
+ [[-1, 1, Conv, [512, 1, 1]],
27
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
28
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
29
+ [-1, 3, C3, [512, False]], # 13
30
+
31
+ [-1, 1, Conv, [256, 1, 1]],
32
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
33
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
34
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
35
+
36
+ [-1, 1, Conv, [128, 1, 1]],
37
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
38
+ [[-1, 2], 1, Concat, [1]], # cat backbone P2
39
+ [-1, 1, C3, [128, False]], # 21 (P2/4-xsmall)
40
+
41
+ [-1, 1, Conv, [128, 3, 2]],
42
+ [[-1, 18], 1, Concat, [1]], # cat head P3
43
+ [-1, 3, C3, [256, False]], # 24 (P3/8-small)
44
+
45
+ [-1, 1, Conv, [256, 3, 2]],
46
+ [[-1, 14], 1, Concat, [1]], # cat head P4
47
+ [-1, 3, C3, [512, False]], # 27 (P4/16-medium)
48
+
49
+ [-1, 1, Conv, [512, 3, 2]],
50
+ [[-1, 10], 1, Concat, [1]], # cat head P5
51
+ [-1, 3, C3, [1024, False]], # 30 (P5/32-large)
52
+
53
+ [[21, 24, 27, 30], 1, Detect, [nc, anchors]], # Detect(P2, P3, P4, P5)
54
+ ]
models/hub/yolov5-p34.yaml ADDED
@@ -0,0 +1,41 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.33 # model depth multiple
6
+ width_multiple: 0.50 # layer channel multiple
7
+ anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
8
+
9
+ # YOLOv5 v6.0 backbone
10
+ backbone:
11
+ # [from, number, module, args]
12
+ [ [ -1, 1, Conv, [ 64, 6, 2, 2 ] ], # 0-P1/2
13
+ [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
14
+ [ -1, 3, C3, [ 128 ] ],
15
+ [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
16
+ [ -1, 6, C3, [ 256 ] ],
17
+ [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
18
+ [ -1, 9, C3, [ 512 ] ],
19
+ [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
20
+ [ -1, 3, C3, [ 1024 ] ],
21
+ [ -1, 1, SPPF, [ 1024, 5 ] ], # 9
22
+ ]
23
+
24
+ # YOLOv5 v6.0 head with (P3, P4) outputs
25
+ head:
26
+ [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
27
+ [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
28
+ [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
29
+ [ -1, 3, C3, [ 512, False ] ], # 13
30
+
31
+ [ -1, 1, Conv, [ 256, 1, 1 ] ],
32
+ [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
33
+ [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
34
+ [ -1, 3, C3, [ 256, False ] ], # 17 (P3/8-small)
35
+
36
+ [ -1, 1, Conv, [ 256, 3, 2 ] ],
37
+ [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
38
+ [ -1, 3, C3, [ 512, False ] ], # 20 (P4/16-medium)
39
+
40
+ [ [ 17, 20 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4)
41
+ ]
models/hub/yolov5-p6.yaml ADDED
@@ -0,0 +1,56 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
8
+
9
+ # YOLOv5 v6.0 backbone
10
+ backbone:
11
+ # [from, number, module, args]
12
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
13
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
14
+ [-1, 3, C3, [128]],
15
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
16
+ [-1, 6, C3, [256]],
17
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
18
+ [-1, 9, C3, [512]],
19
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
20
+ [-1, 3, C3, [768]],
21
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
22
+ [-1, 3, C3, [1024]],
23
+ [-1, 1, SPPF, [1024, 5]], # 11
24
+ ]
25
+
26
+ # YOLOv5 v6.0 head with (P3, P4, P5, P6) outputs
27
+ head:
28
+ [[-1, 1, Conv, [768, 1, 1]],
29
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
30
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
31
+ [-1, 3, C3, [768, False]], # 15
32
+
33
+ [-1, 1, Conv, [512, 1, 1]],
34
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
35
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
36
+ [-1, 3, C3, [512, False]], # 19
37
+
38
+ [-1, 1, Conv, [256, 1, 1]],
39
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
40
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
41
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
42
+
43
+ [-1, 1, Conv, [256, 3, 2]],
44
+ [[-1, 20], 1, Concat, [1]], # cat head P4
45
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
46
+
47
+ [-1, 1, Conv, [512, 3, 2]],
48
+ [[-1, 16], 1, Concat, [1]], # cat head P5
49
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
50
+
51
+ [-1, 1, Conv, [768, 3, 2]],
52
+ [[-1, 12], 1, Concat, [1]], # cat head P6
53
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
54
+
55
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
56
+ ]
models/hub/yolov5-p7.yaml ADDED
@@ -0,0 +1,67 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors: 3 # AutoAnchor evolves 3 anchors per P output layer
8
+
9
+ # YOLOv5 v6.0 backbone
10
+ backbone:
11
+ # [from, number, module, args]
12
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
13
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
14
+ [-1, 3, C3, [128]],
15
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
16
+ [-1, 6, C3, [256]],
17
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
18
+ [-1, 9, C3, [512]],
19
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
20
+ [-1, 3, C3, [768]],
21
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
22
+ [-1, 3, C3, [1024]],
23
+ [-1, 1, Conv, [1280, 3, 2]], # 11-P7/128
24
+ [-1, 3, C3, [1280]],
25
+ [-1, 1, SPPF, [1280, 5]], # 13
26
+ ]
27
+
28
+ # YOLOv5 v6.0 head with (P3, P4, P5, P6, P7) outputs
29
+ head:
30
+ [[-1, 1, Conv, [1024, 1, 1]],
31
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
32
+ [[-1, 10], 1, Concat, [1]], # cat backbone P6
33
+ [-1, 3, C3, [1024, False]], # 17
34
+
35
+ [-1, 1, Conv, [768, 1, 1]],
36
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
37
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
38
+ [-1, 3, C3, [768, False]], # 21
39
+
40
+ [-1, 1, Conv, [512, 1, 1]],
41
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
42
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
43
+ [-1, 3, C3, [512, False]], # 25
44
+
45
+ [-1, 1, Conv, [256, 1, 1]],
46
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
47
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
48
+ [-1, 3, C3, [256, False]], # 29 (P3/8-small)
49
+
50
+ [-1, 1, Conv, [256, 3, 2]],
51
+ [[-1, 26], 1, Concat, [1]], # cat head P4
52
+ [-1, 3, C3, [512, False]], # 32 (P4/16-medium)
53
+
54
+ [-1, 1, Conv, [512, 3, 2]],
55
+ [[-1, 22], 1, Concat, [1]], # cat head P5
56
+ [-1, 3, C3, [768, False]], # 35 (P5/32-large)
57
+
58
+ [-1, 1, Conv, [768, 3, 2]],
59
+ [[-1, 18], 1, Concat, [1]], # cat head P6
60
+ [-1, 3, C3, [1024, False]], # 38 (P6/64-xlarge)
61
+
62
+ [-1, 1, Conv, [1024, 3, 2]],
63
+ [[-1, 14], 1, Concat, [1]], # cat head P7
64
+ [-1, 3, C3, [1280, False]], # 41 (P7/128-xxlarge)
65
+
66
+ [[29, 32, 35, 38, 41], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6, P7)
67
+ ]
models/hub/yolov5-panet.yaml ADDED
@@ -0,0 +1,48 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3, [128]],
18
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3, [256]],
20
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3, [512]],
22
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3, [1024]],
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 PANet head
28
+ head:
29
+ [[-1, 1, Conv, [512, 1, 1]],
30
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
31
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
32
+ [-1, 3, C3, [512, False]], # 13
33
+
34
+ [-1, 1, Conv, [256, 1, 1]],
35
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
36
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
37
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
38
+
39
+ [-1, 1, Conv, [256, 3, 2]],
40
+ [[-1, 14], 1, Concat, [1]], # cat head P4
41
+ [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
42
+
43
+ [-1, 1, Conv, [512, 3, 2]],
44
+ [[-1, 10], 1, Concat, [1]], # cat head P5
45
+ [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
46
+
47
+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
48
+ ]
models/hub/yolov5l6.yaml ADDED
@@ -0,0 +1,60 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [19,27, 44,40, 38,94] # P3/8
9
+ - [96,68, 86,152, 180,137] # P4/16
10
+ - [140,301, 303,264, 238,542] # P5/32
11
+ - [436,615, 739,380, 925,792] # P6/64
12
+
13
+ # YOLOv5 v6.0 backbone
14
+ backbone:
15
+ # [from, number, module, args]
16
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
17
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
18
+ [-1, 3, C3, [128]],
19
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
20
+ [-1, 6, C3, [256]],
21
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
22
+ [-1, 9, C3, [512]],
23
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
24
+ [-1, 3, C3, [768]],
25
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
26
+ [-1, 3, C3, [1024]],
27
+ [-1, 1, SPPF, [1024, 5]], # 11
28
+ ]
29
+
30
+ # YOLOv5 v6.0 head
31
+ head:
32
+ [[-1, 1, Conv, [768, 1, 1]],
33
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
35
+ [-1, 3, C3, [768, False]], # 15
36
+
37
+ [-1, 1, Conv, [512, 1, 1]],
38
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
40
+ [-1, 3, C3, [512, False]], # 19
41
+
42
+ [-1, 1, Conv, [256, 1, 1]],
43
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
44
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
45
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
46
+
47
+ [-1, 1, Conv, [256, 3, 2]],
48
+ [[-1, 20], 1, Concat, [1]], # cat head P4
49
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
50
+
51
+ [-1, 1, Conv, [512, 3, 2]],
52
+ [[-1, 16], 1, Concat, [1]], # cat head P5
53
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
54
+
55
+ [-1, 1, Conv, [768, 3, 2]],
56
+ [[-1, 12], 1, Concat, [1]], # cat head P6
57
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
58
+
59
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
60
+ ]
models/hub/yolov5m6.yaml ADDED
@@ -0,0 +1,60 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.67 # model depth multiple
6
+ width_multiple: 0.75 # layer channel multiple
7
+ anchors:
8
+ - [19,27, 44,40, 38,94] # P3/8
9
+ - [96,68, 86,152, 180,137] # P4/16
10
+ - [140,301, 303,264, 238,542] # P5/32
11
+ - [436,615, 739,380, 925,792] # P6/64
12
+
13
+ # YOLOv5 v6.0 backbone
14
+ backbone:
15
+ # [from, number, module, args]
16
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
17
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
18
+ [-1, 3, C3, [128]],
19
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
20
+ [-1, 6, C3, [256]],
21
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
22
+ [-1, 9, C3, [512]],
23
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
24
+ [-1, 3, C3, [768]],
25
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
26
+ [-1, 3, C3, [1024]],
27
+ [-1, 1, SPPF, [1024, 5]], # 11
28
+ ]
29
+
30
+ # YOLOv5 v6.0 head
31
+ head:
32
+ [[-1, 1, Conv, [768, 1, 1]],
33
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
35
+ [-1, 3, C3, [768, False]], # 15
36
+
37
+ [-1, 1, Conv, [512, 1, 1]],
38
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
40
+ [-1, 3, C3, [512, False]], # 19
41
+
42
+ [-1, 1, Conv, [256, 1, 1]],
43
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
44
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
45
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
46
+
47
+ [-1, 1, Conv, [256, 3, 2]],
48
+ [[-1, 20], 1, Concat, [1]], # cat head P4
49
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
50
+
51
+ [-1, 1, Conv, [512, 3, 2]],
52
+ [[-1, 16], 1, Concat, [1]], # cat head P5
53
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
54
+
55
+ [-1, 1, Conv, [768, 3, 2]],
56
+ [[-1, 12], 1, Concat, [1]], # cat head P6
57
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
58
+
59
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
60
+ ]
models/hub/yolov5n6.yaml ADDED
@@ -0,0 +1,60 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.33 # model depth multiple
6
+ width_multiple: 0.25 # layer channel multiple
7
+ anchors:
8
+ - [19,27, 44,40, 38,94] # P3/8
9
+ - [96,68, 86,152, 180,137] # P4/16
10
+ - [140,301, 303,264, 238,542] # P5/32
11
+ - [436,615, 739,380, 925,792] # P6/64
12
+
13
+ # YOLOv5 v6.0 backbone
14
+ backbone:
15
+ # [from, number, module, args]
16
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
17
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
18
+ [-1, 3, C3, [128]],
19
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
20
+ [-1, 6, C3, [256]],
21
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
22
+ [-1, 9, C3, [512]],
23
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
24
+ [-1, 3, C3, [768]],
25
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
26
+ [-1, 3, C3, [1024]],
27
+ [-1, 1, SPPF, [1024, 5]], # 11
28
+ ]
29
+
30
+ # YOLOv5 v6.0 head
31
+ head:
32
+ [[-1, 1, Conv, [768, 1, 1]],
33
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
35
+ [-1, 3, C3, [768, False]], # 15
36
+
37
+ [-1, 1, Conv, [512, 1, 1]],
38
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
40
+ [-1, 3, C3, [512, False]], # 19
41
+
42
+ [-1, 1, Conv, [256, 1, 1]],
43
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
44
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
45
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
46
+
47
+ [-1, 1, Conv, [256, 3, 2]],
48
+ [[-1, 20], 1, Concat, [1]], # cat head P4
49
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
50
+
51
+ [-1, 1, Conv, [512, 3, 2]],
52
+ [[-1, 16], 1, Concat, [1]], # cat head P5
53
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
54
+
55
+ [-1, 1, Conv, [768, 3, 2]],
56
+ [[-1, 12], 1, Concat, [1]], # cat head P6
57
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
58
+
59
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
60
+ ]
models/hub/yolov5s-ghost.yaml ADDED
@@ -0,0 +1,48 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.33 # model depth multiple
6
+ width_multiple: 0.50 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, GhostConv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3Ghost, [128]],
18
+ [-1, 1, GhostConv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3Ghost, [256]],
20
+ [-1, 1, GhostConv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3Ghost, [512]],
22
+ [-1, 1, GhostConv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3Ghost, [1024]],
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 head
28
+ head:
29
+ [[-1, 1, GhostConv, [512, 1, 1]],
30
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
31
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
32
+ [-1, 3, C3Ghost, [512, False]], # 13
33
+
34
+ [-1, 1, GhostConv, [256, 1, 1]],
35
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
36
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
37
+ [-1, 3, C3Ghost, [256, False]], # 17 (P3/8-small)
38
+
39
+ [-1, 1, GhostConv, [256, 3, 2]],
40
+ [[-1, 14], 1, Concat, [1]], # cat head P4
41
+ [-1, 3, C3Ghost, [512, False]], # 20 (P4/16-medium)
42
+
43
+ [-1, 1, GhostConv, [512, 3, 2]],
44
+ [[-1, 10], 1, Concat, [1]], # cat head P5
45
+ [-1, 3, C3Ghost, [1024, False]], # 23 (P5/32-large)
46
+
47
+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
48
+ ]
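
The Ghost variant above only swaps `Conv`/`C3` for `GhostConv`/`C3Ghost` from `models/common.py`, trading a little accuracy for fewer parameters and FLOPs. A quick parameter-count comparison (illustrative; the channel sizes are arbitrary):

```python
# GhostConv vs Conv parameter count (illustrative, not part of this commit).
import torch.nn as nn

from models.common import Conv, GhostConv

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

print('Conv(128, 256, k=3)      params:', n_params(Conv(128, 256, 3)))
print('GhostConv(128, 256, k=3) params:', n_params(GhostConv(128, 256, 3)))  # roughly half
```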
models/hub/yolov5s-transformer.yaml ADDED
@@ -0,0 +1,48 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.33 # model depth multiple
6
+ width_multiple: 0.50 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3, [128]],
18
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3, [256]],
20
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3, [512]],
22
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3TR, [1024]], # 8 <--- C3TR() Transformer module
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 head
28
+ head:
29
+ [[-1, 1, Conv, [512, 1, 1]],
30
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
31
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
32
+ [-1, 3, C3, [512, False]], # 13
33
+
34
+ [-1, 1, Conv, [256, 1, 1]],
35
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
36
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
37
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
38
+
39
+ [-1, 1, Conv, [256, 3, 2]],
40
+ [[-1, 14], 1, Concat, [1]], # cat head P4
41
+ [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
42
+
43
+ [-1, 1, Conv, [512, 3, 2]],
44
+ [[-1, 10], 1, Concat, [1]], # cat head P5
45
+ [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
46
+
47
+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
48
+ ]
models/hub/yolov5s6.yaml ADDED
@@ -0,0 +1,60 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.33 # model depth multiple
6
+ width_multiple: 0.50 # layer channel multiple
7
+ anchors:
8
+ - [19,27, 44,40, 38,94] # P3/8
9
+ - [96,68, 86,152, 180,137] # P4/16
10
+ - [140,301, 303,264, 238,542] # P5/32
11
+ - [436,615, 739,380, 925,792] # P6/64
12
+
13
+ # YOLOv5 v6.0 backbone
14
+ backbone:
15
+ # [from, number, module, args]
16
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
17
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
18
+ [-1, 3, C3, [128]],
19
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
20
+ [-1, 6, C3, [256]],
21
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
22
+ [-1, 9, C3, [512]],
23
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
24
+ [-1, 3, C3, [768]],
25
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
26
+ [-1, 3, C3, [1024]],
27
+ [-1, 1, SPPF, [1024, 5]], # 11
28
+ ]
29
+
30
+ # YOLOv5 v6.0 head
31
+ head:
32
+ [[-1, 1, Conv, [768, 1, 1]],
33
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
35
+ [-1, 3, C3, [768, False]], # 15
36
+
37
+ [-1, 1, Conv, [512, 1, 1]],
38
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
40
+ [-1, 3, C3, [512, False]], # 19
41
+
42
+ [-1, 1, Conv, [256, 1, 1]],
43
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
44
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
45
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
46
+
47
+ [-1, 1, Conv, [256, 3, 2]],
48
+ [[-1, 20], 1, Concat, [1]], # cat head P4
49
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
50
+
51
+ [-1, 1, Conv, [512, 3, 2]],
52
+ [[-1, 16], 1, Concat, [1]], # cat head P5
53
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
54
+
55
+ [-1, 1, Conv, [768, 3, 2]],
56
+ [[-1, 12], 1, Concat, [1]], # cat head P6
57
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
58
+
59
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
60
+ ]
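
yolov5s6 extends the standard layout with an extra P6/64 backbone stage (layers 9-10), a fourth anchor set, and a fourth Detect output; these P6 models are normally trained and run at 1280 px rather than 640. A hedged usage sketch via torch.hub (weights are downloaded on first call; network access assumed):

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s6')   # P6 model, four output strides
results = model('data/images/zidane.jpg', size=1280)        # P6 models favour 1280px inference
results.print()
```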
models/hub/yolov5x6.yaml ADDED
@@ -0,0 +1,60 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.33 # model depth multiple
6
+ width_multiple: 1.25 # layer channel multiple
7
+ anchors:
8
+ - [19,27, 44,40, 38,94] # P3/8
9
+ - [96,68, 86,152, 180,137] # P4/16
10
+ - [140,301, 303,264, 238,542] # P5/32
11
+ - [436,615, 739,380, 925,792] # P6/64
12
+
13
+ # YOLOv5 v6.0 backbone
14
+ backbone:
15
+ # [from, number, module, args]
16
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
17
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
18
+ [-1, 3, C3, [128]],
19
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
20
+ [-1, 6, C3, [256]],
21
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
22
+ [-1, 9, C3, [512]],
23
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
24
+ [-1, 3, C3, [768]],
25
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
26
+ [-1, 3, C3, [1024]],
27
+ [-1, 1, SPPF, [1024, 5]], # 11
28
+ ]
29
+
30
+ # YOLOv5 v6.0 head
31
+ head:
32
+ [[-1, 1, Conv, [768, 1, 1]],
33
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
34
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
35
+ [-1, 3, C3, [768, False]], # 15
36
+
37
+ [-1, 1, Conv, [512, 1, 1]],
38
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
39
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
40
+ [-1, 3, C3, [512, False]], # 19
41
+
42
+ [-1, 1, Conv, [256, 1, 1]],
43
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
44
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
45
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
46
+
47
+ [-1, 1, Conv, [256, 3, 2]],
48
+ [[-1, 20], 1, Concat, [1]], # cat head P4
49
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
50
+
51
+ [-1, 1, Conv, [512, 3, 2]],
52
+ [[-1, 16], 1, Concat, [1]], # cat head P5
53
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
54
+
55
+ [-1, 1, Conv, [768, 3, 2]],
56
+ [[-1, 12], 1, Concat, [1]], # cat head P6
57
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
58
+
59
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
60
+ ]
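
yolov5x6 shares the exact layer list with yolov5s6 above; only depth_multiple and width_multiple differ. The scaling happens in parse_model (models/yolo.py): repeat counts are multiplied by depth_multiple and rounded, channel counts by width_multiple and snapped to a multiple of 8. A small worked example of that arithmetic for the x6 multiples:

```python
import math

def make_divisible(x, divisor=8):       # mirrors utils.general.make_divisible
    return math.ceil(x / divisor) * divisor

gd, gw = 1.33, 1.25                     # yolov5x6 depth/width multiples
print(max(round(9 * gd), 1))            # the 9-repeat C3 stage -> 12 repeats
print(make_divisible(1024 * gw, 8))     # 1024-channel layers -> 1280 channels
print(make_divisible(768 * gw, 8))      # 768-channel layers  -> 960 channels
```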
models/tf.py ADDED
@@ -0,0 +1,574 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ TensorFlow, Keras and TFLite versions of YOLOv5
4
+ Authored by https://github.com/zldrobit in PR https://github.com/ultralytics/yolov5/pull/1127
5
+
6
+ Usage:
7
+ $ python models/tf.py --weights yolov5s.pt
8
+
9
+ Export:
10
+ $ python path/to/export.py --weights yolov5s.pt --include saved_model pb tflite tfjs
11
+ """
12
+
13
+ import argparse
14
+ import sys
15
+ from copy import deepcopy
16
+ from pathlib import Path
17
+
18
+ FILE = Path(__file__).resolve()
19
+ ROOT = FILE.parents[1] # YOLOv5 root directory
20
+ if str(ROOT) not in sys.path:
21
+ sys.path.append(str(ROOT)) # add ROOT to PATH
22
+ # ROOT = ROOT.relative_to(Path.cwd()) # relative
23
+
24
+ import numpy as np
25
+ import tensorflow as tf
26
+ import torch
27
+ import torch.nn as nn
28
+ from tensorflow import keras
29
+
30
+ from models.common import (C3, SPP, SPPF, Bottleneck, BottleneckCSP, C3x, Concat, Conv, CrossConv, DWConv,
31
+ DWConvTranspose2d, Focus, autopad)
32
+ from models.experimental import MixConv2d, attempt_load
33
+ from models.yolo import Detect
34
+ from utils.activations import SiLU
35
+ from utils.general import LOGGER, make_divisible, print_args
36
+
37
+
38
+ class TFBN(keras.layers.Layer):
39
+ # TensorFlow BatchNormalization wrapper
40
+ def __init__(self, w=None):
41
+ super().__init__()
42
+ self.bn = keras.layers.BatchNormalization(
43
+ beta_initializer=keras.initializers.Constant(w.bias.numpy()),
44
+ gamma_initializer=keras.initializers.Constant(w.weight.numpy()),
45
+ moving_mean_initializer=keras.initializers.Constant(w.running_mean.numpy()),
46
+ moving_variance_initializer=keras.initializers.Constant(w.running_var.numpy()),
47
+ epsilon=w.eps)
48
+
49
+ def call(self, inputs):
50
+ return self.bn(inputs)
51
+
52
+
53
+ class TFPad(keras.layers.Layer):
54
+ # Pad inputs in spatial dimensions 1 and 2
55
+ def __init__(self, pad):
56
+ super().__init__()
57
+ if isinstance(pad, int):
58
+ self.pad = tf.constant([[0, 0], [pad, pad], [pad, pad], [0, 0]])
59
+ else: # tuple/list
60
+ self.pad = tf.constant([[0, 0], [pad[0], pad[0]], [pad[1], pad[1]], [0, 0]])
61
+
62
+ def call(self, inputs):
63
+ return tf.pad(inputs, self.pad, mode='constant', constant_values=0)
64
+
65
+
66
+ class TFConv(keras.layers.Layer):
67
+ # Standard convolution
68
+ def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
69
+ # ch_in, ch_out, weights, kernel, stride, padding, groups
70
+ super().__init__()
71
+ assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
72
+ # TensorFlow convolution padding is inconsistent with PyTorch (e.g. k=3 s=2 'SAME' padding)
73
+ # see https://stackoverflow.com/questions/52975843/comparing-conv2d-with-padding-between-tensorflow-and-pytorch
74
+ conv = keras.layers.Conv2D(
75
+ filters=c2,
76
+ kernel_size=k,
77
+ strides=s,
78
+ padding='SAME' if s == 1 else 'VALID',
79
+ use_bias=not hasattr(w, 'bn'),
80
+ kernel_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
81
+ bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
82
+ self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
83
+ self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
84
+ self.act = activations(w.act) if act else tf.identity
85
+
86
+ def call(self, inputs):
87
+ return self.act(self.bn(self.conv(inputs)))
88
+
89
+
90
+ class TFDWConv(keras.layers.Layer):
91
+ # Depthwise convolution
92
+ def __init__(self, c1, c2, k=1, s=1, p=None, act=True, w=None):
93
+ # ch_in, ch_out, weights, kernel, stride, padding, groups
94
+ super().__init__()
95
+ assert c2 % c1 == 0, f'TFDWConv() output={c2} must be a multiple of input={c1} channels'
96
+ conv = keras.layers.DepthwiseConv2D(
97
+ kernel_size=k,
98
+ depth_multiplier=c2 // c1,
99
+ strides=s,
100
+ padding='SAME' if s == 1 else 'VALID',
101
+ use_bias=not hasattr(w, 'bn'),
102
+ depthwise_initializer=keras.initializers.Constant(w.conv.weight.permute(2, 3, 1, 0).numpy()),
103
+ bias_initializer='zeros' if hasattr(w, 'bn') else keras.initializers.Constant(w.conv.bias.numpy()))
104
+ self.conv = conv if s == 1 else keras.Sequential([TFPad(autopad(k, p)), conv])
105
+ self.bn = TFBN(w.bn) if hasattr(w, 'bn') else tf.identity
106
+ self.act = activations(w.act) if act else tf.identity
107
+
108
+ def call(self, inputs):
109
+ return self.act(self.bn(self.conv(inputs)))
110
+
111
+
112
+ class TFDWConvTranspose2d(keras.layers.Layer):
113
+ # Depthwise ConvTranspose2d
114
+ def __init__(self, c1, c2, k=1, s=1, p1=0, p2=0, w=None):
115
+ # ch_in, ch_out, weights, kernel, stride, padding, groups
116
+ super().__init__()
117
+ assert c1 == c2, f'TFDWConvTranspose2d() output={c2} must be equal to input={c1} channels'
118
+ assert k == 4 and p1 == 1, 'TFDWConvTranspose2d() only valid for k=4 and p1=1'
119
+ weight, bias = w.weight.permute(2, 3, 1, 0).numpy(), w.bias.numpy()
120
+ self.c1 = c1
121
+ self.conv = [
122
+ keras.layers.Conv2DTranspose(filters=1,
123
+ kernel_size=k,
124
+ strides=s,
125
+ padding='VALID',
126
+ output_padding=p2,
127
+ use_bias=True,
128
+ kernel_initializer=keras.initializers.Constant(weight[..., i:i + 1]),
129
+ bias_initializer=keras.initializers.Constant(bias[i])) for i in range(c1)]
130
+
131
+ def call(self, inputs):
132
+ return tf.concat([m(x) for m, x in zip(self.conv, tf.split(inputs, self.c1, 3))], 3)[:, 1:-1, 1:-1]
133
+
134
+
135
+ class TFFocus(keras.layers.Layer):
136
+ # Focus wh information into c-space
137
+ def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True, w=None):
138
+ # ch_in, ch_out, kernel, stride, padding, groups
139
+ super().__init__()
140
+ self.conv = TFConv(c1 * 4, c2, k, s, p, g, act, w.conv)
141
+
142
+ def call(self, inputs): # x(b,w,h,c) -> y(b,w/2,h/2,4c)
143
+ # inputs = inputs / 255 # normalize 0-255 to 0-1
144
+ inputs = [inputs[:, ::2, ::2, :], inputs[:, 1::2, ::2, :], inputs[:, ::2, 1::2, :], inputs[:, 1::2, 1::2, :]]
145
+ return self.conv(tf.concat(inputs, 3))
146
+
147
+
148
+ class TFBottleneck(keras.layers.Layer):
149
+ # Standard bottleneck
150
+ def __init__(self, c1, c2, shortcut=True, g=1, e=0.5, w=None): # ch_in, ch_out, shortcut, groups, expansion
151
+ super().__init__()
152
+ c_ = int(c2 * e) # hidden channels
153
+ self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
154
+ self.cv2 = TFConv(c_, c2, 3, 1, g=g, w=w.cv2)
155
+ self.add = shortcut and c1 == c2
156
+
157
+ def call(self, inputs):
158
+ return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
159
+
160
+
161
+ class TFCrossConv(keras.layers.Layer):
162
+ # Cross Convolution
163
+ def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False, w=None):
164
+ super().__init__()
165
+ c_ = int(c2 * e) # hidden channels
166
+ self.cv1 = TFConv(c1, c_, (1, k), (1, s), w=w.cv1)
167
+ self.cv2 = TFConv(c_, c2, (k, 1), (s, 1), g=g, w=w.cv2)
168
+ self.add = shortcut and c1 == c2
169
+
170
+ def call(self, inputs):
171
+ return inputs + self.cv2(self.cv1(inputs)) if self.add else self.cv2(self.cv1(inputs))
172
+
173
+
174
+ class TFConv2d(keras.layers.Layer):
175
+ # Substitution for PyTorch nn.Conv2D
176
+ def __init__(self, c1, c2, k, s=1, g=1, bias=True, w=None):
177
+ super().__init__()
178
+ assert g == 1, "TF v2.2 Conv2D does not support 'groups' argument"
179
+ self.conv = keras.layers.Conv2D(filters=c2,
180
+ kernel_size=k,
181
+ strides=s,
182
+ padding='VALID',
183
+ use_bias=bias,
184
+ kernel_initializer=keras.initializers.Constant(
185
+ w.weight.permute(2, 3, 1, 0).numpy()),
186
+ bias_initializer=keras.initializers.Constant(w.bias.numpy()) if bias else None)
187
+
188
+ def call(self, inputs):
189
+ return self.conv(inputs)
190
+
191
+
192
+ class TFBottleneckCSP(keras.layers.Layer):
193
+ # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
194
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
195
+ # ch_in, ch_out, number, shortcut, groups, expansion
196
+ super().__init__()
197
+ c_ = int(c2 * e) # hidden channels
198
+ self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
199
+ self.cv2 = TFConv2d(c1, c_, 1, 1, bias=False, w=w.cv2)
200
+ self.cv3 = TFConv2d(c_, c_, 1, 1, bias=False, w=w.cv3)
201
+ self.cv4 = TFConv(2 * c_, c2, 1, 1, w=w.cv4)
202
+ self.bn = TFBN(w.bn)
203
+ self.act = lambda x: keras.activations.swish(x)
204
+ self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])
205
+
206
+ def call(self, inputs):
207
+ y1 = self.cv3(self.m(self.cv1(inputs)))
208
+ y2 = self.cv2(inputs)
209
+ return self.cv4(self.act(self.bn(tf.concat((y1, y2), axis=3))))
210
+
211
+
212
+ class TFC3(keras.layers.Layer):
213
+ # CSP Bottleneck with 3 convolutions
214
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
215
+ # ch_in, ch_out, number, shortcut, groups, expansion
216
+ super().__init__()
217
+ c_ = int(c2 * e) # hidden channels
218
+ self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
219
+ self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
220
+ self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
221
+ self.m = keras.Sequential([TFBottleneck(c_, c_, shortcut, g, e=1.0, w=w.m[j]) for j in range(n)])
222
+
223
+ def call(self, inputs):
224
+ return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
225
+
226
+
227
+ class TFC3x(keras.layers.Layer):
228
+ # C3 module with cross-convolutions
229
+ def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, w=None):
230
+ # ch_in, ch_out, number, shortcut, groups, expansion
231
+ super().__init__()
232
+ c_ = int(c2 * e) # hidden channels
233
+ self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
234
+ self.cv2 = TFConv(c1, c_, 1, 1, w=w.cv2)
235
+ self.cv3 = TFConv(2 * c_, c2, 1, 1, w=w.cv3)
236
+ self.m = keras.Sequential([
237
+ TFCrossConv(c_, c_, k=3, s=1, g=g, e=1.0, shortcut=shortcut, w=w.m[j]) for j in range(n)])
238
+
239
+ def call(self, inputs):
240
+ return self.cv3(tf.concat((self.m(self.cv1(inputs)), self.cv2(inputs)), axis=3))
241
+
242
+
243
+ class TFSPP(keras.layers.Layer):
244
+ # Spatial pyramid pooling layer used in YOLOv3-SPP
245
+ def __init__(self, c1, c2, k=(5, 9, 13), w=None):
246
+ super().__init__()
247
+ c_ = c1 // 2 # hidden channels
248
+ self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
249
+ self.cv2 = TFConv(c_ * (len(k) + 1), c2, 1, 1, w=w.cv2)
250
+ self.m = [keras.layers.MaxPool2D(pool_size=x, strides=1, padding='SAME') for x in k]
251
+
252
+ def call(self, inputs):
253
+ x = self.cv1(inputs)
254
+ return self.cv2(tf.concat([x] + [m(x) for m in self.m], 3))
255
+
256
+
257
+ class TFSPPF(keras.layers.Layer):
258
+ # Spatial pyramid pooling-Fast layer
259
+ def __init__(self, c1, c2, k=5, w=None):
260
+ super().__init__()
261
+ c_ = c1 // 2 # hidden channels
262
+ self.cv1 = TFConv(c1, c_, 1, 1, w=w.cv1)
263
+ self.cv2 = TFConv(c_ * 4, c2, 1, 1, w=w.cv2)
264
+ self.m = keras.layers.MaxPool2D(pool_size=k, strides=1, padding='SAME')
265
+
266
+ def call(self, inputs):
267
+ x = self.cv1(inputs)
268
+ y1 = self.m(x)
269
+ y2 = self.m(y1)
270
+ return self.cv2(tf.concat([x, y1, y2, self.m(y2)], 3))
271
+
272
+
273
+ class TFDetect(keras.layers.Layer):
274
+ # TF YOLOv5 Detect layer
275
+ def __init__(self, nc=80, anchors=(), ch=(), imgsz=(640, 640), w=None): # detection layer
276
+ super().__init__()
277
+ self.stride = tf.convert_to_tensor(w.stride.numpy(), dtype=tf.float32)
278
+ self.nc = nc # number of classes
279
+ self.no = nc + 5 # number of outputs per anchor
280
+ self.nl = len(anchors) # number of detection layers
281
+ self.na = len(anchors[0]) // 2 # number of anchors
282
+ self.grid = [tf.zeros(1)] * self.nl # init grid
283
+ self.anchors = tf.convert_to_tensor(w.anchors.numpy(), dtype=tf.float32)
284
+ self.anchor_grid = tf.reshape(self.anchors * tf.reshape(self.stride, [self.nl, 1, 1]), [self.nl, 1, -1, 1, 2])
285
+ self.m = [TFConv2d(x, self.no * self.na, 1, w=w.m[i]) for i, x in enumerate(ch)]
286
+ self.training = False # set to False after building model
287
+ self.imgsz = imgsz
288
+ for i in range(self.nl):
289
+ ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
290
+ self.grid[i] = self._make_grid(nx, ny)
291
+
292
+ def call(self, inputs):
293
+ z = [] # inference output
294
+ x = []
295
+ for i in range(self.nl):
296
+ x.append(self.m[i](inputs[i]))
297
+ # x(bs,20,20,255) to x(bs,3,20,20,85)
298
+ ny, nx = self.imgsz[0] // self.stride[i], self.imgsz[1] // self.stride[i]
299
+ x[i] = tf.reshape(x[i], [-1, ny * nx, self.na, self.no])
300
+
301
+ if not self.training: # inference
302
+ y = tf.sigmoid(x[i])
303
+ grid = tf.transpose(self.grid[i], [0, 2, 1, 3]) - 0.5
304
+ anchor_grid = tf.transpose(self.anchor_grid[i], [0, 2, 1, 3]) * 4
305
+ xy = (y[..., 0:2] * 2 + grid) * self.stride[i] # xy
306
+ wh = y[..., 2:4] ** 2 * anchor_grid
307
+ # Normalize xywh to 0-1 to reduce calibration error
308
+ xy /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
309
+ wh /= tf.constant([[self.imgsz[1], self.imgsz[0]]], dtype=tf.float32)
310
+ y = tf.concat([xy, wh, y[..., 4:]], -1)
311
+ z.append(tf.reshape(y, [-1, self.na * ny * nx, self.no]))
312
+
313
+ return tf.transpose(x, [0, 2, 1, 3]) if self.training else (tf.concat(z, 1), x)
314
+
315
+ @staticmethod
316
+ def _make_grid(nx=20, ny=20):
317
+ # yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
318
+ # return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
319
+ xv, yv = tf.meshgrid(tf.range(nx), tf.range(ny))
320
+ return tf.cast(tf.reshape(tf.stack([xv, yv], 2), [1, 1, ny * nx, 2]), dtype=tf.float32)
321
+
322
+
323
+ class TFUpsample(keras.layers.Layer):
324
+ # TF version of torch.nn.Upsample()
325
+ def __init__(self, size, scale_factor, mode, w=None): # warning: all arguments needed including 'w'
326
+ super().__init__()
327
+ assert scale_factor == 2, "scale_factor must be 2"
328
+ self.upsample = lambda x: tf.image.resize(x, (x.shape[1] * 2, x.shape[2] * 2), method=mode)
329
+ # self.upsample = keras.layers.UpSampling2D(size=scale_factor, interpolation=mode)
330
+ # with default arguments: align_corners=False, half_pixel_centers=False
331
+ # self.upsample = lambda x: tf.raw_ops.ResizeNearestNeighbor(images=x,
332
+ # size=(x.shape[1] * 2, x.shape[2] * 2))
333
+
334
+ def call(self, inputs):
335
+ return self.upsample(inputs)
336
+
337
+
338
+ class TFConcat(keras.layers.Layer):
339
+ # TF version of torch.concat()
340
+ def __init__(self, dimension=1, w=None):
341
+ super().__init__()
342
+ assert dimension == 1, "convert only NCHW to NHWC concat"
343
+ self.d = 3
344
+
345
+ def call(self, inputs):
346
+ return tf.concat(inputs, self.d)
347
+
348
+
349
+ def parse_model(d, ch, model, imgsz): # model_dict, input_channels(3)
350
+ LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
351
+ anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
352
+ na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
353
+ no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
354
+
355
+ layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
356
+ for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
357
+ m_str = m
358
+ m = eval(m) if isinstance(m, str) else m # eval strings
359
+ for j, a in enumerate(args):
360
+ try:
361
+ args[j] = eval(a) if isinstance(a, str) else a # eval strings
362
+ except NameError:
363
+ pass
364
+
365
+ n = max(round(n * gd), 1) if n > 1 else n # depth gain
366
+ if m in [
367
+ nn.Conv2d, Conv, DWConv, DWConvTranspose2d, Bottleneck, SPP, SPPF, MixConv2d, Focus, CrossConv,
368
+ BottleneckCSP, C3, C3x]:
369
+ c1, c2 = ch[f], args[0]
370
+ c2 = make_divisible(c2 * gw, 8) if c2 != no else c2
371
+
372
+ args = [c1, c2, *args[1:]]
373
+ if m in [BottleneckCSP, C3, C3x]:
374
+ args.insert(2, n)
375
+ n = 1
376
+ elif m is nn.BatchNorm2d:
377
+ args = [ch[f]]
378
+ elif m is Concat:
379
+ c2 = sum(ch[-1 if x == -1 else x + 1] for x in f)
380
+ elif m is Detect:
381
+ args.append([ch[x + 1] for x in f])
382
+ if isinstance(args[1], int): # number of anchors
383
+ args[1] = [list(range(args[1] * 2))] * len(f)
384
+ args.append(imgsz)
385
+ else:
386
+ c2 = ch[f]
387
+
388
+ tf_m = eval('TF' + m_str.replace('nn.', ''))
389
+ m_ = keras.Sequential([tf_m(*args, w=model.model[i][j]) for j in range(n)]) if n > 1 \
390
+ else tf_m(*args, w=model.model[i]) # module
391
+
392
+ torch_m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
393
+ t = str(m)[8:-2].replace('__main__.', '') # module type
394
+ np = sum(x.numel() for x in torch_m_.parameters()) # number params
395
+ m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
396
+ LOGGER.info(f'{i:>3}{str(f):>18}{str(n):>3}{np:>10} {t:<40}{str(args):<30}') # print
397
+ save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
398
+ layers.append(m_)
399
+ ch.append(c2)
400
+ return keras.Sequential(layers), sorted(save)
401
+
402
+
403
+ class TFModel:
404
+ # TF YOLOv5 model
405
+ def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, model=None, imgsz=(640, 640)): # model, channels, classes
406
+ super().__init__()
407
+ if isinstance(cfg, dict):
408
+ self.yaml = cfg # model dict
409
+ else: # is *.yaml
410
+ import yaml # for torch hub
411
+ self.yaml_file = Path(cfg).name
412
+ with open(cfg) as f:
413
+ self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict
414
+
415
+ # Define model
416
+ if nc and nc != self.yaml['nc']:
417
+ LOGGER.info(f"Overriding {cfg} nc={self.yaml['nc']} with nc={nc}")
418
+ self.yaml['nc'] = nc # override yaml value
419
+ self.model, self.savelist = parse_model(deepcopy(self.yaml), ch=[ch], model=model, imgsz=imgsz)
420
+
421
+ def predict(self,
422
+ inputs,
423
+ tf_nms=False,
424
+ agnostic_nms=False,
425
+ topk_per_class=100,
426
+ topk_all=100,
427
+ iou_thres=0.45,
428
+ conf_thres=0.25):
429
+ y = [] # outputs
430
+ x = inputs
431
+ for m in self.model.layers:
432
+ if m.f != -1: # if not from previous layer
433
+ x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
434
+
435
+ x = m(x) # run
436
+ y.append(x if m.i in self.savelist else None) # save output
437
+
438
+ # Add TensorFlow NMS
439
+ if tf_nms:
440
+ boxes = self._xywh2xyxy(x[0][..., :4])
441
+ probs = x[0][:, :, 4:5]
442
+ classes = x[0][:, :, 5:]
443
+ scores = probs * classes
444
+ if agnostic_nms:
445
+ nms = AgnosticNMS()((boxes, classes, scores), topk_all, iou_thres, conf_thres)
446
+ else:
447
+ boxes = tf.expand_dims(boxes, 2)
448
+ nms = tf.image.combined_non_max_suppression(boxes,
449
+ scores,
450
+ topk_per_class,
451
+ topk_all,
452
+ iou_thres,
453
+ conf_thres,
454
+ clip_boxes=False)
455
+ return nms, x[1]
456
+ return x[0] # output only first tensor [1,6300,85] = [xywh, conf, class0, class1, ...]
457
+ # x = x[0][0] # [x(1,6300,85), ...] to x(6300,85)
458
+ # xywh = x[..., :4] # x(6300,4) boxes
459
+ # conf = x[..., 4:5] # x(6300,1) confidences
460
+ # cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1)) # x(6300,1) classes
461
+ # return tf.concat([conf, cls, xywh], 1)
462
+
463
+ @staticmethod
464
+ def _xywh2xyxy(xywh):
465
+ # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
466
+ x, y, w, h = tf.split(xywh, num_or_size_splits=4, axis=-1)
467
+ return tf.concat([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=-1)
468
+
469
+
470
+ class AgnosticNMS(keras.layers.Layer):
471
+ # TF Agnostic NMS
472
+ def call(self, input, topk_all, iou_thres, conf_thres):
473
+ # wrap map_fn to avoid TypeSpec related error https://stackoverflow.com/a/65809989/3036450
474
+ return tf.map_fn(lambda x: self._nms(x, topk_all, iou_thres, conf_thres),
475
+ input,
476
+ fn_output_signature=(tf.float32, tf.float32, tf.float32, tf.int32),
477
+ name='agnostic_nms')
478
+
479
+ @staticmethod
480
+ def _nms(x, topk_all=100, iou_thres=0.45, conf_thres=0.25): # agnostic NMS
481
+ boxes, classes, scores = x
482
+ class_inds = tf.cast(tf.argmax(classes, axis=-1), tf.float32)
483
+ scores_inp = tf.reduce_max(scores, -1)
484
+ selected_inds = tf.image.non_max_suppression(boxes,
485
+ scores_inp,
486
+ max_output_size=topk_all,
487
+ iou_threshold=iou_thres,
488
+ score_threshold=conf_thres)
489
+ selected_boxes = tf.gather(boxes, selected_inds)
490
+ padded_boxes = tf.pad(selected_boxes,
491
+ paddings=[[0, topk_all - tf.shape(selected_boxes)[0]], [0, 0]],
492
+ mode="CONSTANT",
493
+ constant_values=0.0)
494
+ selected_scores = tf.gather(scores_inp, selected_inds)
495
+ padded_scores = tf.pad(selected_scores,
496
+ paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
497
+ mode="CONSTANT",
498
+ constant_values=-1.0)
499
+ selected_classes = tf.gather(class_inds, selected_inds)
500
+ padded_classes = tf.pad(selected_classes,
501
+ paddings=[[0, topk_all - tf.shape(selected_boxes)[0]]],
502
+ mode="CONSTANT",
503
+ constant_values=-1.0)
504
+ valid_detections = tf.shape(selected_inds)[0]
505
+ return padded_boxes, padded_scores, padded_classes, valid_detections
506
+
507
+
508
+ def activations(act=nn.SiLU):
509
+ # Returns TF activation from input PyTorch activation
510
+ if isinstance(act, nn.LeakyReLU):
511
+ return lambda x: keras.activations.relu(x, alpha=0.1)
512
+ elif isinstance(act, nn.Hardswish):
513
+ return lambda x: x * tf.nn.relu6(x + 3) * 0.166666667
514
+ elif isinstance(act, (nn.SiLU, SiLU)):
515
+ return lambda x: keras.activations.swish(x)
516
+ else:
517
+ raise Exception(f'no matching TensorFlow activation found for PyTorch activation {act}')
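
The Hardswish branch above uses the identity hardswish(x) = x * relu6(x + 3) / 6, with 1/6 written out as 0.166666667 since Keras has no built-in hardswish activation. A quick numerical check of that equivalence (illustrative only; assumes both frameworks are installed):

```python
import numpy as np
import tensorflow as tf
import torch

x = tf.constant(np.linspace(-6, 6, 25, dtype=np.float32))
tf_out = (x * tf.nn.relu6(x + 3) * 0.166666667).numpy()
pt_out = torch.nn.Hardswish()(torch.from_numpy(x.numpy())).numpy()
print(np.allclose(tf_out, pt_out, atol=1e-5))   # True
```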
518
+
519
+
520
+ def representative_dataset_gen(dataset, ncalib=100):
521
+ # Representative dataset generator for use with converter.representative_dataset, returns a generator of np arrays
522
+ for n, (path, img, im0s, vid_cap, string) in enumerate(dataset):
523
+ im = np.transpose(img, [1, 2, 0])
524
+ im = np.expand_dims(im, axis=0).astype(np.float32)
525
+ im /= 255
526
+ yield [im]
527
+ if n >= ncalib:
528
+ break
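
This generator exists to feed calibration images to TFLite post-training int8 quantization. A hedged sketch of how such a generator is typically wired into TFLiteConverter (export.py in this upload does the real wiring; `keras_model` and `dataset` below are assumed to have been built already):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)   # keras_model assumed built elsewhere
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8                           # fully-int8 input/output
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
open('yolov5s-int8.tflite', 'wb').write(tflite_model)
```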
529
+
530
+
531
+ def run(
532
+ weights=ROOT / 'yolov5s.pt', # weights path
533
+ imgsz=(640, 640), # inference size h,w
534
+ batch_size=1, # batch size
535
+ dynamic=False, # dynamic batch size
536
+ ):
537
+ # PyTorch model
538
+ im = torch.zeros((batch_size, 3, *imgsz)) # BCHW image
539
+ model = attempt_load(weights, device=torch.device('cpu'), inplace=True, fuse=False)
540
+ _ = model(im) # inference
541
+ model.info()
542
+
543
+ # TensorFlow model
544
+ im = tf.zeros((batch_size, *imgsz, 3)) # BHWC image
545
+ tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
546
+ _ = tf_model.predict(im) # inference
547
+
548
+ # Keras model
549
+ im = keras.Input(shape=(*imgsz, 3), batch_size=None if dynamic else batch_size)
550
+ keras_model = keras.Model(inputs=im, outputs=tf_model.predict(im))
551
+ keras_model.summary()
552
+
553
+ LOGGER.info('PyTorch, TensorFlow and Keras models successfully verified.\nUse export.py for TF model export.')
554
+
555
+
556
+ def parse_opt():
557
+ parser = argparse.ArgumentParser()
558
+ parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path')
559
+ parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w')
560
+ parser.add_argument('--batch-size', type=int, default=1, help='batch size')
561
+ parser.add_argument('--dynamic', action='store_true', help='dynamic batch size')
562
+ opt = parser.parse_args()
563
+ opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
564
+ print_args(vars(opt))
565
+ return opt
566
+
567
+
568
+ def main(opt):
569
+ run(**vars(opt))
570
+
571
+
572
+ if __name__ == "__main__":
573
+ opt = parse_opt()
574
+ main(opt)
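
The run() routine above rebuilds the PyTorch checkpoint layer-by-layer as Keras layers and verifies a forward pass; note the layout change from BCHW (PyTorch) to BHWC (TensorFlow), and that TFDetect normalizes xywh to 0-1. A short sketch of driving TFModel directly, assuming yolov5s.pt is available locally (it mirrors what run() does):

```python
import tensorflow as tf
import torch
from models.experimental import attempt_load
from models.tf import TFModel

pt = attempt_load('yolov5s.pt', device=torch.device('cpu'), inplace=True, fuse=False)
_ = pt(torch.zeros(1, 3, 640, 640))                        # BCHW dummy pass
tf_model = TFModel(cfg=pt.yaml, model=pt, nc=pt.nc, imgsz=(640, 640))
out = tf_model.predict(tf.zeros((1, 640, 640, 3)))          # BHWC input
print(out.shape)                                            # (1, 25200, 85); xywh normalized to 0-1
```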
models/yolo.py ADDED
@@ -0,0 +1,338 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+ """
3
+ YOLO-specific modules
4
+
5
+ Usage:
6
+ $ python path/to/models/yolo.py --cfg yolov5s.yaml
7
+ """
8
+
9
+ import argparse
10
+ import os
11
+ import platform
12
+ import sys
13
+ from copy import deepcopy
14
+ from pathlib import Path
15
+
16
+ FILE = Path(__file__).resolve()
17
+ ROOT = FILE.parents[1] # YOLOv5 root directory
18
+ if str(ROOT) not in sys.path:
19
+ sys.path.append(str(ROOT)) # add ROOT to PATH
20
+ if platform.system() != 'Windows':
21
+ ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
22
+
23
+ from models.common import *
24
+ from models.experimental import *
25
+ from utils.autoanchor import check_anchor_order
26
+ from utils.general import LOGGER, check_version, check_yaml, make_divisible, print_args
27
+ from utils.plots import feature_visualization
28
+ from utils.torch_utils import (fuse_conv_and_bn, initialize_weights, model_info, profile, scale_img, select_device,
29
+ time_sync)
30
+
31
+ try:
32
+ import thop # for FLOPs computation
33
+ except ImportError:
34
+ thop = None
35
+
36
+
37
+ class Detect(nn.Module):
38
+ stride = None # strides computed during build
39
+ onnx_dynamic = False # ONNX export parameter
40
+ export = False # export mode
41
+
42
+ def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
43
+ super().__init__()
44
+ self.nc = nc # number of classes
45
+ self.no = nc + 5 # number of outputs per anchor
46
+ self.nl = len(anchors) # number of detection layers
47
+ self.na = len(anchors[0]) // 2 # number of anchors
48
+ self.grid = [torch.zeros(1)] * self.nl # init grid
49
+ self.anchor_grid = [torch.zeros(1)] * self.nl # init anchor grid
50
+ self.register_buffer('anchors', torch.tensor(anchors).float().view(self.nl, -1, 2)) # shape(nl,na,2)
51
+ self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
52
+ self.inplace = inplace # use in-place ops (e.g. slice assignment)
53
+
54
+ def forward(self, x):
55
+ z = [] # inference output
56
+ for i in range(self.nl):
57
+ x[i] = self.m[i](x[i]) # conv
58
+ bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
59
+ x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
60
+
61
+ if not self.training: # inference
62
+ if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
63
+ self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
64
+
65
+ y = x[i].sigmoid()
66
+ if self.inplace:
67
+ y[..., 0:2] = (y[..., 0:2] * 2 + self.grid[i]) * self.stride[i] # xy
68
+ y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
69
+ else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
70
+ xy, wh, conf = y.split((2, 2, self.nc + 1), 4) # y.tensor_split((2, 4, 5), 4) # torch 1.8.0
71
+ xy = (xy * 2 + self.grid[i]) * self.stride[i] # xy
72
+ wh = (wh * 2) ** 2 * self.anchor_grid[i] # wh
73
+ y = torch.cat((xy, wh, conf), 4)
74
+ z.append(y.view(bs, -1, self.no))
75
+
76
+ return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)
77
+
78
+ def _make_grid(self, nx=20, ny=20, i=0):
79
+ d = self.anchors[i].device
80
+ t = self.anchors[i].dtype
81
+ shape = 1, self.na, ny, nx, 2 # grid shape
82
+ y, x = torch.arange(ny, device=d, dtype=t), torch.arange(nx, device=d, dtype=t)
83
+ if check_version(torch.__version__, '1.10.0'): # torch>=1.10.0 meshgrid workaround for torch>=0.7 compatibility
84
+ yv, xv = torch.meshgrid(y, x, indexing='ij')
85
+ else:
86
+ yv, xv = torch.meshgrid(y, x)
87
+ grid = torch.stack((xv, yv), 2).expand(shape) - 0.5 # add grid offset, i.e. y = 2.0 * x - 0.5
88
+ anchor_grid = (self.anchors[i] * self.stride[i]).view((1, self.na, 1, 1, 2)).expand(shape)
89
+ return grid, anchor_grid
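
The -0.5 baked into the grid and the *2 on the sigmoid outputs implement YOLOv5's box decoding: xy = (2·σ(t_xy) − 0.5 + cell) · stride and wh = (2·σ(t_wh))² · anchor, which bounds each predicted width/height to 0-4x its anchor. A tiny worked example of the centre decoding for one P3 cell:

```python
import torch

t_xy = torch.zeros(2)                  # raw network output for one cell
cell = torch.tensor([10.0, 7.0])        # grid cell indices (x, y)
stride = 8.0                            # P3 stride
# grid above stores (cell - 0.5), so y*2 + grid == 2*sigmoid(t) - 0.5 + cell
xy = (t_xy.sigmoid() * 2 + (cell - 0.5)) * stride
print(xy)                               # tensor([84., 60.])
```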
90
+
91
+
92
+ class Model(nn.Module):
93
+ # YOLOv5 model
94
+ def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
95
+ super().__init__()
96
+ if isinstance(cfg, dict):
97
+ self.yaml = cfg # model dict
98
+ else: # is *.yaml
99
+ import yaml # for torch hub
100
+ self.yaml_file = Path(cfg).name
101
+ with open(cfg, encoding='ascii', errors='ignore') as f:
102
+ self.yaml = yaml.safe_load(f) # model dict
103
+
104
+ # Define model
105
+ ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
106
+ if nc and nc != self.yaml['nc']:
107
+ LOGGER.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
108
+ self.yaml['nc'] = nc # override yaml value
109
+ if anchors:
110
+ LOGGER.info(f'Overriding model.yaml anchors with anchors={anchors}')
111
+ self.yaml['anchors'] = round(anchors) # override yaml value
112
+ self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
113
+ self.names = [str(i) for i in range(self.yaml['nc'])] # default names
114
+ self.inplace = self.yaml.get('inplace', True)
115
+
116
+ # Build strides, anchors
117
+ m = self.model[-1] # Detect()
118
+ if isinstance(m, Detect):
119
+ s = 256 # 2x min stride
120
+ m.inplace = self.inplace
121
+ m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
122
+ check_anchor_order(m) # must be in pixel-space (not grid-space)
123
+ m.anchors /= m.stride.view(-1, 1, 1)
124
+ self.stride = m.stride
125
+ self._initialize_biases() # only run once
126
+
127
+ # Init weights, biases
128
+ initialize_weights(self)
129
+ self.info()
130
+ LOGGER.info('')
131
+
132
+ def forward(self, x, augment=False, profile=False, visualize=False):
133
+ if augment:
134
+ return self._forward_augment(x) # augmented inference, None
135
+ return self._forward_once(x, profile, visualize) # single-scale inference, train
136
+
137
+ def _forward_augment(self, x):
138
+ img_size = x.shape[-2:] # height, width
139
+ s = [1, 0.83, 0.67] # scales
140
+ f = [None, 3, None] # flips (2-ud, 3-lr)
141
+ y = [] # outputs
142
+ for si, fi in zip(s, f):
143
+ xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
144
+ yi = self._forward_once(xi)[0] # forward
145
+ # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
146
+ yi = self._descale_pred(yi, fi, si, img_size)
147
+ y.append(yi)
148
+ y = self._clip_augmented(y) # clip augmented tails
149
+ return torch.cat(y, 1), None # augmented inference, train
150
+
151
+ def _forward_once(self, x, profile=False, visualize=False):
152
+ y, dt = [], [] # outputs
153
+ for m in self.model:
154
+ if m.f != -1: # if not from previous layer
155
+ x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
156
+ if profile:
157
+ self._profile_one_layer(m, x, dt)
158
+ x = m(x) # run
159
+ y.append(x if m.i in self.save else None) # save output
160
+ if visualize:
161
+ feature_visualization(x, m.type, m.i, save_dir=visualize)
162
+ return x
163
+
164
+ def _descale_pred(self, p, flips, scale, img_size):
165
+ # de-scale predictions following augmented inference (inverse operation)
166
+ if self.inplace:
167
+ p[..., :4] /= scale # de-scale
168
+ if flips == 2:
169
+ p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
170
+ elif flips == 3:
171
+ p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
172
+ else:
173
+ x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
174
+ if flips == 2:
175
+ y = img_size[0] - y # de-flip ud
176
+ elif flips == 3:
177
+ x = img_size[1] - x # de-flip lr
178
+ p = torch.cat((x, y, wh, p[..., 4:]), -1)
179
+ return p
180
+
181
+ def _clip_augmented(self, y):
182
+ # Clip YOLOv5 augmented inference tails
183
+ nl = self.model[-1].nl # number of detection layers (P3-P5)
184
+ g = sum(4 ** x for x in range(nl)) # grid points
185
+ e = 1 # exclude layer count
186
+ i = (y[0].shape[1] // g) * sum(4 ** x for x in range(e)) # indices
187
+ y[0] = y[0][:, :-i] # large
188
+ i = (y[-1].shape[1] // g) * sum(4 ** (nl - 1 - x) for x in range(e)) # indices
189
+ y[-1] = y[-1][:, i:] # small
190
+ return y
191
+
192
+ def _profile_one_layer(self, m, x, dt):
193
+ c = isinstance(m, Detect) # is final layer, copy input as inplace fix
194
+ o = thop.profile(m, inputs=(x.copy() if c else x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPs
195
+ t = time_sync()
196
+ for _ in range(10):
197
+ m(x.copy() if c else x)
198
+ dt.append((time_sync() - t) * 100)
199
+ if m == self.model[0]:
200
+ LOGGER.info(f"{'time (ms)':>10s} {'GFLOPs':>10s} {'params':>10s} module")
201
+ LOGGER.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
202
+ if c:
203
+ LOGGER.info(f"{sum(dt):10.2f} {'-':>10s} {'-':>10s} Total")
204
+
205
+ def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
206
+ # https://arxiv.org/abs/1708.02002 section 3.3
207
+ # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
208
+ m = self.model[-1] # Detect() module
209
+ for mi, s in zip(m.m, m.stride): # from
210
+ b = mi.bias.view(m.na, -1).detach() # conv.bias(255) to (3,85)
211
+ b[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
212
+ b[:, 5:] += math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum()) # cls
213
+ mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
214
+
215
+ def _print_biases(self):
216
+ m = self.model[-1] # Detect() module
217
+ for mi in m.m: # from
218
+ b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
219
+ LOGGER.info(
220
+ ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
221
+
222
+ # def _print_weights(self):
223
+ # for m in self.model.modules():
224
+ # if type(m) is Bottleneck:
225
+ # LOGGER.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
226
+
227
+ def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
228
+ LOGGER.info('Fusing layers... ')
229
+ for m in self.model.modules():
230
+ if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
231
+ m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
232
+ delattr(m, 'bn') # remove batchnorm
233
+ m.forward = m.forward_fuse # update forward
234
+ self.info()
235
+ return self
236
+
237
+ def info(self, verbose=False, img_size=640): # print model information
238
+ model_info(self, verbose, img_size)
239
+
240
+ def _apply(self, fn):
241
+ # Apply to(), cpu(), cuda(), half() to model tensors that are not parameters or registered buffers
242
+ self = super()._apply(fn)
243
+ m = self.model[-1] # Detect()
244
+ if isinstance(m, Detect):
245
+ m.stride = fn(m.stride)
246
+ m.grid = list(map(fn, m.grid))
247
+ if isinstance(m.anchor_grid, list):
248
+ m.anchor_grid = list(map(fn, m.anchor_grid))
249
+ return self
250
+
251
+
252
+ def parse_model(d, ch): # model_dict, input_channels(3)
253
+ LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10} {'module':<40}{'arguments':<30}")
254
+ anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
255
+ na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
256
+ no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
257
+
258
+ layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
259
+ for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
260
+ m = eval(m) if isinstance(m, str) else m # eval strings
261
+ for j, a in enumerate(args):
262
+ try:
263
+ args[j] = eval(a) if isinstance(a, str) else a # eval strings
264
+ except NameError:
265
+ pass
266
+
267
+ n = n_ = max(round(n * gd), 1) if n > 1 else n # depth gain
268
+ if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
269
+ BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x):
270
+ c1, c2 = ch[f], args[0]
271
+ if c2 != no: # if not output
272
+ c2 = make_divisible(c2 * gw, 8)
273
+
274
+ args = [c1, c2, *args[1:]]
275
+ if m in [BottleneckCSP, C3, C3TR, C3Ghost, C3x]:
276
+ args.insert(2, n) # number of repeats
277
+ n = 1
278
+ elif m is nn.BatchNorm2d:
279
+ args = [ch[f]]
280
+ elif m is Concat:
281
+ c2 = sum(ch[x] for x in f)
282
+ elif m is Detect:
283
+ args.append([ch[x] for x in f])
284
+ if isinstance(args[1], int): # number of anchors
285
+ args[1] = [list(range(args[1] * 2))] * len(f)
286
+ elif m is Contract:
287
+ c2 = ch[f] * args[0] ** 2
288
+ elif m is Expand:
289
+ c2 = ch[f] // args[0] ** 2
290
+ else:
291
+ c2 = ch[f]
292
+
293
+ m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
294
+ t = str(m)[8:-2].replace('__main__.', '') # module type
295
+ np = sum(x.numel() for x in m_.parameters()) # number params
296
+ m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
297
+ LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f} {t:<40}{str(args):<30}') # print
298
+ save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
299
+ layers.append(m_)
300
+ if i == 0:
301
+ ch = []
302
+ ch.append(c2)
303
+ return nn.Sequential(*layers), sorted(save)
304
+
305
+
306
+ if __name__ == '__main__':
307
+ parser = argparse.ArgumentParser()
308
+ parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
309
+ parser.add_argument('--batch-size', type=int, default=1, help='total batch size for all GPUs')
310
+ parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
311
+ parser.add_argument('--profile', action='store_true', help='profile model speed')
312
+ parser.add_argument('--line-profile', action='store_true', help='profile model speed layer by layer')
313
+ parser.add_argument('--test', action='store_true', help='test all yolo*.yaml')
314
+ opt = parser.parse_args()
315
+ opt.cfg = check_yaml(opt.cfg) # check YAML
316
+ print_args(vars(opt))
317
+ device = select_device(opt.device)
318
+
319
+ # Create model
320
+ im = torch.rand(opt.batch_size, 3, 640, 640).to(device)
321
+ model = Model(opt.cfg).to(device)
322
+
323
+ # Options
324
+ if opt.line_profile: # profile layer by layer
325
+ _ = model(im, profile=True)
326
+
327
+ elif opt.profile: # profile forward-backward
328
+ results = profile(input=im, ops=[model], n=3)
329
+
330
+ elif opt.test: # test all models
331
+ for cfg in Path(ROOT / 'models').rglob('yolo*.yaml'):
332
+ try:
333
+ _ = Model(cfg)
334
+ except Exception as e:
335
+ print(f'Error in {cfg}: {e}')
336
+
337
+ else: # report fused model summary
338
+ model.fuse()
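
parse_model turns the YAML backbone/head lists into an nn.Sequential while tracking which intermediate outputs must be kept for later Concat/Detect layers (the save list), and Model.__init__ then derives the strides from a dummy forward pass. A minimal sketch of building one of the bundled configs and inspecting the result (repo root assumed on sys.path):

```python
from models.yolo import Model

model = Model('models/yolov5m.yaml')     # nc=80, depth 0.67, width 0.75
print(model.stride)                       # tensor([ 8., 16., 32.])
print(len(model.model), 'modules')        # 25: 10 backbone + 15 head (Detect included)
print(sorted(model.save))                 # layer indices whose outputs feed later Concat/Detect
```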
models/yolov5l.yaml ADDED
@@ -0,0 +1,48 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 1.0 # model depth multiple
6
+ width_multiple: 1.0 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3, [128]],
18
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3, [256]],
20
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3, [512]],
22
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3, [1024]],
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 head
28
+ head:
29
+ [[-1, 1, Conv, [512, 1, 1]],
30
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
31
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
32
+ [-1, 3, C3, [512, False]], # 13
33
+
34
+ [-1, 1, Conv, [256, 1, 1]],
35
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
36
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
37
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
38
+
39
+ [-1, 1, Conv, [256, 3, 2]],
40
+ [[-1, 14], 1, Concat, [1]], # cat head P4
41
+ [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
42
+
43
+ [-1, 1, Conv, [512, 3, 2]],
44
+ [[-1, 10], 1, Concat, [1]], # cat head P5
45
+ [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
46
+
47
+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
48
+ ]
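
yolov5l is the reference scale: with depth_multiple and width_multiple both 1.0, the repeat and channel numbers in this file are used exactly as written. A hedged pretrained-inference sketch via torch.hub (weights are downloaded on first use) on one of the bundled sample images:

```python
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5l')
results = model('data/images/bus.jpg')
results.print()       # e.g. person + bus detections
results.save()        # writes an annotated copy under runs/detect/exp*
```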
models/yolov5m.yaml ADDED
@@ -0,0 +1,48 @@
1
+ # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
+
3
+ # Parameters
4
+ nc: 80 # number of classes
5
+ depth_multiple: 0.67 # model depth multiple
6
+ width_multiple: 0.75 # layer channel multiple
7
+ anchors:
8
+ - [10,13, 16,30, 33,23] # P3/8
9
+ - [30,61, 62,45, 59,119] # P4/16
10
+ - [116,90, 156,198, 373,326] # P5/32
11
+
12
+ # YOLOv5 v6.0 backbone
13
+ backbone:
14
+ # [from, number, module, args]
15
+ [[-1, 1, Conv, [64, 6, 2, 2]], # 0-P1/2
16
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
17
+ [-1, 3, C3, [128]],
18
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
19
+ [-1, 6, C3, [256]],
20
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
21
+ [-1, 9, C3, [512]],
22
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
23
+ [-1, 3, C3, [1024]],
24
+ [-1, 1, SPPF, [1024, 5]], # 9
25
+ ]
26
+
27
+ # YOLOv5 v6.0 head
28
+ head:
29
+ [[-1, 1, Conv, [512, 1, 1]],
30
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
31
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
32
+ [-1, 3, C3, [512, False]], # 13
33
+
34
+ [-1, 1, Conv, [256, 1, 1]],
35
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
36
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
37
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
38
+
39
+ [-1, 1, Conv, [256, 3, 2]],
40
+ [[-1, 14], 1, Concat, [1]], # cat head P4
41
+ [-1, 3, C3, [512, False]], # 20 (P4/16-medium)
42
+
43
+ [-1, 1, Conv, [512, 3, 2]],
44
+ [[-1, 10], 1, Concat, [1]], # cat head P5
45
+ [-1, 3, C3, [1024, False]], # 23 (P5/32-large)
46
+
47
+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
48
+ ]
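
yolov5m applies depth_multiple 0.67 and width_multiple 0.75 to the same layer list, so its effective repeats and channels land between the s and l scales. The same parse_model arithmetic as shown for the x6 config, worked for the m multiples:

```python
import math

make_divisible = lambda x, d=8: math.ceil(x / d) * d

gd, gw = 0.67, 0.75                     # yolov5m multiples
print(max(round(9 * gd), 1))            # 9-repeat C3 stage -> 6 repeats
print(max(round(6 * gd), 1))            # 6-repeat C3 stage -> 4 repeats
print(make_divisible(1024 * gw))        # 1024 channels -> 768
print(make_divisible(256 * gw))         # 256 channels -> 192
```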