Update README.md
README.md
@@ -16,7 +16,7 @@ If you want to run MegaDetector directly, check out the [MegaDetector User Guide
 
 # Variations
 
-Two variations of the model are included: MDv5a and MDv5b.
+Two variations of the model are included: [MDv5a](https://huggingface.co/agentmorris/megadetector/resolve/main/md_v5a.0.0.pt) and [MDv5b](https://huggingface.co/agentmorris/megadetector/resolve/main/md_v5b.0.0.pt).
 
 MegaDetector v5b was trained only on camera trap images (several million images).
 
@@ -28,3 +28,9 @@ Both variations use the same architecture (YOLOv5x6).
 
 Both variations were trained with a [GPL-licensed version of the YOLOv5 framework](https://github.com/ecologize/yolov5). No restrictions are placed on the use of the model by the MegaDetector developers; the applicable license may be determined by the code you use to run the model.
 
+# Sample output
+
+Here's a “teaser” image of what MegaDetector output looks like:
+
+
+
+Image credit University of Washington.