Update README.md
---
base_model:
- Ultralytics/YOLO11
---

# Context and Need:
This object detection model was created to identify glass eels in marine environments. It aims to give scientists a way to monitor juvenile eels, a life stage that is still not fully understood.

# Dataset:
The dataset used is available in Files and versions under the name ONCCameraFootage.zip. This footage was used to train the model to detect glass eels in low-light environments. All of the data comes from MP4 files from the ONC Cambridge Bay underwater network; frames were annotated by hand and then had the following augmentations applied:

* Horizontal and vertical flips
* ±10 degree shears, horizontal and vertical
* Grayscale applied to 15% of the images
* Noise on up to 1% of pixels
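For illustration, the flip, grayscale, and noise augments above can be sketched in plain NumPy. This is a hypothetical re-implementation for readers unfamiliar with these transforms, not the tool actually used to build the dataset; the shears are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Apply a random subset of the augmentations listed above to an
    HxWx3 uint8 image (illustrative sketch only)."""
    out = img.copy()
    if rng.random() < 0.5:        # horizontal flip
        out = out[:, ::-1]
    if rng.random() < 0.5:        # vertical flip
        out = out[::-1]
    if rng.random() < 0.15:       # grayscale on ~15% of images
        gray = out.mean(axis=2, keepdims=True).astype(np.uint8)
        out = np.repeat(gray, 3, axis=2)
    # random noise on up to ~1% of pixels
    mask = rng.random(out.shape[:2]) < 0.01
    out[mask] = rng.integers(0, 256, size=(int(mask.sum()), 3), dtype=np.uint8)
    return out

img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
aug = augment(img)
```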

*ONC Acknowledgement*

In this dataset there are 957 annotated instances of glass eels across conditions that vary throughout the year.

# Choice of Model:

I chose YOLO11 because, at the time of development, it was the most up-to-date YOLO model and offered the best combination of accuracy and speed in the family. I chose object detection because I wanted a model that could count glass eels for scientific purposes, as they tend to be hard to spot.

## Implementation:

To use the model, first install the Ultralytics package:

```bash
pip install ultralytics
```

Then load the weights and run inference:

```python
from ultralytics import YOLO

model = YOLO("Glass_Eel.pt")  # Load the model from the appropriate path

results = model.predict(
    source=r"sourcefile",        # The file that you want the model to annotate
    conf=0.25,                   # Minimum confidence for a detection to be kept
    iou=0.7,                     # IoU threshold used for non-maximum suppression
    save=True,                   # Save the annotated output
    project=r"/save-directory",  # Custom save directory
    name="predictions"           # Subfolder name for this run
)
```
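Since the model's purpose is counting eels, detections can be tallied from the returned results. Here is a minimal helper, assuming Ultralytics' `Results` API in which each result exposes a `.boxes` collection with one entry per detection:

```python
def count_detections(results) -> int:
    """Sum the number of detected eels across a list of per-image
    (or per-frame) results, where each result exposes a `.boxes`
    collection with one entry per detection."""
    return sum(len(r.boxes) for r in results)
```

Note that for video input `results` contains one entry per frame, so this gives a total across frames rather than a count of unique individuals.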
Clearly describes the performance metrics used to assess the model, includes at least one performance-related graph (e.g., training/validation loss, accuracy, or clustering context figure), and provides a detailed explanation of its significance. For unsupervised learning (e.g., KMeans), a context figure showing cluster distribution and meaning is required.
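As background: object detectors like this one are typically assessed with precision, recall, and mAP, all of which rest on intersection-over-union (IoU), the same quantity that the `iou` threshold in the prediction call above controls. A minimal sketch of the IoU computation for axis-aligned boxes in `(x1, y1, x2, y2)` format:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    # Corners of the intersection rectangle
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```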

Clearly describes an example study that could be conducted using the model, including a specific hypothesis and a well-justified explanation of why the model is an effective tool for the study. For unsupervised projects, includes clear instructions for submitting a markdown document with a context figure explaining cluster meanings.