sophia-m committed on
Commit 0ddd7e8 · verified · 1 Parent(s): 8a373e7

Update README.md

Files changed (1):
  1. README.md +7 -13
README.md CHANGED
@@ -48,19 +48,8 @@ Videos 2 & 3:
  ---
  # Model Selection
  ### YOLOv11 Object Detection Model
- I chose to use YOLOv11 to create a custom object detection model. The model was trained to detect the presence of Southern Sea Otters in water and on land. I chose an object detection model because it would allow me to add additional features to my model through supplementary code snippets. The features would produce additional visualizations to better interpret movement patterns and behaviors from the dataset. I manipulated the code [found in our textbook](https://oceancv.org/) to better identify and correctly label the otters in lower resolution cam footage, as well as in both land and water.
-
- ### Additional Features: Ultralytics Objects Counting in Regions & Heatmap
- Additionally, I utilized code from Ultralytics to produce an object counting in region video. I defined the 3 regions of interest (land, water, seclusion) through pixel coordinates over the input video, meaning the model was then able to differentiate between each zone and produce unique counts for each. I was able to manipulate the base code to track all 3 regions as once/
-
- I also utilized Ultralytics’ heatmap code to display a heatmap on the input video, showing which regions of the enclosure were most frequently occupied. I specifically focused the heatmap on the land zone, where the otters were spending most of their resting time. I manipulated the code to focus on the land zone, using pixel locations to create a custom region within the video.
-
  ```python
- # Load the YOLOv11 model
- model = YOLO("yolo11n.pt")
-
- # Path to the dataset configuration YAML file
- dataset_config = '/content/Dataset/data.yaml' # Path to the YAML file

  # Train the model
  results = model.train(
@@ -74,8 +63,13 @@ results = model.train(
  iou=0.5
  )

- print(results)
  ```
  ---
  # Model Assessment
  ### Here are the metrics I used to assess the accuracy and performance of my model during training.
  ---
  # Model Selection
  ### YOLOv11 Object Detection Model
+ I chose to use YOLOv11 to create a custom object detection model. The model was trained to detect the presence of Southern Sea Otters in water and on land. I chose an object detection model because it would allow me to add additional features to my model through supplementary code snippets. The features would produce additional visualizations to better interpret movement patterns and behaviors from the dataset. I manipulated the code [found in our textbook](https://oceancv.org/) to better identify and correctly label the otters in lower resolution cam footage, as well as in both land and water. These are my model parameters.
  ```python

  # Train the model
  results = model.train(

  iou=0.5
  )

  ```
+
+ ### Additional Features: Ultralytics Objects Counting in Regions & Heatmap
+ Additionally, I utilized code from Ultralytics to produce an object counting in region video. I defined the 3 regions of interest (land, water, seclusion) through pixel coordinates over the input video, meaning the model was then able to differentiate between each zone and produce unique counts for each. I was able to manipulate the base code to track all 3 regions at once.
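The per-zone counting idea described above can be sketched without Ultralytics as a point-in-polygon test over pixel-coordinate zones. This is a minimal illustration, not the README's actual code: the `ZONES` coordinates, `point_in_polygon`, and `count_per_zone` names are all made up for the example.

```python
# Hypothetical pixel-coordinate zones for a 1280x720 frame (illustrative only;
# the real README defines its regions over the actual enclosure footage)
ZONES = {
    "land":      [(0, 0), (640, 0), (640, 360), (0, 360)],
    "water":     [(0, 360), (640, 360), (640, 720), (0, 720)],
    "seclusion": [(640, 0), (1280, 0), (1280, 720), (640, 720)],
}

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as vertex pairs?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge crosses the horizontal line through y, and the crossing is to
        # the right of the point -> toggle inside/outside
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def count_per_zone(centers):
    """Count detection centers (e.g. otter box midpoints) falling in each zone."""
    counts = {name: 0 for name in ZONES}
    for x, y in centers:
        for name, poly in ZONES.items():
            if point_in_polygon(x, y, poly):
                counts[name] += 1
                break
    return counts
```

Feeding each frame's detection-box centers through `count_per_zone` yields the same kind of per-region tallies the Ultralytics solution overlays on the video.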
+
+ I also utilized Ultralytics’ heatmap code to display a heatmap on the input video, showing which regions of the enclosure were most frequently occupied. I specifically focused the heatmap on the land zone, where the otters were spending most of their resting time. I manipulated the code to focus on the land zone, using pixel locations to create a custom region within the video.
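The core of a zone-restricted occupancy heatmap is just accumulating detection positions into a grid, ignoring points outside the zone of interest. A minimal sketch, assuming made-up land-zone bounds and cell size (`LAND`, `CELL`, and `accumulate` are hypothetical names, not from the README):

```python
CELL = 80               # pixels per heatmap cell (illustrative)
LAND = (0, 0, 640, 360) # hypothetical land-zone bounds: x0, y0, x1, y1

def accumulate(centers, cell=CELL, zone=LAND):
    """Return {(col, row): hits}, counting centers that fall inside the zone."""
    x0, y0, x1, y1 = zone
    grid = {}
    for x, y in centers:
        if x0 <= x < x1 and y0 <= y < y1:   # drop detections outside the land zone
            key = ((x - x0) // cell, (y - y0) // cell)
            grid[key] = grid.get(key, 0) + 1
    return grid
```

Cells with high counts mark the most frequently occupied spots; the Ultralytics solution does the same accumulation per frame and renders it as a color overlay on the video.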
+
  ---
  # Model Assessment
  ### Here are the metrics I used to assess the accuracy and performance of my model during training.