Instructions to use cvtechniques/VideoGameHandGestures with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- ultralytics
How to use cvtechniques/VideoGameHandGestures with ultralytics:
```python
from ultralytics import YOLO

# Load the model from the Hugging Face Hub
model = YOLO.from_pretrained("cvtechniques/VideoGameHandGestures")

# Run inference on a sample image and save the annotated result
source = 'http://images.cocodataset.org/val2017/000000039769.jpg'
model.predict(source=source, save=True)
```

- Notebooks
- Google Colab
- Kaggle
Update README.md
README.md CHANGED

```diff
@@ -89,7 +89,7 @@ Dataset availability: https://universe.roboflow.com/b-data-497-ws/hand-gesture-c
 ***
 # Training Procedure
 ### Framework
-Training was performed using
+Training was performed in Google Colab using altered Python code from a YOLOv11 training run. Code was taken and altered for YOLOv8n from [here](https://oceancv.org/book/TrainandDeployObj_YOLO.html).
 
 ### Model Architecture
 Base model: YOLOv8n (Nano)
```