Instructions to use cvtechniques/VideoGameHandGestures with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - ultralytics

How to use cvtechniques/VideoGameHandGestures with ultralytics:

```python
from ultralytics import YOLO

model = YOLO.from_pretrained("cvtechniques/VideoGameHandGestures")
source = 'http://images.cocodataset.org/val2017/000000039769.jpg'
model.predict(source=source, save=True)
```

- Notebooks
  - Google Colab
  - Kaggle
---
language: en
license: mit
tags:
- computer-vision
- object-detection
- yolov8
- gesture-recognition
- gaming
pipeline_tag: object-detection
library_name: ultralytics
---

# Model Description

### Overview

This model detects hand gestures for use as input controls for video games. It uses object detection to recognize specific hand poses from a webcam or standard camera and translates them into game actions.