Instructions for using cvtechniques/VideoGameHandGestures with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- ultralytics
How to use cvtechniques/VideoGameHandGestures with ultralytics:

```python
from ultralytics import YOLO

# Load the pretrained gesture model from the Hugging Face Hub
model = YOLO.from_pretrained("cvtechniques/VideoGameHandGestures")

# Run inference on an example image and save the annotated result
source = 'http://images.cocodataset.org/val2017/000000039769.jpg'
model.predict(source=source, save=True)
```

- Notebooks
- Google Colab
- Kaggle
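Each detection returned by the model carries a class label and a confidence score, which you typically reduce to a single gesture decision per frame. A minimal post-processing sketch under stated assumptions — the gesture names and `(class_name, confidence)` tuples below are illustrative, not the model's actual label set or output format:

```python
# Hypothetical post-processing: pick the single most confident gesture per frame,
# ignoring detections below a confidence threshold. The detection tuples are
# illustrative assumptions, not the model's real output structure.
def best_gesture(detections, conf_threshold=0.5):
    """detections: list of (class_name, confidence) pairs from one frame."""
    kept = [d for d in detections if d[1] >= conf_threshold]
    return max(kept, key=lambda d: d[1])[0] if kept else None

frame = [("rock", 0.82), ("scissors", 0.34), ("paper", 0.91)]
print(best_gesture(frame))  # paper
```

Returning `None` when nothing clears the threshold lets the caller distinguish "no confident gesture" from a real prediction.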
README.md

***

# Limitations and Biases

### Known Failure Cases

<img alt="Failure Cases" src="https://huggingface.co/cvtechniques/VideoGameHandGestures/resolve/main/failure_cases.png" width="1100" height="700" />

The model struggled with some of the photos from the RPS dataset, as these images contain complex backgrounds, partially occluded hands, or ambiguous gestures.
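One common way to soften the ambiguous-gesture failures described above is to reject low-confidence predictions instead of forcing a label. A hypothetical sketch — the threshold value and label names are assumptions, not part of the released model:

```python
# Illustrative mitigation, not part of the released model: map low-confidence
# detections to "unknown" rather than committing to an ambiguous gesture.
def label_with_fallback(class_name, confidence, threshold=0.6):
    return class_name if confidence >= threshold else "unknown"

print(label_with_fallback("rock", 0.85))      # rock
print(label_with_fallback("scissors", 0.41))  # unknown
```

Downstream game logic can then ignore "unknown" frames or prompt the player to repeat the gesture.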