---
license: mit
---
# HockeyOrient SqueezeNet Model
<div style="background-color:#f8f9fa; color:black; border-left: 6px solid #28a745; padding: 10px; margin: 10px 0;">
🔗 This model is trained on the <span style="color:red">HockeyOrient</span> dataset.
- 📊 Access the dataset used for training here: <a href="https://huggingface.co/datasets/SimulaMet-HOST/HockeyOrient" style="color:blue;">https://huggingface.co/datasets/SimulaMet-HOST/HockeyOrient</a>
- 🚀 Try the model in action with our interactive <span style="color:red">Hugging Face Space</span>: <a href="https://huggingface.co/spaces/SimulaMet-HOST/HockeyOrient" style="color:blue;">https://huggingface.co/spaces/SimulaMet-HOST/HockeyOrient</a>
</div>
## Overview
This model is trained for ice hockey player orientation classification, classifying cropped player images into one of eight orientations: Top, Top-Right, Right, Bottom-Right, Bottom, Bottom-Left, Left, and Top-Left. It is based on the SqueezeNet architecture and achieves an F1 score of **75%**.
## Model Details
- **Architecture**: SqueezeNet (modified for 8-class classification).
- **Training Configuration**:
  - Learning rate: 1e-4
  - Batch size: 24
  - Epochs: 300
  - Weight decay: 1e-4
  - Dropout: 0.3
  - Early stopping: patience = 50
  - Augmentations: Color jitter (no rotation)
- **Performance**:
  - Accuracy: ~75%
  - F1 Score: ~75%
## Usage
1. Extract frames from a video using OpenCV.
2. Detect player bounding boxes with a YOLO model.
3. Crop each detected player and preprocess the crop with the following PyTorch transformations:
   - Resize to (224, 224)
   - Normalize with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
4. Classify the direction of each cropped player image using the SqueezeNet model:
```python
model.eval()  # disable dropout for deterministic inference
with torch.no_grad():
    output = model(image_tensor)                        # logits of shape (1, 8)
    direction_class = torch.argmax(output, dim=1).item()
```
<div style="background-color:#e7f3ff; color:black; border-left: 6px solid #0056b3; padding: 12px; margin: 10px 0;">
<span style="color:black; font-weight:bold;">📩 For any questions regarding this project, or to discuss potential collaboration and joint research opportunities, please contact:</span>
<ul style="color:black;">
<li><span style="font-weight:bold; color:black;">Mehdi Houshmand</span>: <a href="mailto:mehdi@forzasys.com" style="color:blue; text-decoration:none;">mehdi@forzasys.com</a></li>
<li><span style="font-weight:bold; color:black;">Cise Midoglu</span>: <a href="mailto:cise@forzasys.com" style="color:blue; text-decoration:none;">cise@forzasys.com</a></li>
<li><span style="font-weight:bold; color:black;">Pål Halvorsen</span>: <a href="mailto:paalh@simula.no" style="color:blue; text-decoration:none;">paalh@simula.no</a></li>
</ul>
</div>