---
license: mit
---

# HockeyOrient SqueezeNet Model

<div style="background-color:#f8f9fa; color:black; border-left: 6px solid #28a745; padding: 10px; margin: 10px 0;">

This model is trained on the <span style="color:red">HockeyOrient</span> dataset.

- Access the dataset used for training here: <a href="https://huggingface.co/datasets/SimulaMet-HOST/HockeyOrient" style="color:blue;">https://huggingface.co/datasets/SimulaMet-HOST/HockeyOrient</a>
- Try the model in action with our interactive <span style="color:red">Hugging Face Space</span>: <a href="https://huggingface.co/spaces/SimulaMet-HOST/HockeyOrient" style="color:blue;">https://huggingface.co/spaces/SimulaMet-HOST/HockeyOrient</a>

</div>

## Overview
This model classifies the orientation of ice hockey players: given a cropped player image, it predicts one of eight orientations: Top, Top-Right, Right, Bottom-Right, Bottom, Bottom-Left, Left, and Top-Left. It is based on the SqueezeNet architecture and achieves an F1 score of **75%**.
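The eight orientations above can be mapped to class indices with a simple lookup. The ordering below is a hypothetical assumption that follows the list in the Overview; verify it against the dataset's own label mapping before use.

```python
# Hypothetical label ordering, assumed to match the list above;
# check against the HockeyOrient dataset's label definitions.
ORIENTATIONS = ["Top", "Top-Right", "Right", "Bottom-Right",
                "Bottom", "Bottom-Left", "Left", "Top-Left"]

def class_to_orientation(idx: int) -> str:
    """Translate a predicted class index into a human-readable orientation."""
    return ORIENTATIONS[idx]

print(class_to_orientation(2))  # Right
```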
## Model Details
- **Architecture**: SqueezeNet (modified for 8-class classification).
- **Training Configuration**:
  - Learning rate: 1e-4
  - Batch size: 24
  - Epochs: 300
  - Weight decay: 1e-4
  - Dropout: 0.3
  - Early stopping: patience = 50
  - Augmentations: Color jitter (no rotation)
- **Performance**:
  - Accuracy: ~75%
  - F1 Score: ~75%

## Usage
1. Extract frames from a video using OpenCV.
2. Detect player bounding boxes with a YOLO model.
3. Crop player images, resize them to 224x224, and preprocess with the given PyTorch transformations:
   - Resize to (224, 224)
   - Normalize with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
4. Classify the orientation of each cropped player image using the SqueezeNet model:
```python
model.eval()  # disable dropout for inference
with torch.no_grad():
    output = model(image_tensor)  # logits of shape (1, 8)
    direction_class = torch.argmax(output, dim=1).item()
```

<div style="background-color:#e7f3ff; color:black; border-left: 6px solid #0056b3; padding: 12px; margin: 10px 0;">

<span style="color:black; font-weight:bold;">For any questions regarding this project, or to discuss potential collaboration and joint research opportunities, please contact:</span>

<ul style="color:black;">
<li><span style="font-weight:bold; color:black;">Mehdi Houshmand</span>: <a href="mailto:mehdi@forzasys.com" style="color:blue; text-decoration:none;">mehdi@forzasys.com</a></li>
<li><span style="font-weight:bold; color:black;">Cise Midoglu</span>: <a href="mailto:cise@forzasys.com" style="color:blue; text-decoration:none;">cise@forzasys.com</a></li>
<li><span style="font-weight:bold; color:black;">Pål Halvorsen</span>: <a href="mailto:paalh@simula.no" style="color:blue; text-decoration:none;">paalh@simula.no</a></li>
</ul>

</div>