# 🏎 Drift Car Tracking & Zone Analysis Model

## πŸ“Œ Overview

This project is a computer vision model designed to **track drifting cars and quantify driver performance** using aerial (drone) footage. The system detects and tracks vehicles during tandem runs and measures how they interact with predefined drift zones.

The current implementation is a **proof of concept**, developed specifically for footage from **Evergreen Speedway in Monroe, Washington**.

---

## 🧠 Model Description

This model uses a YOLO-based framework to:

- Detect drift cars in tandem runs  
- Classify vehicles as:
  - `leader`
  - `chaser`
- Classify zones as:
  - `FrontZone`
  - `RearZone`
- Track vehicles across frames  
- Enable downstream analysis of zone interaction and timing  
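
Frame-to-frame tracking of the detected cars can be run with Ultralytics' built-in tracker. A minimal sketch, where `drift_cars.pt` and `tandem_run.mp4` are hypothetical file names, not the project's actual files:

```shell
# Track leader/chaser detections across frames of a tandem run.
# Weight and video file names are placeholders.
yolo track model=drift_cars.pt source=tandem_run.mp4 imgsz=1024 tracker=bytetrack.yaml
```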

### Training Details

- Fine-tuned from a pretrained YOLO model  
- Two custom, manually annotated datasets:
  - **Cars:** bounding boxes for leader and chaser  
  - **Zones:** segmentation masks for drift zones  

Zone interaction is computed using geometric methods (polygon overlap + time tracking), not learned directly by the model.
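
The geometric approach can be sketched in plain Python. This is an illustrative sketch, not the project's code: the function names, zone coordinates, and 30 fps rate are assumptions, and it simplifies "polygon overlap" to testing whether the car's bounding-box centre lies inside the zone polygon.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside polygon (a list of (px, py) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def time_in_zone(boxes_per_frame, zone_polygon, fps=30.0):
    """Seconds the car's box centre spends inside a static zone polygon.

    boxes_per_frame: one (x1, y1, x2, y2) box per frame, or None on frames
    where the car was not detected (e.g. smoke occlusion).
    """
    frames_inside = 0
    for box in boxes_per_frame:
        if box is None:
            continue
        cx = (box[0] + box[2]) / 2.0
        cy = (box[1] + box[3]) / 2.0
        if point_in_polygon(cx, cy, zone_polygon):
            frames_inside += 1
    return frames_inside / fps


# Illustrative values: a square zone and three frames of detections.
zone = [(100, 100), (300, 100), (300, 300), (100, 300)]
boxes = [(150, 150, 250, 250),  # centre (200, 200) -> inside
         (400, 400, 500, 500),  # centre (450, 450) -> outside
         None]                  # missed detection
print(round(time_in_zone(boxes, zone), 4))  # 0.0333 (1 frame at 30 fps)
```

A fuller implementation could replace the centre test with true polygon intersection (e.g. box-polygon overlap area), but the time-accumulation logic stays the same.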

---

## 🎯 Intended Use

Designed for:

- Formula Drift-style competitions  
- Grassroots drifting events  
- Experimental motorsports analytics  

### Example Applications

- Measuring time spent in drift zones  
- Analyzing tandem behavior (leader vs. chaser)  
- Supporting judging with quantitative insights  
- Enhancing broadcast overlays  

---

## πŸ“Š Training Data

### πŸ“ Source

- Formula Drift Seattle 2025 PRO, Round 6 - Top 32  
https://www.youtube.com/watch?v=MuD-uxGQnrg&t=879s  

---

### πŸ”’ Dataset Size

#### πŸš— Cars
- 1,204 original β†’ 2,724 augmented  
- Split: 84% train / 12% val / 5% test  

#### 🟣 Zones
- 724 original β†’ 1,666 augmented  
- Split: 85% train / 9% val / 6% test  

---

### 🏷 Class Distribution

| Class      | Count |
|-----------|------|
| Leader    | 1,204 |
| Chaser    | 1,201 |
| FrontZone | 137   |
| RearZone  | 588   |

---

### ✏️ Annotation

- Fully manual annotation  
- Consistent labeling across frames  
- Handled occlusion, overlap, and tandem proximity  

---

### πŸ”§ Augmentation

- Rotation: ±8°
- Saturation: ±15%
- Brightness: ±10%
- Blur: up to 2 px
- Mosaic: 0.2
- Scale: ±15%
- Translate: ±5%
- Hue shift (`hsv_h`): 0.01
  
---
## βš™οΈ Training Procedure
- **Framework:** Ultralytics YOLO  
- **Models:**
  - Cars: YOLO26s  
  - Zones: YOLO26s-seg  
### πŸ’» Hardware
- NVIDIA A100 (Google Colab)
### ⏱ Training Time
- Cars: 80 epochs (~42 min)  
- Zones: 140 epochs (~1h 14min)  
### βš™οΈ Settings
- Batch size: 16
- Image size: 1024
- Workers: 8
- Cls: 2.5 (Only for Object Detection)
- No early stopping  
- Default preprocessing  
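
The settings above map onto the Ultralytics CLI roughly as follows. This is a hedged reconstruction, not the project's actual commands: the dataset YAML names are placeholders, the weight-file names are inferred from the model names, and `patience=0` is one way to disable early stopping in recent Ultralytics releases.

```shell
# Car detector: 80 epochs, settings as listed in this card. "cars.yaml" is a placeholder.
yolo detect train model=yolo26s.pt data=cars.yaml \
    epochs=80 imgsz=1024 batch=16 workers=8 cls=2.5 patience=0

# Zone segmentation model: 140 epochs, no cls override. "zones.yaml" is a placeholder.
yolo segment train model=yolo26s-seg.pt data=zones.yaml \
    epochs=140 imgsz=1024 batch=16 workers=8 patience=0
```
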
---
## πŸ“ˆ Evaluation Results

### πŸš— Car Model

| Metric    | Value  |
|-----------|--------|
| Precision | 0.9904 |
| Recall    | 0.9792 |
| mAP@50    | 0.9882 |
| mAP@50-95 | 0.8937 |

### 🟣 Zone Model

| Metric    | Value  |
|-----------|--------|
| Precision | 0.9919 |
| Recall    | 0.9952 |
| mAP@50    | 0.9948 |
| mAP@50-95 | 0.7064 |
---
## πŸ“‰ Key Visualizations

**Car Results**

<img src='results_cars.png' width="800">
<img src='confusion_matrix_cars.png' width="800">

**Zone Results**

<img src='results_zone.png' width="800">
<img src='confusion_matrix_zone.png' width="800">
---
## 🧠 Performance Analysis

### πŸš— Cars

**Strengths:**

- Very high precision and recall  
- Reliable detection and classification  
- Strong tracking foundation  

**Limitations:**

- Smoke occlusion affects detection  
- Close tandem overlap can cause confusion  
- Limited generalization beyond training conditions  
---
### 🟣 Zones

- High detection accuracy (mAP@50)  
- Lower boundary precision (mAP@50-95)  

**Implication:**

- Good at identifying zones  
- Less accurate for exact boundaries β†’ impacts timing precision  

**Note:** Since zones are static, polygon-based methods may be more reliable than per-frame segmentation.
---
## ⚠️ Limitations and Biases

### 🚨 Failure Cases

- Heavy smoke β†’ missed or unstable detections  
- Close tandem β†’ overlap confusion  
- Camera motion β†’ inconsistent zone alignment  
- Edge-of-frame β†’ partial detections  

---

### πŸ“‰ Weak Areas

- Zone boundary precision  
- Leader vs. chaser ambiguity in tight proximity  

---

### πŸ“Š Data Bias

- Single track (Evergreen Speedway)  
- Single event and lighting condition  
- Fixed drone perspective  

---

### 🌦 Environmental Limits

Performance may degrade with:

- Smoke, blur, or occlusion  
- Lighting changes  
- Drone altitude variation  
- Camera movement  

---

### 🚫 Not Suitable For

- Official judging systems  
- General vehicle detection  
- Different tracks without recalibration  
- Other motorsports without adaptation  

---

### πŸ“ Dataset Limitations

- Underrepresented zone classes  
- Limited diversity (track, cars, conditions)  
- Few edge-case scenarios (spins, collisions)  

---

## 🏁 Summary

This model performs strongly within a controlled environment but is highly specialized. It should be viewed as a **proof-of-concept system** for drift analytics rather than a fully generalized or production-ready solution.