# NFL Play Scoring Inference Demo

This repository demonstrates a lightweight video-classification inference pipeline that uses PyTorchVideo's X3D-M model to score 2-second NFL play clips for a "start-of-play" vs. "end-of-play" detection task.

## Overall Architecture

This inference pipeline is part of a larger AWS-based NFL play analysis system. The X3D model component (this repository) fits into the following architecture:

```plaintext
+---------------+         +---------------------+
| Mac Capture   | ----->  | Amazon S3:          |
| Client        |         | - ingress-clips     |
+-------+-------+         +---------+-----------+
                                    |
                                    | (PutEvent)
                                    v
+-------+-------+           +-------+---------+
| Step         |<----------| EventBridge     |
| Functions    |           +-------+---------+
+--+----+------+                   |
   |    |                          v
   |    |                   +------v-------+
   |    +------------------>| SageMaker:   |
   |    Detects objects     | YOLO11       |
   |                        +------+------+
   |                               |
   |                               v
   |                         +-----+------+
   |                         | Amazon S3: |
   |                         | processing |
   |                         +-----+------+
   |                               |
   |                         Classify via
   |                               |
   |                               v
   |                         +-----+------+
   |                         | SageMaker: |
   +-------------------------| X3D        |
                             +-----+------+
                                   |
                             +-----+------+
                             |     |      |
                             v     |      v
                      +------+--+  |  +---+--------+
                      | Amazon  |  |  | SageMaker: |
                      | S3:     |  |  | Whisper    |
                      | output- |  |  | Audio      |
                      | plays   |  |  | Transcript |
                      +---------+  |  +-----+------+
                                   |        |
                                   |        v
                                   |  +-----+------+
                                   |  | Amazon S3: |
                                   |  | play-      |
                                   |  | transcripts|
                                   |  +------------+
                                   v
                             +-----+------+
                             | DynamoDB:  |
                             | Metadata   |
                             +------------+
```

**This repository implements both the X3D video classification and Whisper audio transcription components** that run on SageMaker to analyze video clips for play scoring characteristics and NFL commentary transcription. In the production architecture, Whisper transcription is applied only to identified plays rather than all video segments.

### Local Processing Pipeline

The optimized processing pipeline separates video and audio analysis so that the expensive audio transcription step runs only on clips identified as plays:

```mermaid
graph TD
    A[Video Clips] --> B[Phase 1: Video Analysis]
    B --> C[X3D Classification]
    B --> D[NFL Play State Analysis]
    B --> E[Play Boundary Detection]
    
    E --> F{Play Detected?}
    F -->|Yes| G[Phase 2: Audio Analysis<br/>Play Clips Only]
    F -->|No| H[Skip Audio]
    
    G --> I[Whisper Transcription]
    G --> J[NFL Sports Corrections]
    
    C --> K[classification.json]
    D --> L[play_analysis.json]
    E --> L
    I --> M[transcripts.json<br/>Play Audio Only]
    J --> M
    H --> N[No Transcript]
    
    style B fill:#e1f5fe
    style G fill:#f3e5f5
    style F fill:#fff3e0
    style K fill:#c8e6c9
    style L fill:#c8e6c9
    style M fill:#c8e6c9
```
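
The two-phase split in the diagram can be sketched in plain Python. This is a hypothetical outline, not the actual `run_all_clips.py` code; `classify_video` and `transcribe_audio` stand in for the real X3D and Whisper calls.

```python
# Hypothetical sketch of the two-phase pipeline: video analysis for every
# clip first, audio transcription only for clips flagged as plays.

def run_pipeline(clips, classify_video, transcribe_audio):
    """classify_video(clip) -> (scores, is_play); transcribe_audio(clip) -> str."""
    classification, play_flags, transcripts = {}, {}, {}

    # Phase 1: video analysis for every clip.
    for clip in clips:
        scores, is_play = classify_video(clip)
        classification[clip] = scores
        play_flags[clip] = is_play

    # Phase 2: audio transcription for play clips only; non-plays get "".
    for clip in clips:
        transcripts[clip] = transcribe_audio(clip) if play_flags[clip] else ""

    return classification, transcripts
```

Keeping the phases separate is what makes `--video-only` and `--audio-only` runs possible: each phase reads and writes its own JSON file.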

## Repository Structure

```plaintext
├── config.py               # 🔧 Central configuration, directories, and constants
├── video.py                # 🎬 Video classification and NFL play analysis
├── audio.py                # 🎙️ Audio transcription with NFL enhancements
├── yolo_processor.py       # 🎯 YOLO object detection preprocessing
├── inference.py            # 🔄 Backward compatibility interface
├── run_all_clips.py        # 🚀 Main processing pipeline orchestrator
├── speed_test.py           # ⚡ Performance benchmarking tools
├── data/                   # 📼 Put your 2s video clips here (.mov or .mp4)
├── segments/               # 📁 Output directory for ContinuousScreenSplitter.swift
├── ContinuousScreenSplitter.swift # 📱 Swift tool to capture screen and split into segments
├── kinetics_classnames.json # 📋 Kinetics-400 label map (auto-downloaded on first run)
├── requirements.txt        # 📦 Python dependencies
├── classification.json     # 📊 Output: video classification results
├── transcripts.json        # 📝 Output: audio transcription results
├── play_analysis.json      # 🏈 Output: NFL play analysis and boundaries
└── ARCHITECTURE.md         # 📖 Detailed architecture documentation
```

## Prerequisites

* **Python 3.8+**
* **ffmpeg** installed (for clip generation)
* **git**
* **Miniconda or Anaconda** (recommended for macOS compatibility)
* **Xcode/Swift** (for screen capture tool)
* **BlackHole audio driver** (recommended for system audio capture)

## Setup

### 1. Clone the repo

```bash
git clone https://huggingface.co/datasets/rocket-wave/hf-video-scoring.git
cd hf-video-scoring
```

### 2. Create & activate environment

**Using Conda (recommended on macOS Intel/Apple Silicon):**

```bash
conda create -n nfl-play python=3.11 -y
conda activate nfl-play
conda install -y -c pytorch -c conda-forge pytorch torchvision torchaudio cpuonly
pip install pytorchvideo huggingface_hub ffmpeg-python transformers openai-whisper ultralytics
```

**Using pip in a virtualenv:**

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
```

## Usage

### 1. Generate clips with screen capture (optional)

If you want to capture live screen content and automatically split it into 2-second segments:

```bash
swift ContinuousScreenSplitter.swift
```

This will:
- Capture your screen in real-time (1280x720, 30fps)
- Include system audio (requires BlackHole or similar audio driver)
- Automatically split into 2-second segments: `segment_000.mov`, `segment_001.mov`, etc.
- Save files to the `segments/` directory
- Run continuously until stopped with Ctrl+C

### 2. Place your clips

Copy your 2-second video segments into the `data/` directory (or use `segments/` from screen capture). Supported formats: `.mov`, `.mp4`.

### 3. Score a single clip

```bash
python inference.py data/segment_000.mov
```

This will:
- Run X3D video classification with NFL play state analysis
- Generate high-quality audio transcription using Whisper-Medium with NFL enhancements
- Print results to console and save to `classification.json` and `transcripts.json`

**🏗️ Modular Architecture**: The system now uses a clean modular design:
- `video.py`: Video classification and play analysis
- `audio.py`: Audio transcription with NFL corrections  
- `config.py`: Centralized configuration management
- `inference.py`: Backward compatibility interface

### 4. Process clips with optimized pipeline

The pipeline is now optimized for continuous processing with separate video and audio phases:

**🚀 Default: YOLO + video analysis (auto-enabled for segments):**
```bash
python run_all_clips.py --video-only
```

**🎙️ Audio-only processing (add transcripts later):**
```bash
python run_all_clips.py --audio-only
```

**🎬 Full pipeline (YOLO + video + audio):**
```bash
python run_all_clips.py
```

**🧪 Testing with limited clips:**
```bash
python run_all_clips.py --video-only --max-clips 5
```

**⚡ Skip YOLO for faster processing:**
```bash
python run_all_clips.py --no-yolo --video-only
```

The system:
* Processes video classification and play analysis first (Phase 1)
* Then processes audio transcription in batch (Phase 2, if enabled)
* Saves incremental results as processing continues
* Handles errors gracefully and provides detailed progress reporting
* Saves results to:
  - `classification.json` (video classification scores)
  - `transcripts.json` (professional-quality NFL commentary transcriptions)
  - `play_analysis.json` (NFL play state analysis and boundary detection)
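
The boundary detection feeding `play_analysis.json` can be pictured as finding rising and falling edges in per-clip play flags. This is a hypothetical sketch of the idea, not the actual `video.py` implementation:

```python
# Hypothetical sketch of play boundary detection: given an ordered list of
# per-clip "in play" flags (one per 2-second segment), report (start, end)
# segment-index pairs for each detected play.

def detect_play_boundaries(in_play):
    plays, start = [], None
    for i, flag in enumerate(in_play):
        if flag and start is None:
            start = i                      # rising edge: play starts
        elif not flag and start is not None:
            plays.append((start, i - 1))   # falling edge: play ended
            start = None
    if start is not None:                  # play still open at end of footage
        plays.append((start, len(in_play) - 1))
    return plays
```

With 2-second segments, a `(start, end)` pair maps directly to a time window of `start * 2` to `(end + 1) * 2` seconds.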

## Complete Workflow Examples

### For Real-time/Continuous Processing:
```bash
# 1. Capture live screen content
swift ContinuousScreenSplitter.swift

# 2. Process with automatic YOLO object detection (default behavior)
source .venv/bin/activate
python run_all_clips.py --video-only

# 3. View immediate NFL play analysis
cat play_analysis.json | jq '.summary'

# 4. Add transcripts later when time allows
python run_all_clips.py --audio-only

# 5. View complete analysis
cat transcripts.json | jq '.[] | select(. != "")'
```

### For Complete Analysis:
```bash
# 1. Full pipeline processing 
python run_all_clips.py

# 2. View comprehensive results
cat play_analysis.json | jq '.summary'
cat transcripts.json | jq '.[] | select(. != "")'
```

### For Development/Testing:
```bash
# Test with just a few clips
python run_all_clips.py --video-only --max-clips 3

# Speed testing
python speed_test.py
```

This workflow captures live NFL content, automatically segments it, and then analyzes each segment for both visual actions and audio content.

## Output

* **Console logs** with classification results, transcripts, `[INFO]` and `[ERROR]` messages.
* **classification.json**:

```json
{
  "segment_001.mov": [ ["bobsledding", 0.003], ["archery", 0.003], ... ],
  "segment_002.mov": [],  // if failed or skipped
  "segment_003.mov": [ ... ]
}
```

* **transcripts.json**:

```json
{
  "segment_001.mov": "I can tell you that Lamar Jackson right now is",
  "segment_002.mov": "is the sixth best quarterback in the NFL",
  "segment_003.mov": "He's basically like an extra line"
}
```

## YOLO Object Detection Integration

The system now includes **YOLO11** integration for enhanced video analysis with object detection:

### 🎯 **YOLO Enhancement Features**
* **Player Detection**: Identifies football players, referees, and coaches
* **Ball Tracking**: Detects footballs and other sports equipment  
* **Spatial Analysis**: Provides bounding boxes for key game elements
* **Visual Annotations**: Adds detection overlays while preserving original audio
* **Selective Processing**: Only applied to video classification, audio uses original clips

### 🔧 **YOLO Integration Workflow**
1. **Phase 0**: Raw clips in `/segments/` → YOLO processing → `/segments/yolo/`
2. **Phase 1**: Video classification uses YOLO-annotated clips
3. **Phase 2**: Audio transcription uses original clips (preserves quality)
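
The clip routing implied by these phases (annotated copy in `segments/yolo/` for video, original clip for audio) can be sketched as a small path helper. This is a hypothetical illustration; the actual `yolo_processor.py` interface may differ.

```python
from pathlib import Path

# Hypothetical helper mirroring the Phase 0 layout: a raw clip in segments/
# maps to an annotated copy of the same name in segments/yolo/.

def yolo_output_path(clip_path, yolo_dir="segments/yolo"):
    return Path(yolo_dir) / Path(clip_path).name

def clip_for_phase(clip_path, phase):
    """Video classification (phase 1) reads the YOLO-annotated clip;
    audio transcription (phase 2) keeps the original to preserve quality."""
    return yolo_output_path(clip_path) if phase == 1 else Path(clip_path)
```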

### ⚡ **Performance Considerations**
* **YOLO Model**: Nano size for speed vs. accuracy balance
* **Parallel Processing**: Video and audio pipelines remain independent
* **Auto-Enabled**: Automatically enabled for `segments` directory
* **Control Flags**: Use `--no-yolo` to disable or `--use-yolo` to force enable

## Audio Transcription Features

The system uses **Whisper-Medium** with NFL-specific enhancements for superior audio transcription:

### 🎯 **NFL-Optimized Transcription**
* **Advanced Model**: OpenAI Whisper-Medium (769M parameters) for professional-quality transcription
* **Sports Vocabulary**: 80+ NFL-specific terms including teams, positions, plays, and penalties
* **Smart Corrections**: Automatic correction of common football terminology mishears
* **Player Recognition**: Accurately transcribes player names and team references
* **Commentary Context**: Optimized for NFL broadcast commentary and analysis

### 🔧 **Audio Enhancement Pipeline**
* **Noise Filtering**: High-pass (80Hz) and low-pass (8kHz) filters to remove audio artifacts
* **Audio Normalization**: Automatic level adjustment for consistent processing
* **Silence Detection**: Skips transcription for very quiet or short audio segments
* **Error Handling**: Graceful fallback for corrupted or problematic audio
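
The filter chain described above maps onto a single ffmpeg invocation. A sketch of the command construction follows; the 80 Hz/8 kHz cutoffs come from this README, while `loudnorm` as the normalization filter and the 16 kHz mono output are assumptions (the real filters live in `load_audio()`).

```python
# Sketch of an ffmpeg command for the audio clean-up described above:
# high-pass at 80 Hz, low-pass at 8 kHz, plus loudness normalization.
# (loudnorm is an assumption; the real load_audio() chain may differ.)

def build_ffmpeg_audio_cmd(src, dst, sample_rate=16000):
    filters = "highpass=f=80,lowpass=f=8000,loudnorm"
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-vn",                    # drop the video stream
        "-af", filters,           # apply the audio filter chain
        "-ar", str(sample_rate),  # Whisper expects 16 kHz audio
        "-ac", "1",               # mono
        dst,
    ]
```

The resulting list can be handed to `subprocess.run(...)` directly, which avoids shell-quoting issues with file names.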

### 📊 **Quality Examples**
```
Before: "TA N ONE   THE SECOND"
After:  "I can tell you that Lamar Jackson right now is"

Before: "FAY FOOLISH" 
After:  "is the sixth best quarterback in the NFL"
```

## Video Classification Customization

* **Model choice**: in `inference.py`, set `model_name` to one of:

  * `x3d_xs`, `x3d_s`, `x3d_m`, `x3d_l` (currently using `x3d_m`)
* **Clip length**: default is 2 seconds; adjust `video.get_clip(0, 2.0)`
* **Sampling & crop**: modify `preprocess()` in `inference.py` to change `num_samples` or spatial size.
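
Uniform temporal subsampling just picks `num_samples` frame indices spread evenly across the clip. A minimal index computation, illustrating the same idea as PyTorchVideo's `UniformTemporalSubsample` transform (which truncates rather than rounds, so exact indices may differ by one):

```python
# Minimal sketch of uniform temporal subsampling: choose `num_samples`
# frame indices evenly spaced over a clip of `total_frames` frames.

def uniform_sample_indices(total_frames, num_samples):
    if num_samples <= 1:
        return [0]
    step = (total_frames - 1) / (num_samples - 1)  # spacing between samples
    return [round(i * step) for i in range(num_samples)]
```

For a 2-second clip at 30 fps (60 frames) and `num_samples=13`, this yields indices from frame 0 to frame 59, which satisfies the X3D temporal-kernel requirement noted in Troubleshooting.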

## Audio Transcription Customization

* **Model choice**: in `inference.py`, change `model` from `whisper-medium` to:
  * `whisper-base` (faster, less accurate - not recommended for complex broadcasts)
  * `whisper-large` (slower, highest accuracy)
* **Sports vocabulary**: Modify `NFL_SPORTS_CONTEXT` list to add custom terms
* **Audio filters**: Adjust FFmpeg filters in `load_audio()` for different audio quality
* **Corrections**: Add custom corrections in `apply_sports_corrections()`
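
The correction step can be as simple as a case-insensitive lookup table of common mishears. A hypothetical sketch (the real `apply_sports_corrections()` and its 80+ term table live in `audio.py`, and the entries below are invented examples):

```python
import re

# Hypothetical sketch of the sports-corrections pass: replace common
# Whisper mishears of NFL terms via a lookup table. Entries are examples;
# the real table in audio.py is much larger.

CORRECTIONS = {
    "lamar jackson": "Lamar Jackson",  # fix player-name capitalization
    "touch down": "touchdown",
    "quarter back": "quarterback",
}

def apply_sports_corrections(text, corrections=CORRECTIONS):
    for wrong, right in corrections.items():
        text = re.sub(re.escape(wrong), right, text, flags=re.IGNORECASE)
    return text
```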

### Pipeline Customization

* **Processing phases**: Use `--video-only` for speed-critical applications
* **Batch sizes**: Modify `VIDEO_SAVE_INTERVAL` and `AUDIO_SAVE_INTERVAL` in `config.py`
* **Testing**: Use `--max-clips N` to limit processing for development
* **File output**: Customize output file names with `--classification-file`, `--transcript-file`, `--play-analysis-file`

### Modular Architecture Benefits

* **🔧 Centralized Configuration**: All directories, paths, and settings in `config.py`
* **📁 Flexible Directory Structure**: Configurable input/output/cache directories
* **🎯 Focused Development**: Separate modules for video, audio, and configuration
* **🧪 Better Testing**: Individual modules can be tested in isolation
* **⚡ Performance Tuning**: Optimize video and audio processing independently
* **📈 Scalability**: Add new models or sports without affecting existing code
* **🔄 Backward Compatibility**: Existing scripts continue to work unchanged

### Configurable Directory Structure

All directories are now configurable through `config.py`:

```python
# Input directories
DEFAULT_DATA_DIR = "data"                    # Default video clips
DEFAULT_SEGMENTS_DIR = "segments"            # Screen capture segments  
DEFAULT_YOLO_OUTPUT_DIR = "segments/yolo"    # YOLO processed clips

# Cache directories (None = use system defaults)
TORCH_HUB_CACHE_DIR = None                   # PyTorch model cache
HUGGINGFACE_CACHE_DIR = None                 # HuggingFace model cache
DEFAULT_TEMP_DIR = None                      # Temporary processing
```

## Troubleshooting

### Video Issues
* **InvalidDataError: moov atom not found**: skip the damaged clip—`run_all_clips.py` logs and continues.
* **Dimension or kernel size errors**: ensure `num_samples` >= model's temporal kernel (≥13 for X3D).
* **PyTorch installation**: use Conda on macOS for best compatibility.

### Audio Transcription Issues
* **Slow transcription**: Whisper-Medium is a large model (~1.5 GB download) but provides optimal accuracy for NFL broadcasts. Use `--video-only` mode for speed-critical applications.
* **Poor audio quality**: Ensure clean audio input. The system filters noise but very poor audio may still fail.
* **Memory issues**: Whisper-Medium requires ~4GB RAM. For lower-memory systems, edit `inference.py` to use `whisper-base`.
* **Language detection**: The system forces English. For other languages, modify the `language` parameter.
* **Processing interrupted**: Use `--audio-only` to resume transcription after video processing is complete.

## Performance

### Pipeline Processing Modes:

| Mode | Speed | Use Case |
|------|-------|----------|
| **Video-only** | ~2.3s/clip | Real-time play detection |
| **Audio-only** | ~13s/clip | Adding transcripts to existing analysis |
| **Full pipeline** | ~16s/clip | Complete analysis |

### Processing Time Estimates:
- **10 clips**: 23s (video-only) / 3 minutes (full)
- **100 clips**: 4 minutes (video-only) / 27 minutes (full) 
- **1 hour of footage**: 1 hour (video-only) / 8 hours (full)
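
These estimates follow directly from the per-clip rates in the table above (and one hour of footage is 1800 two-second clips); a small helper reproduces them:

```python
# Reproduce the processing-time estimates from the per-clip rates above.
# With 2-second clips, one hour of footage is 1800 clips.

RATES = {"video": 2.3, "audio": 13.0, "full": 16.0}  # seconds per clip

def estimate_minutes(num_clips, mode="video"):
    return num_clips * RATES[mode] / 60.0
```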

## Command Reference

```bash
# Basic usage (auto-enables YOLO for segments directory)
python run_all_clips.py                                    # Full pipeline with YOLO
python run_all_clips.py --video-only                       # YOLO + video analysis only
python run_all_clips.py --audio-only                       # Add transcripts later

# YOLO control
python run_all_clips.py --no-yolo --video-only             # Skip YOLO for speed
python run_all_clips.py --use-yolo --input-dir data        # Force YOLO for other directories

# Testing and development  
python run_all_clips.py --max-clips 5                      # Limit to 5 clips
python run_all_clips.py --video-only --max-clips 3        # Fast test with 3 clips
python speed_test.py                                       # Performance benchmarking

# Custom files and directories
python run_all_clips.py --input-dir data --no-yolo         # Process data directory without YOLO
python run_all_clips.py --classification-file my_results.json  # Custom output file

# Single clip analysis
python inference.py data/segment_001.mov                   # Analyze one clip
```

## Next Steps

* **Fine-tune** X3D on your own "start" vs "end" NFL play dataset.
* **Real-time integration**: Use `--video-only` mode for live processing, batch audio transcription offline.
* **Expand** to other sports by modifying the sports vocabulary and play state logic.
* **GPU acceleration**: Add CUDA support for 3-5x faster processing.
* **Parallel processing**: Process multiple clips simultaneously for large datasets.

---

*Adapted for NFL video segment scoring by rocket-wave.*