# NFL Play Scoring Inference Demo
This repository demonstrates a lightweight video classification inference pipeline using PyTorchVideo's X3D-M model to score 2-second NFL play clips as part of a "start-of-play" or "end-of-play" detection task.
## Overall Architecture
This inference pipeline is part of a larger AWS-based NFL play analysis system. The X3D model component (this repository) fits into the following architecture:
```mermaid
graph TD
    A[Mac Capture Client] --> B[Amazon S3: ingress-clips]
    B -->|PutEvent| C[EventBridge]
    C --> D[Step Functions]
    D -->|Detects objects| E[SageMaker: YOLO11]
    E --> F[Amazon S3: processing]
    F -->|Classify| G[SageMaker: X3D]
    G --> H[Amazon S3: output-plays]
    G --> I[SageMaker: Whisper Audio Transcript]
    I --> J[Amazon S3: play-transcripts]
    D --> K[DynamoDB: Metadata]
```
This repository implements both the X3D video classification and Whisper audio transcription components that run on SageMaker to analyze video clips for play scoring characteristics and NFL commentary transcription. In the production architecture, Whisper transcription is applied only to identified plays rather than all video segments.
## Local Processing Pipeline
The optimized processing pipeline separates video and audio analysis for maximum efficiency:
```mermaid
graph TD
    A[Video Clips] --> B[Phase 1: Video Analysis]
    B --> C[X3D Classification]
    B --> D[NFL Play State Analysis]
    B --> E[Play Boundary Detection]
    E --> F{Play Detected?}
    F -->|Yes| G[Phase 2: Audio Analysis<br/>Play Clips Only]
    F -->|No| H[Skip Audio]
    G --> I[Whisper Transcription]
    G --> J[NFL Sports Corrections]
    C --> K[classification.json]
    D --> L[play_analysis.json]
    E --> L
    I --> M[transcripts.json<br/>Play Audio Only]
    J --> M
    H --> N[No Transcript]
    style B fill:#e1f5fe
    style G fill:#f3e5f5
    style F fill:#fff3e0
    style K fill:#c8e6c9
    style L fill:#c8e6c9
    style M fill:#c8e6c9
```
## Repository Structure

```
├── config.py                       # Central configuration, directories, and constants
├── video.py                        # Video classification and NFL play analysis
├── audio.py                        # Audio transcription with NFL enhancements
├── yolo_processor.py               # YOLO object detection preprocessing
├── inference.py                    # Backward compatibility interface
├── run_all_clips.py                # Main processing pipeline orchestrator
├── speed_test.py                   # Performance benchmarking tools
├── data/                           # Put your 2s video clips here (.mov or .mp4)
├── segments/                       # Output directory for ContinuousScreenSplitter.swift
├── ContinuousScreenSplitter.swift  # Swift tool to capture screen and split into segments
├── kinetics_classnames.json        # Kinetics-400 label map (auto-downloaded on first run)
├── requirements.txt                # Python dependencies
├── classification.json             # Output: video classification results
├── transcripts.json                # Output: audio transcription results
├── play_analysis.json              # Output: NFL play analysis and boundaries
└── ARCHITECTURE.md                 # Detailed architecture documentation
```
## Prerequisites
- Python 3.8+
- ffmpeg installed (for clip generation)
- git
- Miniconda or Anaconda (recommended for macOS compatibility)
- Xcode/Swift (for screen capture tool)
- BlackHole audio driver (recommended for system audio capture)
## Setup

### 1. Clone the repo

```shell
git clone https://huggingface.co/datasets/rocket-wave/hf-video-scoring.git
cd hf-video-scoring
```

### 2. Create & activate environment

Using Conda (recommended on macOS Intel/Apple Silicon):

```shell
conda create -n nfl-play python=3.11 -y
conda activate nfl-play
conda install -y -c pytorch -c conda-forge pytorch torchvision torchaudio cpuonly
pip install pytorchvideo huggingface_hub ffmpeg-python transformers openai-whisper ultralytics
```

Using pip in a virtualenv:

```shell
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
```
## Usage

### 1. Generate clips with screen capture (optional)

If you want to capture live screen content and automatically split it into 2-second segments:

```shell
swift ContinuousScreenSplitter.swift
```

This will:

- Capture your screen in real time (1280x720, 30fps)
- Include system audio (requires BlackHole or a similar audio driver)
- Automatically split the capture into 2-second segments: `segment_000.mov`, `segment_001.mov`, etc.
- Save files to the `segments/` directory
- Run continuously until stopped with Ctrl+C
### 2. Place your clips
Copy your 2-second video segments into the data/ directory (or use segments/ from screen capture). Supported formats: .mov, .mp4.
### 3. Score a single clip

```shell
python inference.py data/segment_000.mov
```

This will:

- Run X3D video classification with NFL play state analysis
- Generate high-quality audio transcription using Whisper-Medium with NFL enhancements
- Print results to the console and save them to `classification.json` and `transcripts.json`

Modular architecture: the system uses a clean modular design:

- `video.py`: Video classification and play analysis
- `audio.py`: Audio transcription with NFL corrections
- `config.py`: Centralized configuration management
- `inference.py`: Backward compatibility interface
### 4. Process clips with the optimized pipeline

The pipeline is optimized for continuous processing with separate video and audio phases.

Default: YOLO + video analysis (auto-enabled for segments):

```shell
python run_all_clips.py --video-only
```

Audio-only processing (add transcripts later):

```shell
python run_all_clips.py --audio-only
```

Full pipeline (YOLO + video + audio):

```shell
python run_all_clips.py
```

Testing with limited clips:

```shell
python run_all_clips.py --video-only --max-clips 5
```

Skip YOLO for faster processing:

```shell
python run_all_clips.py --no-yolo --video-only
```

The system:

- Processes video classification and play analysis first (Phase 1)
- Then processes audio transcription in batch (Phase 2, if enabled)
- Saves incremental results as processing continues
- Handles errors gracefully and provides detailed progress reporting
- Saves results to:
  - `classification.json` (video classification scores)
  - `transcripts.json` (professional-quality NFL commentary transcriptions)
  - `play_analysis.json` (NFL play state analysis and boundary detection)
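The two-phase flow above can be sketched as a small orchestration loop. This is illustrative only: the function names `classify_clip` and `transcribe_clip` and the save behavior are assumptions, not the actual internals of `run_all_clips.py`.

```python
import json

def save_json(path, payload):
    """Write a checkpoint so results survive an interrupted run."""
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)

def run_pipeline(clips, classify_clip, transcribe_clip, save_interval=10):
    """Phase 1: video analysis for every clip; Phase 2: audio in batch."""
    classification, transcripts = {}, {}

    # Phase 1: video classification first, with incremental saves
    for i, clip in enumerate(clips, 1):
        try:
            classification[clip] = classify_clip(clip)
        except Exception:
            classification[clip] = []      # log the bad clip and keep going
        if i % save_interval == 0:
            save_json("classification.json", classification)
    save_json("classification.json", classification)

    # Phase 2: audio transcription in batch (skipped under --video-only)
    for clip in clips:
        transcripts[clip] = transcribe_clip(clip)
    save_json("transcripts.json", transcripts)
    return classification, transcripts
```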
## Complete Workflow Examples

### For Real-time/Continuous Processing

```shell
# 1. Capture live screen content
swift ContinuousScreenSplitter.swift

# 2. Process with automatic YOLO object detection (default behavior)
source .venv/bin/activate
python run_all_clips.py --video-only

# 3. View immediate NFL play analysis
cat play_analysis.json | jq '.summary'

# 4. Add transcripts later when time allows
python run_all_clips.py --audio-only

# 5. View complete analysis
cat transcripts.json | jq '.[] | select(. != "")'
```

### For Complete Analysis

```shell
# 1. Full pipeline processing
python run_all_clips.py

# 2. View comprehensive results
cat play_analysis.json | jq '.summary'
cat transcripts.json | jq '.[] | select(. != "")'
```

### For Development/Testing

```shell
# Test with just a few clips
python run_all_clips.py --video-only --max-clips 3

# Speed testing
python speed_test.py
```
This workflow captures live NFL content, automatically segments it, and then analyzes each segment for both visual actions and audio content.
## Output

- Console logs with classification results, transcripts, and `[INFO]`/`[ERROR]` messages.
- `classification.json`:

```
{
  "segment_001.mov": [ ["bobsledding", 0.003], ["archery", 0.003], ... ],
  "segment_002.mov": [],   // empty if the clip failed or was skipped
  "segment_003.mov": [ ... ]
}
```

- `transcripts.json`:

```json
{
  "segment_001.mov": "I can tell you that Lamar Jackson right now is",
  "segment_002.mov": "is the sixth best quarterback in the NFL",
  "segment_003.mov": "He's basically like an extra line"
}
```
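These JSON outputs are easy to post-process. As a sketch (assuming only the file layouts shown above), a small script that pairs each segment's top label with its transcript:

```python
import json

def summarize(classification_path="classification.json",
              transcripts_path="transcripts.json"):
    """One summary line per segment: top Kinetics label plus transcript."""
    with open(classification_path) as f:
        classification = json.load(f)
    with open(transcripts_path) as f:
        transcripts = json.load(f)

    lines = []
    for clip, scores in sorted(classification.items()):
        top = scores[0][0] if scores else "skipped"  # empty list = failed clip
        lines.append(f"{clip}: {top} | {transcripts.get(clip, '')}")
    return lines
```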
## YOLO Object Detection Integration

The system includes YOLO11 integration for enhanced video analysis with object detection:

### YOLO Enhancement Features

- **Player Detection**: Identifies football players, referees, and coaches
- **Ball Tracking**: Detects footballs and other sports equipment
- **Spatial Analysis**: Provides bounding boxes for key game elements
- **Visual Annotations**: Adds detection overlays while preserving the original audio
- **Selective Processing**: Only applied to video classification; audio uses the original clips

### YOLO Integration Workflow

- **Phase 0**: Raw clips in `/segments/` → YOLO processing → `/segments/yolo/`
- **Phase 1**: Video classification uses the YOLO-annotated clips
- **Phase 2**: Audio transcription uses the original clips (preserves quality)

### Performance Considerations

- **YOLO Model**: Nano size for a speed vs. accuracy balance
- **Parallel Processing**: Video and audio pipelines remain independent
- **Auto-Enabled**: Automatically enabled for the `segments` directory
- **Control Flags**: Use `--no-yolo` to disable or `--use-yolo` to force enable
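The auto-enable and Phase 0 path-routing behavior can be sketched as small helpers. These are assumptions about how the flags combine, not the actual logic in `run_all_clips.py` or `yolo_processor.py`:

```python
from pathlib import Path

def should_run_yolo(input_dir, use_yolo=False, no_yolo=False):
    """Decide whether Phase 0 (YOLO preprocessing) runs for this input directory."""
    if no_yolo:       # --no-yolo always wins
        return False
    if use_yolo:      # --use-yolo forces YOLO for any directory
        return True
    # Default: auto-enable only for the screen-capture segments directory
    return Path(input_dir).name == "segments"

def yolo_output_path(clip_path):
    """Map segments/segment_000.mov -> segments/yolo/segment_000.mov."""
    p = Path(clip_path)
    return p.parent / "yolo" / p.name
```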
## Audio Transcription Features

The system uses Whisper-Medium with NFL-specific enhancements for superior audio transcription:

### NFL-Optimized Transcription

- **Advanced Model**: OpenAI Whisper-Medium (769M parameters) for professional-quality transcription
- **Sports Vocabulary**: 80+ NFL-specific terms covering teams, positions, plays, and penalties
- **Smart Corrections**: Automatic correction of commonly misheard football terminology
- **Player Recognition**: Accurately transcribes player names and team references
- **Commentary Context**: Optimized for NFL broadcast commentary and analysis

### Audio Enhancement Pipeline

- **Noise Filtering**: High-pass (80 Hz) and low-pass (8 kHz) filters to remove audio artifacts
- **Audio Normalization**: Automatic level adjustment for consistent processing
- **Silence Detection**: Skips transcription for very quiet or short audio segments
- **Error Handling**: Graceful fallback for corrupted or problematic audio
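The filtering step can be illustrated by building the equivalent ffmpeg command line. This is a sketch: the real `load_audio()` may use the `ffmpeg-python` bindings and different parameters, and the output path here is hypothetical.

```python
def build_audio_cmd(src, dst="clean.wav", highpass_hz=80, lowpass_hz=8000):
    """ffmpeg command: band-limit the audio and downmix to 16 kHz mono for Whisper."""
    # High-pass removes low rumble; low-pass removes hiss above speech range
    af = f"highpass=f={highpass_hz},lowpass=f={lowpass_hz}"
    return [
        "ffmpeg", "-y", "-i", src,
        "-af", af,
        "-ar", "16000",   # Whisper expects 16 kHz input
        "-ac", "1",       # mono
        dst,
    ]
```

Running the returned list through `subprocess.run` would produce a cleaned WAV ready for transcription.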
### Quality Examples

Before: "TA N ONE THE SECOND"
After: "I can tell you that Lamar Jackson right now is"

Before: "FAY FOOLISH"
After: "is the sixth best quarterback in the NFL"
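A minimal sketch of the correction pass that produces results like the ones above. The table below is a hypothetical subset; the real `NFL_SPORTS_CONTEXT` list and `apply_sports_corrections()` in `audio.py` are more extensive:

```python
import re

# Hypothetical subset of the NFL correction table (the real list has 80+ entries)
CORRECTIONS = {
    "lamar jacksons": "Lamar Jackson's",
    "quarter back": "quarterback",
    "in the n f l": "in the NFL",
}

def apply_corrections(text, corrections=CORRECTIONS):
    """Case-insensitive whole-phrase replacement of common mishears."""
    for wrong, right in corrections.items():
        text = re.sub(re.escape(wrong), right, text, flags=re.IGNORECASE)
    return text
```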
## Video Classification Customization

- **Model choice**: In `inference.py`, set `model_name` to one of `x3d_xs`, `x3d_s`, `x3d_m`, `x3d_l` (currently `x3d_m`)
- **Clip length**: The default is 2 seconds; adjust `video.get_clip(0, 2.0)`
- **Sampling & crop**: Modify `preprocess()` in `inference.py` to change `num_samples` or the spatial size
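The `num_samples` setting can be made concrete with the index math behind uniform temporal subsampling. This is a sketch of the idea, not PyTorchVideo's exact implementation:

```python
def uniform_sample_indices(num_frames, num_samples=13):
    """Evenly spaced frame indices over [0, num_frames - 1].

    X3D's temporal kernel requires num_samples >= 13, so a 2-second clip
    at 30 fps (60 frames) is reduced to 13 evenly spaced frames.
    """
    if num_samples == 1:
        return [0]
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]
```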
## Audio Transcription Customization

- **Model choice**: In `inference.py`, change `model` from `whisper-medium` to `whisper-base` (faster, less accurate; not recommended for complex broadcasts) or `whisper-large` (slower, highest accuracy)
- **Sports vocabulary**: Modify the `NFL_SPORTS_CONTEXT` list to add custom terms
- **Audio filters**: Adjust the FFmpeg filters in `load_audio()` for different audio quality
- **Corrections**: Add custom corrections in `apply_sports_corrections()`
## Pipeline Customization

- **Processing phases**: Use `--video-only` for speed-critical applications
- **Batch sizes**: Modify `VIDEO_SAVE_INTERVAL` and `AUDIO_SAVE_INTERVAL` in `config.py`
- **Testing**: Use `--max-clips N` to limit processing during development
- **File output**: Customize output file names with `--classification-file`, `--transcript-file`, and `--play-analysis-file`
## Modular Architecture Benefits

- **Centralized Configuration**: All directories, paths, and settings live in `config.py`
- **Flexible Directory Structure**: Configurable input/output/cache directories
- **Focused Development**: Separate modules for video, audio, and configuration
- **Better Testing**: Individual modules can be tested in isolation
- **Performance Tuning**: Optimize video and audio processing independently
- **Scalability**: Add new models or sports without affecting existing code
- **Backward Compatibility**: Existing scripts continue to work unchanged
## Configurable Directory Structure

All directories are configurable through `config.py`:

```python
# Input directories
DEFAULT_DATA_DIR = "data"                   # Default video clips
DEFAULT_SEGMENTS_DIR = "segments"           # Screen capture segments
DEFAULT_YOLO_OUTPUT_DIR = "segments/yolo"   # YOLO-processed clips

# Cache directories (None = use system defaults)
TORCH_HUB_CACHE_DIR = None      # PyTorch model cache
HUGGINGFACE_CACHE_DIR = None    # HuggingFace model cache
DEFAULT_TEMP_DIR = None         # Temporary processing
```
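One way the `None` defaults can be resolved at startup is to fall back to an environment variable and then a system default. This is a sketch of the pattern, not necessarily what `config.py` actually does:

```python
import os

def resolve_cache_dir(configured, env_var, fallback):
    """Pick the configured path, else the environment override, else a default."""
    if configured:
        return configured
    return os.environ.get(env_var) or os.path.expanduser(fallback)

# Example for the PyTorch hub cache (TORCH_HOME is the standard override):
# torch_cache = resolve_cache_dir(TORCH_HUB_CACHE_DIR, "TORCH_HOME", "~/.cache/torch")
```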
## Troubleshooting

### Video Issues

- **InvalidDataError: moov atom not found**: skip the damaged clip; `run_all_clips.py` logs the error and continues.
- **Dimension or kernel size errors**: ensure `num_samples` is at least the model's temporal kernel size (≥13 for X3D).
- **PyTorch installation**: use Conda on macOS for the best compatibility.

### Audio Transcription Issues

- **Slow transcription**: Whisper-Medium is a large model (~1.5 GB download) but provides strong accuracy for NFL broadcasts. Use `--video-only` mode for speed-critical applications.
- **Poor audio quality**: Ensure clean audio input. The system filters noise, but very poor audio may still fail.
- **Memory issues**: Whisper-Medium requires ~4 GB of RAM. For lower-memory systems, edit `inference.py` to use `whisper-base`.
- **Language detection**: The system forces English. For other languages, modify the `language` parameter.
- **Processing interrupted**: Use `--audio-only` to resume transcription after video processing is complete.
## Performance
Pipeline Processing Modes:
| Mode | Speed | Use Case |
|---|---|---|
| Video-only | ~2.3s/clip | Real-time play detection |
| Audio-only | ~13s/clip | Adding transcripts to existing analysis |
| Full pipeline | ~16s/clip | Complete analysis |
Processing Time Estimates:
- 10 clips: 23s (video-only) / 3 minutes (full)
- 100 clips: 4 minutes (video-only) / 27 minutes (full)
- 1 hour of footage: 1 hour (video-only) / 8 hours (full)
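The estimates above follow directly from the per-clip rates. A tiny helper makes the arithmetic explicit (rates taken from the table; actual speed depends on hardware):

```python
# Seconds per clip, from the benchmark table above
RATES = {"video-only": 2.3, "audio-only": 13.0, "full": 16.0}

def estimate_seconds(num_clips, mode="video-only", rates=RATES):
    """Rough wall-clock estimate for processing a batch of clips."""
    return num_clips * rates[mode]

def clips_per_hour_of_footage(clip_seconds=2.0):
    """One hour of footage split into 2-second segments yields 1800 clips."""
    return int(3600 / clip_seconds)
```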
## Command Reference

```shell
# Basic usage (auto-enables YOLO for the segments directory)
python run_all_clips.py                                # Full pipeline with YOLO
python run_all_clips.py --video-only                   # YOLO + video analysis only
python run_all_clips.py --audio-only                   # Add transcripts later

# YOLO control
python run_all_clips.py --no-yolo --video-only         # Skip YOLO for speed
python run_all_clips.py --use-yolo --input-dir data    # Force YOLO for other directories

# Testing and development
python run_all_clips.py --max-clips 5                  # Limit to 5 clips
python run_all_clips.py --video-only --max-clips 3     # Fast test with 3 clips
python speed_test.py                                   # Performance benchmarking

# Custom files and directories
python run_all_clips.py --input-dir data --no-yolo             # Process the data directory without YOLO
python run_all_clips.py --classification-file my_results.json  # Custom output file

# Single clip analysis
python inference.py data/segment_001.mov               # Analyze one clip
```
## Next Steps

- Fine-tune X3D on your own "start" vs. "end" NFL play dataset.
- Real-time integration: use `--video-only` mode for live processing and batch audio transcription offline.
- Expand to other sports by modifying the sports vocabulary and play state logic.
- GPU acceleration: add CUDA support for 3-5x faster processing.
- Parallel processing: process multiple clips simultaneously for large datasets.
Adapted for NFL video segment scoring by rocket-wave.