yourmt3 integration and refactor
- .gitignore +13 -0
- README.md +170 -63
- backend/.dockerignore +49 -0
- backend/.env.example +5 -1
- backend/Dockerfile +5 -0
- backend/Dockerfile.worker +5 -0
- backend/{config.py → app_config.py} +9 -4
- backend/{utils.py → app_utils.py} +0 -0
- backend/celery_app.py +1 -1
- backend/main.py +129 -3
- backend/pipeline.py +311 -123
- backend/requirements.txt +10 -3
- backend/scripts/diagnose_pipeline.py +1 -1
- backend/scripts/test_accuracy.py +1 -1
- backend/scripts/test_e2e.py +1 -1
- backend/scripts/test_mps_performance.sh +55 -0
- backend/scripts/test_quick_verify.py +1 -1
- backend/tasks.py +6 -4
- backend/tests/test_pipeline_fixes.py +1 -1
- backend/tests/test_pipeline_monophonic.py +1 -1
- backend/tests/test_utils.py +1 -1
- backend/tests/test_yourmt3_integration.py +296 -0
- backend/ymt/amt +1 -0
- backend/ymt/yourmt3_core +1 -0
- backend/yourmt3_wrapper.py +211 -0
- docker-compose.yml +1 -0
- docs/architecture/tech-stack.md +20 -11
- docs/backend/pipeline.md +114 -62
- docs/getting-started.md +6 -5
- docs/research/ml-models.md +87 -66
- docs/testing/backend-testing.md +0 -520
- docs/testing/baseline-accuracy.md +0 -178
- docs/testing/failure-modes.md +0 -216
- docs/testing/frontend-testing.md +0 -653
- docs/testing/overview.md +0 -315
- docs/testing/test-videos.md +0 -371
- frontend/Dockerfile +1 -1
- frontend/package.json +1 -1
- frontend/src/components/JobSubmission.css +25 -4
- frontend/src/components/JobSubmission.tsx +112 -8
- start.sh +139 -0
- stop.sh +18 -0
.gitignore
CHANGED

@@ -224,6 +224,11 @@ backend/*.musicxml
 backend/*.mid
 backend/*.wav
 
+# YourMT3+ temporary files
+backend/ymt/model_output/
+backend/ymt/*.mid
+backend/ymt/*.log
+
 # Frontend
 frontend/node_modules/
 frontend/dist/

@@ -242,6 +247,14 @@ storage/temp/*
 # Temp files
 /tmp/
 *.tmp
+*.temp
+
+# Logs
+logs/
+*.log
+
+# macOS
+.DS_Store
 
 # Docker volumes
 docker-compose.override.yml
README.md
CHANGED

@@ -13,20 +13,20 @@ Rescored transcribes YouTube videos to professional-quality music notation:
 **Tech Stack**:
 - **Backend**: Python/FastAPI + Celery + Redis
 - **Frontend**: React + VexFlow (notation) + Tone.js (playback)
-- **ML**: Demucs (source separation) + basic-pitch (
+- **ML**: Demucs (source separation) + YourMT3+ (transcription, 80-85% accuracy) + basic-pitch (fallback)
 
 ## Quick Start
 
 ### Prerequisites
 
+- **macOS** (Apple Silicon recommended for MPS GPU acceleration) OR **Linux** (with NVIDIA GPU)
+- **Python 3.10** (required for madmom compatibility)
+- **Node.js 18+**
+- **Redis 7+**
+- **FFmpeg**
+- **Homebrew** (macOS only, for Redis installation)
 
+### Installation
 
 ```bash
 # Clone repository

@@ -34,7 +34,49 @@ git clone https://github.com/yourusername/rescored.git
 cd rescored
 ```
 
+### Setup Redis (macOS)
+
+```bash
+# Install Redis via Homebrew
+brew install redis
+
+# Start Redis service
+brew services start redis
+
+# Verify Redis is running
+redis-cli ping  # Should return PONG
+```
+
+### Setup Backend (Python 3.10 + MPS GPU Acceleration)
+
+```bash
+cd backend
+
+# Activate Python 3.10 virtual environment (already configured)
+source .venv/bin/activate
+
+# Verify Python version
+python --version  # Should show Python 3.10.x
+
+# Backend dependencies are already installed in .venv
+# If you need to reinstall:
+# pip install -r requirements.txt
+
+# Copy environment file and configure
+cp .env.example .env
+# Edit .env - ensure YOURMT3_DEVICE=mps for Apple Silicon GPU acceleration
+```
+
+### Setup Frontend
+
+```bash
+cd frontend
+
+# Install dependencies
+npm install
+```
+
+### ⚠️ REQUIRED: YouTube Cookies Setup
 
 YouTube requires authentication for video downloads (as of December 2024). You **MUST** export your YouTube cookies before the application will work.

@@ -53,29 +95,65 @@ YouTube requires authentication for video downloads (as of December 2024). You *
 3. **Place Cookie File**
    ```bash
-   # Create storage directory
+   # Create storage directory if it doesn't exist
    mkdir -p storage
 
    # Move the exported file (adjust path if needed)
    mv ~/Downloads/youtube.com_cookies.txt ./storage/youtube_cookies.txt
-
-   # OR on Windows:
-   # move %USERPROFILE%\Downloads\youtube.com_cookies.txt storage\youtube_cookies.txt
    ```
 
 4. **Start Services**
+
+   **Option A: Single Command (Recommended)**
+   ```bash
+   ./start.sh
+   ```
+   This starts all services in the background. Logs are written to the `logs/` directory.
+
+   To stop all services:
+   ```bash
+   ./stop.sh
+   # Or press Ctrl+C in the terminal running start.sh
+   ```
+
+   To view logs while running:
+   ```bash
+   tail -f logs/api.log       # Backend API logs
+   tail -f logs/worker.log    # Celery worker logs
+   tail -f logs/frontend.log  # Frontend logs
+   ```
+
+   **Option B: Manual (3 separate terminals)**
+
+   **Terminal 1 - Backend API:**
+   ```bash
+   cd backend
+   source .venv/bin/activate
+   uvicorn main:app --host 0.0.0.0 --port 8000 --reload
+   ```
+
+   **Terminal 2 - Celery Worker:**
+   ```bash
+   cd backend
+   source .venv/bin/activate
+   # Use --pool=solo on macOS to avoid fork() crashes with ML libraries
+   celery -A tasks worker --loglevel=info --pool=solo
+   ```
+
+   **Terminal 3 - Frontend:**
    ```bash
+   cd frontend
+   npm run dev
-
-   # Services will be available at:
-   # - Frontend: http://localhost:5173
-   # - Backend API: http://localhost:8000
-   # - API Docs: http://localhost:8000/docs
    ```
 
+**Services will be available at:**
+- Frontend: http://localhost:5173
+- Backend API: http://localhost:8000
+- API Docs: http://localhost:8000/docs
+
 **Verification:**
 ```bash
+ls -lh storage/youtube_cookies.txt
 ```
 You should see the file listed.

@@ -91,50 +169,52 @@ You should see the file listed.
 **Why Is This Required?** YouTube implemented bot detection in late 2024 that blocks unauthenticated downloads. Even though our tool is for legitimate transcription purposes, YouTube's systems can't distinguish it from scrapers. By providing your cookies, you're proving you're a real user who has agreed to YouTube's terms of service.
 
-**Backend**:
-```bash
-cd backend
-# Create virtual environment
-python3 -m venv venv
-source venv/bin/activate  # On Windows: venv\Scripts\activate
-# Install dependencies
-pip install -r requirements.txt
-cp .env.example .env
-redis-server
-npm run dev
-```
+### YourMT3+ Setup
+
+The backend uses **YourMT3+** as the primary transcription model (80-85% accuracy) with automatic fallback to basic-pitch (70% accuracy) if YourMT3+ is unavailable.
+
+**YourMT3+ model files and source code are already included in the repository.** The model checkpoint (~536MB) is stored via Git LFS in `backend/ymt/yourmt3_core/`.
+
+**Verify YourMT3+ is working:**
+```bash
+# Start backend (if not already running)
+cd backend
+source .venv/bin/activate
+uvicorn main:app --host 0.0.0.0 --port 8000 --reload
+
+# In another terminal, test YourMT3+ loading
+cd backend
+source .venv/bin/activate
+python -c "from yourmt3_wrapper import YourMT3Transcriber; t = YourMT3Transcriber(device='mps'); print('✓ YourMT3+ loaded successfully!')"
+```
+
+You should see:
+- `Model loaded successfully on mps`
+- `GPU available: True (mps), used: True`
+- `✓ YourMT3+ loaded successfully!`
+
+**GPU Acceleration:**
+- **Apple Silicon (M1/M2/M3/M4):** Uses MPS (Metal Performance Shaders) with 16-bit mixed precision for optimal performance. Default is `YOURMT3_DEVICE=mps` in `.env`.
+- **NVIDIA GPU:** Change `YOURMT3_DEVICE=cuda` in `.env`
+- **CPU Only:** Change `YOURMT3_DEVICE=cpu` in `.env` (will be much slower)
+
+**Important:** The symlink at `backend/ymt/yourmt3_core/amt/src/amt/logs` must point to `../../logs` for checkpoint loading to work. This is already configured in the repository.
 
 ## Usage
 
+1. **Ensure all services are running:**
+   - Redis: `brew services list | grep redis` (should show "started")
+   - Backend API: Terminal 1 should show "Uvicorn running on http://0.0.0.0:8000"
+   - Celery Worker: Terminal 2 should show "celery@hostname ready"
+   - Frontend: Terminal 3 should show "Local: http://localhost:5173"
+
+2. Open [http://localhost:5173](http://localhost:5173)
+3. Paste a YouTube URL (piano music recommended for best results)
+4. Wait for transcription:
+   - **With MPS/GPU**: ~1-2 minutes
+   - **With CPU**: ~10-15 minutes
+5. Edit the notation in the interactive editor
+6. Export as MusicXML or MIDI
 
 ## MVP Features

@@ -186,7 +266,13 @@ Comprehensive documentation is available in the [`docs/`](docs/) directory:
 
 ## Performance
 
-**With
+**With Apple Silicon MPS (M1/M2/M3/M4)**:
+- Download: ~10 seconds
+- Source separation (Demucs): ~30-60 seconds
+- Transcription (YourMT3+): ~20-30 seconds
+- **Total: ~1-2 minutes**
+
+**With NVIDIA GPU (RTX 3080)**:
 - Download: ~10 seconds
 - Source separation: ~45 seconds
 - Transcription: ~5 seconds

@@ -200,7 +286,15 @@
 
 ## Accuracy Expectations
 
+**With YourMT3+ (recommended):**
+- Simple piano: **80-85% accurate**
+- Complex pieces: **70-75% accurate**
+
+**With basic-pitch (fallback):**
+- Simple piano: **70-75% accurate**
+- Complex pieces: **60-70% accurate**
+
+The interactive editor is designed to make fixing errors easy regardless of which transcription model is used.
 
 ## Development

@@ -226,16 +320,28 @@ Once the backend is running, visit:
 
 **Worker not processing jobs?**
 - Check Redis is running: `redis-cli ping` (should return PONG)
+- If Redis isn't running: `brew services start redis`
+- Check worker logs in Terminal 2
 
-**GPU not
+**MPS/GPU not being used?**
+- Verify MPS is available: `python -c "import torch; print(torch.backends.mps.is_available())"`
+- Check `.env` has `YOURMT3_DEVICE=mps`
+- For NVIDIA GPU: Set `YOURMT3_DEVICE=cuda`
+
+**YourMT3+ fails to load?**
+- Ensure Python 3.10 is being used: `python --version`
+- Check symlink exists: `ls -la backend/ymt/yourmt3_core/amt/src/amt/logs`
+- Verify checkpoint file exists: `ls -lh backend/ymt/yourmt3_core/logs/2024/*/checkpoints/last.ckpt`
 
 **YouTube download fails?**
+- Ensure `storage/youtube_cookies.txt` exists and is recent
+- Export fresh cookies from a NEW incognito window
 - Video may be age-restricted or private
+- Update yt-dlp: `source .venv/bin/activate && pip install -U yt-dlp`
+
+**Module import errors?**
+- Make sure you're in the virtual environment: `source backend/.venv/bin/activate`
+- Reinstall requirements: `pip install -r requirements.txt`

@@ -247,8 +353,9 @@ MIT License - see [LICENSE](LICENSE) for details.
 
 ## Acknowledgments
 
+- **YourMT3+** (KAIST) - State-of-the-art music transcription ([Paper](https://arxiv.org/abs/2407.04822))
 - **Demucs** (Meta AI Research) - Source separation
-- **basic-pitch** (Spotify) -
+- **basic-pitch** (Spotify) - Fallback audio transcription
 - **VexFlow** - Music notation rendering
 - **Tone.js** - Web audio synthesis
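The README's GPU Acceleration notes above (mps for Apple Silicon, cuda for NVIDIA, cpu otherwise) imply a device-resolution step before the model is loaded. A minimal sketch of that fallback logic, assuming PyTorch; `resolve_device` is a hypothetical helper for illustration, not a function from the repository:

```python
def resolve_device(preferred: str) -> str:
    """Resolve a configured YOURMT3_DEVICE value to a usable device string.

    Mirrors the mps / cuda / cpu options described in the README:
    fall back to CPU when the preferred accelerator is unavailable.
    """
    try:
        import torch  # optional dependency: no torch means CPU-only
    except ImportError:
        return "cpu"

    if preferred == "mps" and torch.backends.mps.is_available():
        return "mps"
    if preferred == "cuda" and torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

On a machine without an accelerator, any preference degrades safely to `"cpu"`, which matches the README's "CPU Only (will be much slower)" path.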
backend/.dockerignore
ADDED

@@ -0,0 +1,49 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+*.egg-info/
+dist/
+build/
+*.egg
+
+# Virtual environments
+.venv/
+venv/
+ENV/
+env/
+
+# IDE
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+
+# Testing
+.pytest_cache/
+.coverage
+htmlcov/
+*.log
+
+# Storage and temp files
+storage/
+*.wav
+*.mid
+*.musicxml
+*.tmp
+*.temp
+
+# YourMT3+ large files (4.5GB model repo - too large for Docker image)
+ymt/yourmt3_core/model_repo/
+ymt/yourmt3_core/.git/
+ymt/model_output/
+ymt/*.mid
+ymt/*.log
+
+# Environment files
+.env
+.env.local
+.env.production
backend/.env.example
CHANGED

@@ -2,7 +2,7 @@
 REDIS_URL=redis://localhost:6379/0
 
 # Storage Configuration
-STORAGE_PATH=/
+STORAGE_PATH=../storage
 
 # API Configuration
 API_HOST=0.0.0.0

@@ -14,3 +14,7 @@ MAX_VIDEO_DURATION=900  # 15 minutes in seconds
 
 # CORS Origins (comma-separated)
 CORS_ORIGINS=http://localhost:5173,http://localhost:3000
+
+# YourMT3+ Use
+USE_YOURMT3_TRANSCRIPTION=true
+YOURMT3_DEVICE=mps
backend/Dockerfile
CHANGED

@@ -4,6 +4,8 @@ FROM python:3.11-slim
 RUN apt-get update && apt-get install -y \
     ffmpeg \
     git \
+    gcc \
+    build-essential \
     && rm -rf /var/lib/apt/lists/*
 
 # Set working directory

@@ -12,6 +14,9 @@ WORKDIR /app
 # Copy requirements
 COPY requirements.txt .
 
+# Install build dependencies for madmom
+RUN pip install --no-cache-dir Cython 'numpy<2.0.0'
+
 # Install Python dependencies
 RUN pip install --no-cache-dir -r requirements.txt
backend/Dockerfile.worker
CHANGED

@@ -8,6 +8,8 @@ RUN apt-get update && apt-get install -y \
     python3-pip \
     ffmpeg \
     git \
+    gcc \
+    build-essential \
     && rm -rf /var/lib/apt/lists/*
 
 # Set working directory

@@ -16,6 +18,9 @@ WORKDIR /app
 # Copy requirements
 COPY requirements.txt .
 
+# Install build dependencies for madmom
+RUN pip3 install --no-cache-dir Cython 'numpy<2.0.0'
+
 # Install Python dependencies
 RUN pip3 install --no-cache-dir -r requirements.txt
backend/{config.py → app_config.py}
RENAMED

@@ -21,9 +21,9 @@ class Settings(BaseSettings):
     max_video_duration: int = 900  # 15 minutes
 
     # Transcription Configuration (basic-pitch)
-    onset_threshold: float = 0.
-    frame_threshold: float = 0.
-    minimum_note_length: int =
+    onset_threshold: float = 0.3  # Note onset confidence (0-1). Lower = more notes detected
+    frame_threshold: float = 0.3  # Frame activation threshold (0-1). Basic-pitch default
+    minimum_note_length: int = 58  # Minimum note samples (~58ms at 44.1kHz). Basic-pitch default
     minimum_frequency_hz: float = 65.0  # C2 (65 Hz) - filter low-frequency noise like F1
     maximum_frequency_hz: float | None = None  # No upper limit for piano range

@@ -61,7 +61,12 @@ class Settings(BaseSettings):
     # Python compatibility: madmom runtime patch enables Python 3.10+ support
     use_madmom_tempo_detection: bool = True  # Multi-scale tempo (eliminates octave errors)
     use_beat_synchronous_quantization: bool = True  # Beat-aligned quantization (eliminates double quantization)
+
+    # Transcription Service Configuration
+    use_yourmt3_transcription: bool = True  # YourMT3+ for 80-85% accuracy (default, falls back to basic-pitch)
+    transcription_service_url: str = "http://localhost:8000"  # Main API URL (YourMT3+ integrated)
+    transcription_service_timeout: int = 300  # Timeout for transcription requests (seconds)
+    yourmt3_device: str = "mps"  # Device for YourMT3+: 'mps' (Apple Silicon), 'cuda' (NVIDIA), or 'cpu'
 
     # Grand Staff Configuration
     enable_grand_staff: bool = True  # Split piano into treble + bass clefs
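The new fields above are populated from `.env` by pydantic's `BaseSettings`, which matches upper-cased environment variable names (as in `.env.example`) to the lower-cased field names. A rough, dependency-free sketch of that mapping for the YourMT3+ settings; the helper itself and the `TRANSCRIPTION_SERVICE_TIMEOUT` variable name are illustrative assumptions, not the project's actual loader:

```python
import os

def load_yourmt3_settings(environ=os.environ) -> dict:
    """Mimic how .env keys map onto the new Settings fields in app_config.py.

    Defaults match the diff above; env var names follow the
    upper-cased-field-name convention used in .env.example.
    """
    def as_bool(value: str) -> bool:
        return value.strip().lower() in ("1", "true", "yes", "on")

    return {
        "use_yourmt3_transcription": as_bool(
            environ.get("USE_YOURMT3_TRANSCRIPTION", "true")
        ),
        "yourmt3_device": environ.get("YOURMT3_DEVICE", "mps"),
        # Hypothetical env name, shown for the timeout field's default of 300s
        "transcription_service_timeout": int(
            environ.get("TRANSCRIPTION_SERVICE_TIMEOUT", "300")
        ),
    }
```

For example, setting `YOURMT3_DEVICE=cpu` in `.env` would override the `"mps"` default, exactly as the README's GPU section describes.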
backend/{utils.py → app_utils.py}
RENAMED

File without changes
backend/celery_app.py
CHANGED

@@ -1,7 +1,7 @@
 """Celery application configuration."""
 from celery import Celery
 from kombu import Exchange, Queue
-from
+from app_config import settings
 
 # Initialize Celery
 celery_app = Celery(
backend/main.py
CHANGED
|
@@ -1,5 +1,5 @@
|
|
| 1 |
"""FastAPI application for Rescored backend."""
|
| 2 |
-
from fastapi import FastAPI, HTTPException, WebSocket, WebSocketDisconnect, Request
|
| 3 |
from fastapi.middleware.cors import CORSMiddleware
|
| 4 |
from fastapi.responses import FileResponse
|
| 5 |
from pydantic import BaseModel, HttpUrl
|
|
@@ -11,10 +11,21 @@ from starlette.responses import JSONResponse
|
|
| 11 |
import redis
|
| 12 |
import json
|
| 13 |
import asyncio
|
| 14 |
-
|
| 15 |
-
|
|
|
|
|
|
|
|
|
|
| 16 |
from tasks import process_transcription_task
|
| 17 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 18 |
# Initialize FastAPI
|
| 19 |
app = FastAPI(
|
| 20 |
title="Rescored API",
|
|
@@ -25,6 +36,10 @@ app = FastAPI(
|
|
| 25 |
# Redis client (initialized before middleware)
|
| 26 |
redis_client = redis.Redis.from_url(settings.redis_url, decode_responses=True)
|
| 27 |
|
|
|
|
|
|
|
|
|
|
|
|
|
| 28 |
|
| 29 |
# === Rate Limiting Middleware ===
|
| 30 |
|
|
@@ -81,6 +96,38 @@ app.add_middleware(
|
|
| 81 |
app.add_middleware(RateLimitMiddleware)
|
| 82 |
|
| 83 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 84 |
# === Request/Response Models ===
|
| 85 |
|
| 86 |
class TranscribeRequest(BaseModel):
|
|
@@ -402,6 +449,85 @@ async def health_check():
|
|
| 402 |
}
|
| 403 |
|
| 404 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 405 |
if __name__ == "__main__":
|
| 406 |
import uvicorn
|
| 407 |
uvicorn.run(
|
|
|
|
| 1 |
"""FastAPI application for Rescored backend."""
|
| 2 |
+
from fastapi import FastAPI, HTTPException, WebSocket, WebSocketDisconnect, Request, File, UploadFile
|
| 3 |
from fastapi.middleware.cors import CORSMiddleware
|
| 4 |
from fastapi.responses import FileResponse
|
| 5 |
from pydantic import BaseModel, HttpUrl
|
|
|
|
| 11 |
import redis
|
| 12 |
import json
|
| 13 |
import asyncio
|
| 14 |
+
import tempfile
|
| 15 |
+
import shutil
|
| 16 |
+
from typing import Optional
|
| 17 |
+
from app_config import settings
|
| 18 |
+
from app_utils import validate_youtube_url, check_video_availability
|
| 19 |
from tasks import process_transcription_task
|
| 20 |
|
| 21 |
+
# YourMT3+ transcription service
|
| 22 |
+
try:
|
| 23 |
+
from yourmt3_wrapper import YourMT3Transcriber
|
| 24 |
+
YOURMT3_AVAILABLE = True
|
| 25 |
+
except ImportError as e:
|
| 26 |
+
YOURMT3_AVAILABLE = False
|
| 27 |
+
print(f"WARNING: YourMT3+ not available: {e}")
|
| 28 |
+
|
| 29 |
# Initialize FastAPI
|
| 30 |
app = FastAPI(
|
| 31 |
title="Rescored API",
|
|
|
|
| 36 |
# Redis client (initialized before middleware)
|
| 37 |
redis_client = redis.Redis.from_url(settings.redis_url, decode_responses=True)
|
| 38 |
|
| 39 |
+
# YourMT3+ transcriber (loaded on startup)
|
| 40 |
+
yourmt3_transcriber: Optional[YourMT3Transcriber] = None
|
| 41 |
+
YOURMT3_TEMP_DIR = Path(tempfile.gettempdir()) / "yourmt3_service"
|
| 42 |
+
|
| 43 |
|
| 44 |
# === Rate Limiting Middleware ===
|
| 45 |
|
|
|
|
| 96 |
app.add_middleware(RateLimitMiddleware)
|
| 97 |
|
| 98 |
|
| 99 |
+
# === Application Lifecycle Events ===
|
| 100 |
+
|
| 101 |
+
@app.on_event("startup")
|
| 102 |
+
async def startup_event():
|
| 103 |
+
"""Initialize YourMT3+ model on startup."""
|
| 104 |
+
global yourmt3_transcriber
|
| 105 |
+
|
| 106 |
+
if not YOURMT3_AVAILABLE or not settings.use_yourmt3_transcription:
|
| 107 |
+
print("YourMT3+ transcription disabled or unavailable")
|
| 108 |
+
return
|
| 109 |
+
|
| 110 |
+
try:
|
| 111 |
+
YOURMT3_TEMP_DIR.mkdir(parents=True, exist_ok=True)
|
| 112 |
+
print(f"Loading YourMT3+ model (device: {settings.yourmt3_device})...")
|
| 113 |
+
yourmt3_transcriber = YourMT3Transcriber(
|
| 114 |
+
model_name="YPTF.MoE+Multi (noPS)",
|
| 115 |
+
device=settings.yourmt3_device
|
| 116 |
+
)
|
| 117 |
+
print("β YourMT3+ model loaded successfully")
|
| 118 |
+
except Exception as e:
|
| 119 |
+
print(f"β Failed to load YourMT3+ model: {e}")
|
| 120 |
+
print(" Service will fall back to basic-pitch for transcription")
|
| 121 |
+
yourmt3_transcriber = None
|
| 122 |
+
|
| 123 |
+
|
| 124 |
+
@app.on_event("shutdown")
|
| 125 |
+
async def shutdown_event():
|
| 126 |
+
"""Clean up temporary files on shutdown."""
|
| 127 |
+
if YOURMT3_TEMP_DIR.exists():
|
| 128 |
+
shutil.rmtree(YOURMT3_TEMP_DIR, ignore_errors=True)
|
| 129 |
+
|
| 130 |
+
|
| 131 |
# === Request/Response Models ===
|
| 132 |
|
| 133 |
class TranscribeRequest(BaseModel):
|
|
|
|
| 449 |
}
|
| 450 |
|
| 451 |
|
| 452 |
+
# === YourMT3+ Transcription Endpoints ===
|
| 453 |
+
|
| 454 |
+
@app.get("/api/v1/yourmt3/health")
|
| 455 |
+
async def yourmt3_health():
|
| 456 |
+
"""
|
| 457 |
+
Check YourMT3+ transcription service health.
|
| 458 |
+
|
| 459 |
+
Returns model status, device, and availability.
|
| 460 |
+
"""
|
| 461 |
+
if not YOURMT3_AVAILABLE:
|
| 462 |
+
return {
|
| 463 |
+
"status": "unavailable",
|
| 464 |
+
"model_loaded": False,
|
| 465 |
+
"reason": "YourMT3+ dependencies not installed"
|
| 466 |
+
}
|
| 467 |
+
|
| 468 |
+
model_loaded = yourmt3_transcriber is not None
|
| 469 |
+
|
| 470 |
+
return {
|
| 471 |
+
"status": "healthy" if model_loaded else "degraded",
|
| 472 |
+
"model_loaded": model_loaded,
|
| 473 |
+
"model_name": "YPTF.MoE+Multi (noPS)" if model_loaded else "not loaded",
|
| 474 |
+
"device": yourmt3_transcriber.device if model_loaded else "unknown"
|
| 475 |
+
}
|
| 476 |
+
|
| 477 |
+
|
| 478 |
+
@app.post("/api/v1/yourmt3/transcribe")
|
| 479 |
+
async def yourmt3_transcribe(file: UploadFile = File(...)):
|
| 480 |
+
"""
|
| 481 |
+
Transcribe audio file to MIDI using YourMT3+.
|
| 482 |
+
|
| 483 |
+
This endpoint is used by the pipeline for direct transcription.
|
| 484 |
+
"""
|
| 485 |
+
if yourmt3_transcriber is None:
|
| 486 |
+
raise HTTPException(status_code=503, detail="YourMT3+ model not loaded")
|
| 487 |
+
|
| 488 |
+
# Save uploaded file
|
| 489 |
+
input_file = YOURMT3_TEMP_DIR / f"input_{uuid4().hex}_{file.filename}"
|
| 490 |
+
try:
|
| 491 |
+
with open(input_file, "wb") as f:
|
| 492 |
+
content = await file.read()
|
| 493 |
+
f.write(content)
|
| 494 |
+
|
| 495 |
+
# Transcribe
|
| 496 |
+
output_dir = YOURMT3_TEMP_DIR / f"output_{uuid4().hex}"
|
| 497 |
+
output_dir.mkdir(parents=True, exist_ok=True)
|
| 498 |
+
|
| 499 |
+
midi_path = yourmt3_transcriber.transcribe_audio(input_file, output_dir)
|
| 500 |
+
|
| 501 |
+
# Return MIDI file
|
| 502 |
+
return FileResponse(
|
| 503 |
+
path=str(midi_path),
|
| 504 |
+
media_type="audio/midi",
|
| 505 |
+
filename=midi_path.name
|
| 506 |
+
)
|
| 507 |
+
|
| 508 |
+
except Exception as e:
|
| 509 |
+
raise HTTPException(status_code=500, detail=f"Transcription failed: {str(e)}")
|
| 510 |
+
finally:
|
| 511 |
+
# Clean up input file
|
| 512 |
+
if input_file.exists():
|
| 513 |
+
input_file.unlink()
|
| 514 |
+
|
| 515 |
+
|
| 516 |
+
@app.get("/api/v1/yourmt3/models")
|
| 517 |
+
async def yourmt3_models():
|
| 518 |
+
"""List available YourMT3+ model variants."""
|
| 519 |
+
return {
|
| 520 |
+
"models": [
|
| 521 |
+
{
|
| 522 |
+
"name": "YPTF.MoE+Multi (noPS)",
|
| 523 |
+
"description": "Mixture of Experts multi-instrument transcription (default)",
|
| 524 |
+
"loaded": yourmt3_transcriber is not None
|
| 525 |
+
}
|
| 526 |
+
],
|
| 527 |
+
"default": "YPTF.MoE+Multi (noPS)"
|
| 528 |
+
}
|
| 529 |
+
|
| 530 |
+
|
| 531 |
if __name__ == "__main__":
|
| 532 |
import uvicorn
|
| 533 |
uvicorn.run(
|
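The health endpoint's response shape can be exercised without starting FastAPI. A minimal sketch, factoring the handler's payload logic into a pure function: `available` and `transcriber` are stand-ins for the module-level `YOURMT3_AVAILABLE` and `yourmt3_transcriber` state read by the real handler.

```python
def yourmt3_health_payload(available: bool, transcriber) -> dict:
    # Mirrors /api/v1/yourmt3/health: unavailable -> degraded -> healthy
    if not available:
        return {
            "status": "unavailable",
            "model_loaded": False,
            "reason": "YourMT3+ dependencies not installed",
        }
    model_loaded = transcriber is not None
    return {
        "status": "healthy" if model_loaded else "degraded",
        "model_loaded": model_loaded,
        "model_name": "YPTF.MoE+Multi (noPS)" if model_loaded else "not loaded",
        "device": transcriber.device if model_loaded else "unknown",
    }
```

This keeps the three-state distinction (dependencies missing vs. model not yet loaded vs. ready) testable in isolation.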
backend/pipeline.py
CHANGED

@@ -34,12 +34,6 @@ except ImportError as e:
     print(f"WARNING: madmom not available. Falling back to librosa for tempo/beat detection.")
     print(f" Error: {e}")

-try:
-    import omnizart
-    OMNIZART_AVAILABLE = True
-except ImportError:
-    OMNIZART_AVAILABLE = False
-    print("WARNING: omnizart not installed. Install with: pip install omnizart")


 class TranscriptionPipeline:
@@ -55,7 +49,7 @@ class TranscriptionPipeline:

         # Load configuration
         if config is None:
-            from config import settings
             self.config = settings
         else:
             self.config = config
@@ -87,7 +81,13 @@ class TranscriptionPipeline:

         midi_path = self.transcribe_to_midi(stems['other'])

         self.progress(90, "musicxml", "Generating MusicXML")
-        musicxml_path = self.generate_musicxml(midi_path)

         self.progress(100, "complete", "Transcription complete")
         return musicxml_path
@@ -184,101 +184,161 @@ class TranscriptionPipeline:

         minimum_note_length = self.config.minimum_note_length

         output_dir = self.temp_dir
-        midi_path = output_dir / "piano.mid"
-
-        print(f" Transcribing with basic-pitch (onset={onset_threshold}, frame={frame_threshold})...")
-
-        # Run basic-pitch inference
-        # predict_and_save creates output files in the output directory
-        predict_and_save(
-            audio_path_list=[str(audio_path)],
-            output_directory=str(output_dir),
-            save_midi=True,
-            sonify_midi=False,  # Don't create audio
-            save_model_outputs=False,  # Don't save raw outputs
-            save_notes=False,  # Don't save CSV
-            model_or_model_path=ICASSP_2022_MODEL_PATH,
-            onset_threshold=onset_threshold,
-            frame_threshold=frame_threshold,
-            minimum_note_length=minimum_note_length,
-            minimum_frequency=self.config.minimum_frequency_hz,  # Filter low-frequency noise (F1)
-            maximum_frequency=self.config.maximum_frequency_hz,  # No upper limit
-            multiple_pitch_bends=False,
-            melodia_trick=True,  # Improves monophonic melody
-            debug_file=None
-        )
-
-        # basic-pitch saves as {audio_stem}_basic_pitch.mid
-        generated_midi = output_dir / f"{audio_path.stem}_basic_pitch.mid"
-
-        if not generated_midi.exists():
-            raise RuntimeError("basic-pitch did not create MIDI file")
-
-        # Rename to expected path
-        generated_midi.rename(midi_path)
-
-        # Detect tempo from source audio for accurate post-processing
-        source_audio = self.temp_dir / "audio.wav"
-        if source_audio.exists():
-            detected_tempo, _ = self.detect_tempo_from_audio(source_audio)
-        else:
-            detected_tempo = 120.0  # Fallback
-
-        … (remaining removed post-processing lines elided in this view)
-
-        else:
-            final_midi = merged_midi


     def _get_midi_range(self, midi_path: Path) -> int:
         """
@@ -626,10 +686,11 @@ class TranscriptionPipeline:

         mid = mido.MidiFile(midi_path)

         # 3. Convert beat times (seconds) to MIDI ticks
         seconds_per_beat = 60.0 / tempo_bpm
         beat_ticks = []
         for beat_time in beats:
-            ticks = int(beat_time …)
             beat_ticks.append(ticks)

         # 4. Quantize note onsets to nearest beat (preserve durations)
@@ -640,16 +701,23 @@ class TranscriptionPipeline:

         for msg in track:
             abs_time += msg.time
             messages_with_abs_time.append((abs_time, msg))

         # Quantize note_on events to nearest beat
-        note_on_times = {}  # Track quantized onset times

         for i, (abs_time, msg) in enumerate(messages_with_abs_time):
             if msg.type == 'note_on' and msg.velocity > 0:
                 # Find nearest beat
                 nearest_beat = min(beat_ticks, key=lambda b: abs(b - abs_time))

                 # Update absolute time to nearest beat
                 messages_with_abs_time[i] = (nearest_beat, msg)
@@ -658,27 +726,44 @@ class TranscriptionPipeline:

             elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
                 # Preserve duration by keeping offset relative to quantized onset
-                … (removed onset-lookup lines elided in this view)

                 # Keep same duration from quantized onset
                 new_offset = onset_time + original_duration
                 messages_with_abs_time[i] = (new_offset, msg)

-                del note_on_times[…]

         # Rebuild track with new timings
         track.clear()
         previous_time = 0

         for abs_time, msg in sorted(messages_with_abs_time, key=lambda x: x[0]):
             msg.time = max(0, abs_time - previous_time)
             previous_time = abs_time
             track.append(msg)

         # 5. Save beat-quantized MIDI
         beat_sync_path = midi_path.with_stem(f"{midi_path.stem}_beat_sync")
         mid.save(beat_sync_path)
@@ -1145,6 +1230,108 @@ class TranscriptionPipeline:

         return output_path


     def _deduplicate_overlapping_notes(self, score) -> stream.Score:
         """
         Deduplicate overlapping notes from basic-pitch to prevent MusicXML corruption.
@@ -1829,20 +2016,22 @@ class TranscriptionPipeline:

         """
         print(f" Using madmom multi-scale tempo detection (eliminates octave errors)...")

-        # …
         tempo_processor = madmom.features.tempo.TempoEstimationProcessor(fps=100)

         # Get tempo candidates from multi-scale analysis
-        tempo_result = tempo_processor(…)

-        # tempo_result is array …
-        # Extract …
         tempos = []
         strengths = []
-        for …
-            …
-            strengths.append(float(tempo_result[i + 1]))

         if not tempos:
             print(f" WARNING: Madmom returned no tempo candidates, using default 120 BPM")
@@ -1939,21 +2128,20 @@ class TranscriptionPipeline:

             print(f" WARNING: madmom not available, falling back to librosa beat tracking")
             return self._detect_beats_librosa(audio_path)

-        print(f" Detecting beats …")

-        # …
-        beats = beat_processor(str(audio_path))

-        # …

-        # …
-        downbeats = …

-        print(f" Detected {len(beats)} beats, {len(downbeats)} downbeats")

         return beats, downbeats
         # Load configuration
         if config is None:
+            from app_config import settings
             self.config = settings
         else:
             self.config = config
         midi_path = self.transcribe_to_midi(stems['other'])

         self.progress(90, "musicxml", "Generating MusicXML")
+        # Use minimal generator for YourMT3+, full generator for basic-pitch
+        if self.config.use_yourmt3_transcription:
+            print(f" Using minimal MusicXML generation (YourMT3+)")
+            musicxml_path = self.generate_musicxml_minimal(midi_path, stems['other'])
+        else:
+            print(f" Using full MusicXML generation (basic-pitch)")
+            musicxml_path = self.generate_musicxml(midi_path)

         self.progress(100, "complete", "Transcription complete")
         return musicxml_path
         minimum_note_length = self.config.minimum_note_length

         output_dir = self.temp_dir

+        # === STEP 1: Try YourMT3+ first (primary transcriber) ===
+        use_yourmt3 = self.config.use_yourmt3_transcription
+        midi_path = None
+
+        if use_yourmt3:
+            try:
+                print(f" Transcribing with YourMT3+ (primary transcriber)...")
+                midi_path = self.transcribe_with_yourmt3(audio_path)
+                print(f" ✓ YourMT3+ transcription complete")
+            except Exception as e:
+                import traceback
+                print(f" ✗ YourMT3+ failed: {e}")
+                print(f" Full error: {traceback.format_exc()}")
+                print(f" → Falling back to basic-pitch")
+                midi_path = None
+
+        # === STEP 2: Fallback to basic-pitch if YourMT3+ failed or disabled ===
+        if midi_path is None:
+            print(f" Transcribing with basic-pitch (onset={onset_threshold}, frame={frame_threshold})...")
+
+            # Run basic-pitch inference
+            # predict_and_save creates output files in the output directory
+            predict_and_save(
+                audio_path_list=[str(audio_path)],
+                output_directory=str(output_dir),
+                save_midi=True,
+                sonify_midi=False,  # Don't create audio
+                save_model_outputs=False,  # Don't save raw outputs
+                save_notes=False,  # Don't save CSV
+                model_or_model_path=ICASSP_2022_MODEL_PATH,
+                onset_threshold=onset_threshold,
+                frame_threshold=frame_threshold,
+                minimum_note_length=minimum_note_length,
+                minimum_frequency=self.config.minimum_frequency_hz,  # Filter low-frequency noise (F1)
+                maximum_frequency=self.config.maximum_frequency_hz,  # No upper limit
+                multiple_pitch_bends=False,
+                melodia_trick=True,  # Improves monophonic melody
+                debug_file=None
             )
+
+            # basic-pitch saves as {audio_stem}_basic_pitch.mid
+            generated_bp_midi = output_dir / f"{audio_path.stem}_basic_pitch.mid"
+
+            if not generated_bp_midi.exists():
+                raise RuntimeError("basic-pitch did not create MIDI file")
+
+            midi_path = generated_bp_midi
+            print(f" ✓ basic-pitch transcription complete")
+
+        # Rename final MIDI to standard name for post-processing
+        final_midi_path = output_dir / "piano.mid"
+        if midi_path != final_midi_path:
+            midi_path.rename(final_midi_path)
+            midi_path = final_midi_path
+
+        # Conditional post-processing based on transcriber
+        if self.config.use_yourmt3_transcription:
+            # YourMT3+ produces clean MIDI - use as-is
+            print(f" Using YourMT3+ output directly (no post-processing)")
+            return midi_path
         else:
+            # basic-pitch needs full post-processing pipeline
+            print(f" Applying full post-processing for basic-pitch")
+
+            # Detect tempo from source audio for accurate post-processing
+            source_audio = self.temp_dir / "audio.wav"
+            if source_audio.exists():
+                detected_tempo, _ = self.detect_tempo_from_audio(source_audio)
+            else:
+                detected_tempo = 120.0
+
+            # 1. Polyphony detection
+            range_semitones = self._get_midi_range(midi_path)
+            if range_semitones > 24:
+                # Wide range (>2 octaves) = likely polyphonic piano music
+                print(f" Detected wide range ({range_semitones} semitones), preserving all notes")
+                mono_midi = midi_path
+            else:
+                # Narrow range (≤2 octaves) = likely monophonic melody
+                print(f" Narrow range ({range_semitones} semitones), removing octave duplicates")
+                mono_midi = self.extract_monophonic_melody(midi_path)
+
+            # 2. Clean (filter, quantize)
+            cleaned_midi = self.clean_midi(mono_midi, detected_tempo)
+
+            # 3. Beat-synchronous quantization
+            if self.config.use_beat_synchronous_quantization and source_audio.exists():
+                beat_synced_midi = self.beat_synchronous_quantize(cleaned_midi, source_audio, detected_tempo)
+            else:
+                beat_synced_midi = cleaned_midi
+
+            # 4. Merge consecutive notes
+            print(f" Merging consecutive notes (gap threshold: 150ms)...")
+            merged_midi = self.merge_consecutive_notes(beat_synced_midi, gap_threshold_ms=150, tempo_bpm=detected_tempo)
+
+            # 5. Envelope analysis
+            if self.config.enable_envelope_analysis:
+                print(f" Analyzing note envelopes for sustain artifacts...")
+                final_midi = self.analyze_note_envelope_and_merge_sustains(merged_midi, tempo_bpm=detected_tempo)
+            else:
+                final_midi = merged_midi
+
+            # 6. Validate (pattern detection)
+            self.detect_repeated_note_patterns(final_midi)
+
+            return final_midi
+
+    def transcribe_with_yourmt3(self, audio_path: Path) -> Path:
+        """
+        Transcribe audio to MIDI using YourMT3+ directly (in-process).
+
+        YourMT3+ is a state-of-the-art multi-instrument transcription model
+        that achieves 80-85% accuracy (vs 70% for basic-pitch).
+
+        Args:
+            audio_path: Path to audio file (should be 'other' stem for piano)
+
+        Returns:
+            Path to generated MIDI file
+
+        Raises:
+            RuntimeError: If transcription fails
+        """
+        try:
+            from yourmt3_wrapper import YourMT3Transcriber
+        except ImportError:
+            # Try adding backend directory to path
+            import sys
+            from pathlib import Path as PathLib
+            backend_dir = PathLib(__file__).parent
+            if str(backend_dir) not in sys.path:
+                sys.path.insert(0, str(backend_dir))
+            from yourmt3_wrapper import YourMT3Transcriber
+
+        print(f" Transcribing with YourMT3+ (direct call, device: {self.config.yourmt3_device})...")
+
+        try:
+            # Initialize transcriber (reuses loaded model from API if available)
+            transcriber = YourMT3Transcriber(
+                model_name="YPTF.MoE+Multi (noPS)",
+                device=self.config.yourmt3_device
             )

+            # Transcribe audio
+            output_dir = self.temp_dir / "yourmt3_output"
+            output_dir.mkdir(exist_ok=True)

+            midi_path = transcriber.transcribe_audio(audio_path, output_dir)
+
+            print(f" ✓ YourMT3+ transcription complete")
+            return midi_path
+
+        except Exception as e:
+            raise RuntimeError(f"YourMT3+ transcription failed: {e}")

     def _get_midi_range(self, midi_path: Path) -> int:
         """
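The primary/fallback control flow above reduces to: try YourMT3+, and on any exception fall through to basic-pitch. A minimal sketch of that chain with the transcribers abstracted as callables (`primary`/`fallback` are hypothetical stand-ins, not names from the diff):

```python
def transcribe_with_fallback(audio_path, primary, fallback):
    """Try `primary`; on any failure, log and run `fallback` instead.

    Returns (midi_path, which_transcriber). Mirrors the YourMT3+ ->
    basic-pitch chain in transcribe_to_midi(), minus the file handling.
    """
    try:
        return primary(audio_path), "yourmt3"
    except Exception as e:
        # Any error from the primary transcriber triggers the fallback
        print(f" primary transcriber failed ({e}); falling back")
        return fallback(audio_path), "basic-pitch"
```

Catching bare `Exception` here is deliberate: the fallback should engage for model-load failures, device errors, and bad audio alike.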
         mid = mido.MidiFile(midi_path)

         # 3. Convert beat times (seconds) to MIDI ticks
+        # Formula: seconds * (ticks_per_beat / seconds_per_beat)
         seconds_per_beat = 60.0 / tempo_bpm
         beat_ticks = []
         for beat_time in beats:
+            ticks = int(beat_time * mid.ticks_per_beat / seconds_per_beat)
             beat_ticks.append(ticks)

         # 4. Quantize note onsets to nearest beat (preserve durations)
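A quick sanity check of the tick-conversion formula, as a standalone function (values chosen for illustration): at 120 BPM a beat lasts 0.5 s, so with 480 ticks per beat a time of 1.0 s is two beats, i.e. tick 960.

```python
def seconds_to_ticks(beat_time_s: float, ticks_per_beat: int, tempo_bpm: float) -> int:
    # Same formula as the pipeline: seconds * ticks_per_beat / seconds_per_beat
    seconds_per_beat = 60.0 / tempo_bpm
    return int(beat_time_s * ticks_per_beat / seconds_per_beat)
```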
         for msg in track:
             abs_time += msg.time
+            # Skip pitchwheel messages (not needed for notation, can cause timing issues)
+            if msg.type == 'pitchwheel':
+                continue
             messages_with_abs_time.append((abs_time, msg))

         # Quantize note_on events to nearest beat
+        note_on_times = {}  # Track quantized onset times: (channel, note) -> quantized_time
+        note_original_times = {}  # Track original onset times: (channel, note) -> original_time

         for i, (abs_time, msg) in enumerate(messages_with_abs_time):
             if msg.type == 'note_on' and msg.velocity > 0:
                 # Find nearest beat
                 nearest_beat = min(beat_ticks, key=lambda b: abs(b - abs_time))

+                # Store original time BEFORE quantization
+                note_original_times[(msg.channel, msg.note)] = abs_time
+
                 # Update absolute time to nearest beat
                 messages_with_abs_time[i] = (nearest_beat, msg)
             elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
                 # Preserve duration by keeping offset relative to quantized onset
+                key = (msg.channel, msg.note)
+                if key in note_on_times and key in note_original_times:
+                    onset_time = note_on_times[key]
+                    original_onset_time = note_original_times[key]
+
+                    # Calculate duration using original times
+                    original_duration = abs_time - original_onset_time

                     # Keep same duration from quantized onset
                     new_offset = onset_time + original_duration
                     messages_with_abs_time[i] = (new_offset, msg)

+                    del note_on_times[key]
+                    del note_original_times[key]

         # Rebuild track with new timings
         track.clear()
         previous_time = 0
+        last_note_time = 0

         for abs_time, msg in sorted(messages_with_abs_time, key=lambda x: x[0]):
+            # Skip end_of_track for now - we'll add it at the end
+            if msg.type == 'end_of_track':
+                continue
+
             msg.time = max(0, abs_time - previous_time)
             previous_time = abs_time
             track.append(msg)

+            # Track last note time
+            if msg.type in ('note_on', 'note_off'):
+                last_note_time = abs_time
+
+        # Add end_of_track after last note with small delta
+        from mido import MetaMessage
+        end_msg = MetaMessage('end_of_track', time=10)
+        track.append(end_msg)
+
         # 5. Save beat-quantized MIDI
         beat_sync_path = midi_path.with_stem(f"{midi_path.stem}_beat_sync")
         mid.save(beat_sync_path)
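The core of the quantization step above, onset snapping that preserves duration, can be sketched without the mido bookkeeping, on plain (onset, duration) pairs in ticks (a simplified stand-in for the message-level logic):

```python
def quantize_onsets(notes, beat_ticks):
    """Snap each note's onset to the nearest beat, keeping its duration.

    `notes` is a list of (onset_tick, duration_tick) pairs; `beat_ticks`
    is a list of beat positions in ticks. Same nearest-beat rule as
    beat_synchronous_quantize(): min over |beat - onset|.
    """
    out = []
    for onset, duration in notes:
        nearest = min(beat_ticks, key=lambda b: abs(b - onset))
        out.append((nearest, duration))
    return out
```

Keeping durations fixed while moving onsets is what prevents quantization from clipping short notes to zero length.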
         return output_path

+    def generate_musicxml_minimal(self, midi_path: Path, source_audio: Path) -> Path:
+        """
+        Generate MusicXML from clean MIDI (YourMT3+ output) with minimal post-processing.
+
+        This is a simplified pipeline for YourMT3+ which produces clean, well-quantized MIDI.
+        Skips all MIDI-level post-processing and only applies music21-level operations.
+
+        Steps:
+        1. Detect tempo, time signature, key from audio
+        2. Parse MIDI with music21
+        3. Create measures
+        4. Optional: Split into grand staff (treble + bass)
+        5. Export MusicXML
+
+        Args:
+            midi_path: Clean MIDI from YourMT3+ (no post-processing needed)
+            source_audio: Audio file for metadata detection
+
+        Returns:
+            Path to generated MusicXML file
+        """
+        from music21 import converter, tempo, meter, clef
+
+        self.progress(92, "musicxml", "Detecting metadata from audio")
+
+        # Step 1: Detect metadata from audio
+        if source_audio.exists():
+            # Detect tempo
+            detected_tempo, tempo_confidence = self.detect_tempo_from_audio(source_audio)
+            # Detect time signature
+            time_sig_num, time_sig_denom, ts_confidence = self.detect_time_signature(source_audio, detected_tempo)
+        else:
+            print(" WARNING: Audio file not found, using defaults")
+            detected_tempo, tempo_confidence = 120.0, 0.0
+            time_sig_num, time_sig_denom, ts_confidence = 4, 4, 0.0
+
+        print(f" Detected: {detected_tempo} BPM (confidence: {tempo_confidence:.2f})")
+        print(f" Detected: {time_sig_num}/{time_sig_denom} time (confidence: {ts_confidence:.2f})")
+
+        self.progress(93, "musicxml", "Parsing MIDI")
+
+        # Step 2: Parse MIDI
+        score = converter.parse(midi_path)
+
+        self.progress(94, "musicxml", "Detecting key signature")
+
+        # Step 3: Detect key signature
+        detected_key, key_confidence = self.detect_key_ensemble(score, source_audio)
+        print(f" Detected key: {detected_key} (confidence: {key_confidence:.2f})")
+
+        self.progress(96, "musicxml", "Creating measures")
+
+        # Step 4: Create measures
+        score = score.makeMeasures()
+
+        # Step 5: Grand staff split (optional)
+        if self.config.enable_grand_staff:
+            print(f" Splitting into grand staff (split at MIDI note {self.config.middle_c_split})...")
+            score = self._split_into_grand_staff(score)
+            print(f" Created {len(score.parts)} staves (treble + bass)")
+
+            # Insert metadata into each part
+            for part in score.parts:
+                measures = part.getElementsByClass('Measure')
+                if measures:
+                    first_measure = measures[0]
+                    first_measure.insert(0, tempo.MetronomeMark(number=detected_tempo))
+                    first_measure.insert(0, detected_key)
+                    first_measure.insert(0, meter.TimeSignature(f'{time_sig_num}/{time_sig_denom}'))
+        else:
+            # Single staff: add treble clef and metadata
+            for part in score.parts:
+                part.insert(0, clef.TrebleClef())
+                part.insert(0, detected_key)
+                part.insert(0, meter.TimeSignature(f'{time_sig_num}/{time_sig_denom}'))
+                part.insert(0, tempo.MetronomeMark(number=detected_tempo))
+                part.partName = "Piano"
+
+        self.progress(97, "musicxml", "Normalizing durations")
+
+        # Step 5.5: Fix any impossible durations that music21 can't export
+        # YourMT3+ output is clean, but music21 has limitations on complex durations
+        score = self._remove_impossible_durations(score)
+
+        self.progress(98, "musicxml", "Exporting MusicXML")
+
+        # Step 6: Export MusicXML
+        output_path = self.temp_dir / f"{self.job_id}.musicxml"
+
+        print(f" Writing MusicXML to {output_path}...")
+        try:
+            score.write('musicxml', fp=str(output_path), makeNotation=False)
+        except Exception as e:
+            # If export still fails due to complex durations, try with makeNotation=True
+            # This lets music21 handle the complex durations automatically
+            print(f" WARNING: Export failed with makeNotation=False: {e}")
+            print(f" Retrying with makeNotation=True (auto-notation)...")
+            score.write('musicxml', fp=str(output_path), makeNotation=True)
+
+        print(f" ✓ MusicXML generation complete")
+        return output_path
+
     def _deduplicate_overlapping_notes(self, score) -> stream.Score:
         """
         Deduplicate overlapping notes from basic-pitch to prevent MusicXML corruption.
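The export retry at the end of `generate_musicxml_minimal` follows a simple strict-then-permissive pattern. A sketch with the writer abstracted as a callable (`write` is a hypothetical stand-in for `score.write`, called as `write(fp, make_notation)`):

```python
def write_with_notation_fallback(write, path):
    """Try a strict export first; on failure, retry with auto-notation.

    Returns which mode succeeded. Mirrors the makeNotation=False ->
    makeNotation=True retry in generate_musicxml_minimal().
    """
    try:
        write(path, False)   # strict: keep durations exactly as transcribed
        return "strict"
    except Exception:
        write(path, True)    # permissive: let the library renotate durations
        return "auto"
```

The strict pass preserves the transcriber's timing exactly; the permissive pass trades fidelity for a guaranteed valid export.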
         """
         print(f" Using madmom multi-scale tempo detection (eliminates octave errors)...")

+        # Process audio to get beat activations
+        act = madmom.features.beats.RNNBeatProcessor()(str(audio_path))
+
+        # Multi-scale tempo processor (operates on activations, not raw audio)
         tempo_processor = madmom.features.tempo.TempoEstimationProcessor(fps=100)

         # Get tempo candidates from multi-scale analysis
+        tempo_result = tempo_processor(act)

+        # tempo_result is 2D array where each row is [tempo_bpm, strength]
+        # Extract candidates
         tempos = []
         strengths = []
+        for row in tempo_result:
+            tempos.append(float(row[0]))    # tempo in BPM
+            strengths.append(float(row[1]))  # strength/confidence

         if not tempos:
             print(f" WARNING: Madmom returned no tempo candidates, using default 120 BPM")
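Selecting a single tempo from the candidate rows is the next step after the extraction above. A minimal sketch of that selection on madmom-style `[tempo_bpm, strength]` rows (the 120 BPM default matches the warning path; the max-by-strength rule is an assumption about how the candidates are consumed downstream):

```python
def pick_tempo(tempo_result, default=120.0):
    """Return the strongest tempo candidate, or `default` when empty.

    `tempo_result` rows are [tempo_bpm, strength], as produced by
    TempoEstimationProcessor.
    """
    if not len(tempo_result):
        return default
    best = max(tempo_result, key=lambda row: row[1])  # highest strength wins
    return float(best[0])
```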
             print(f" WARNING: madmom not available, falling back to librosa beat tracking")
             return self._detect_beats_librosa(audio_path)

+        print(f" Detecting beats with madmom...")

+        # Process audio to get beat activations
+        beat_act = madmom.features.beats.RNNBeatProcessor()(str(audio_path))

+        # Beat tracking processor (operates on activations)
+        beat_processor = madmom.features.beats.BeatTrackingProcessor(fps=100)
+        beats = beat_processor(beat_act)

+        # Estimate downbeats (every 4th beat for 4/4 time - simple heuristic)
+        # More sophisticated downbeat detection with madmom can be added later if needed
+        downbeats = beats[::4] if len(beats) > 0 else np.array([])

+        print(f" Detected {len(beats)} beats, {len(downbeats)} estimated downbeats")

         return beats, downbeats
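The every-Nth-beat downbeat heuristic used above can be shown on plain lists (here on beat times in seconds; the pipeline applies the same slice to a NumPy array):

```python
def estimate_downbeats(beats, beats_per_bar=4):
    """Take every `beats_per_bar`-th beat as a downbeat.

    The simple 4/4 heuristic from _detect_beats: assumes the first
    detected beat is a downbeat.
    """
    return list(beats[::beats_per_bar]) if len(beats) else []
```

This is only an approximation; it is wrong whenever the first detected beat is not beat 1 of a bar or the meter is not 4/4, which is why the diff notes real downbeat detection as future work.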
backend/requirements.txt
CHANGED

@@ -12,16 +12,23 @@
 yt-dlp>=2025.12.8
 soundfile==0.12.1
 librosa>=0.11.0
+Cython  # Required by madmom
 madmom>=0.16.1  # Zero-tradeoff: Beat tracking and multi-scale tempo detection
 scipy
 torch>=2.0.0
 torchaudio>=2.9.1
-torchcodec>=0.9.1
 demucs>=3.0.6

 # Pitch detection (macOS default runtime is CoreML)
-basic-pitch==0.4.0  # …
+basic-pitch==0.4.0  # Fallback transcriber when YourMT3+ service unavailable
+
+# YourMT3+ Transcription (integrated into main service)
+lightning>=2.2.1
+transformers==4.45.1
+einops>=0.7.0
+deprecated
+wandb>=0.15.0
+gradio_log

 # Music Processing
 music21==9.3.0
backend/scripts/diagnose_pipeline.py
CHANGED

@@ -17,7 +17,7 @@ import mido
 # Add parent directory to path for imports
 sys.path.insert(0, str(Path(__file__).parent.parent))

-from config import settings
+from app_config import settings


 def analyze_audio_file(audio_path: Path, label: str):
backend/scripts/test_accuracy.py
CHANGED

@@ -9,7 +9,7 @@ from pathlib import Path
 sys.path.insert(0, str(Path(__file__).parent.parent))

 from pipeline import TranscriptionPipeline
-from config import settings
+from app_config import settings
 import json
 from datetime import datetime
backend/scripts/test_e2e.py
CHANGED

@@ -15,7 +15,7 @@ from pathlib import Path
 sys.path.insert(0, str(Path(__file__).parent.parent))

 from pipeline import TranscriptionPipeline
-from config import settings
+from app_config import settings
 import time
backend/scripts/test_mps_performance.sh
ADDED

@@ -0,0 +1,55 @@
+#!/bin/bash
+# Test MPS performance with optimizations
+
+echo "====================================="
+echo "YourMT3+ MPS Performance Test"
+echo "====================================="
+echo ""
+
+# Start service in background
+echo "Starting transcription service with MPS + float16..."
+cd /Users/calebhan/Documents/Coding/Personal/rescored/backend/transcription-service
+source ../backend/.venv/bin/activate 2>/dev/null || true
+python service.py > service.log 2>&1 &
+SERVICE_PID=$!
+
+echo "Service PID: $SERVICE_PID"
+echo "Waiting for service to initialize (30s)..."
+sleep 30
+
+# Check health
+echo ""
+echo "Checking service health..."
+curl -s http://localhost:8001/health | python -m json.tool
+
+# Run test transcription with timing
+echo ""
+echo "Running test transcription..."
+echo "Audio file: ../../audio.wav"
+echo ""
+
+START_TIME=$(date +%s)
+curl -X POST "http://localhost:8001/transcribe" \
+  -F "file=@../../audio.wav" \
+  --output test_mps_output.mid \
+  --max-time 600
+
+END_TIME=$(date +%s)
+ELAPSED=$((END_TIME - START_TIME))
+
+echo ""
+echo "====================================="
+echo "Results:"
+echo "====================================="
+echo "Processing time: ${ELAPSED}s"
+echo "MIDI output size: $(ls -lh test_mps_output.mid 2>/dev/null | awk '{print $5}')"
+echo ""
+echo "Service log (last 20 lines):"
+tail -20 service.log
+echo ""
+echo "====================================="
+
+# Cleanup
+echo "Stopping service (PID: $SERVICE_PID)..."
+kill $SERVICE_PID 2>/dev/null || true
+echo "Done!"
backend/scripts/test_quick_verify.py
CHANGED

@@ -113,7 +113,7 @@ def main():
         print(f" - {r['video_id']:20s} | {error_preview}")

     # Save results
-    from config import settings
+    from app_config import settings
     output_path = Path(settings.storage_path) / "quick_verify_results.json"
     output_path.parent.mkdir(parents=True, exist_ok=True)
backend/tasks.py
CHANGED

@@ -6,7 +6,7 @@ import redis
 import json
 from datetime import datetime
 from pathlib import Path
-from config import settings
+from app_config import settings
 import shutil

 # Redis client

@@ -26,6 +26,7 @@ class TranscriptionTask(Task):
             stage: Current stage name
             message: Status message
         """
+        print(f"[PROGRESS] {progress}% - {stage} - {message}")
         job_key = f"job:{job_id}"

         # Update Redis hash

@@ -45,7 +46,8 @@ class TranscriptionTask(Task):
             "message": message,
             "timestamp": datetime.utcnow().isoformat(),
         }
-        redis_client.publish(f"job:{job_id}:updates", json.dumps(update))
+        num_subscribers = redis_client.publish(f"job:{job_id}:updates", json.dumps(update))
+        print(f"[PROGRESS] Published to {num_subscribers} subscribers")


 @celery_app.task(base=TranscriptionTask, bind=True)

@@ -109,8 +111,8 @@ def process_transcription_task(self, job_id: str):
     redis_client.hset(f"job:{job_id}", mapping={
         "status": "completed",
         "progress": 100,
-        "output_path": str(output_path),
-        "midi_path": str(midi_path) if temp_midi_path.exists() else "",
+        "output_path": str(output_path.absolute()),
+        "midi_path": str(midi_path.absolute()) if temp_midi_path.exists() else "",
         "completed_at": datetime.utcnow().isoformat(),
     })
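The tasks.py hunks above both log progress and publish it as JSON on the `job:{job_id}:updates` Redis channel. A small sketch of the payload round-trip a subscriber (e.g. an SSE or WebSocket bridge) performs; the `message` and `timestamp` fields come from the hunk, while `build_progress_update` and the `job_id`/`progress`/`stage` fields are illustrative assumptions:

```python
import json
from datetime import datetime

def build_progress_update(job_id: str, progress: int, stage: str, message: str) -> str:
    """Serialize an update in roughly the shape published to job:{job_id}:updates."""
    return json.dumps({
        "job_id": job_id,      # assumed field; the hunk only shows message/timestamp
        "progress": progress,  # assumed field
        "stage": stage,        # assumed field
        "message": message,
        "timestamp": datetime.utcnow().isoformat(),
    })

# A subscriber on the channel decodes it back into a dict:
payload = json.loads(build_progress_update("abc123", 40, "transcription", "Running YourMT3+"))
```

Note that Redis `PUBLISH` returns the number of clients that received the message, which is what the new `num_subscribers` log line surfaces.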
backend/tests/test_pipeline_fixes.py
CHANGED

@@ -4,7 +4,7 @@ from pathlib import Path
 import mido
 from music21 import note, chord, stream, converter
 from pipeline import TranscriptionPipeline
-from config import Settings
+from app_config import Settings


 @pytest.fixture
backend/tests/test_pipeline_monophonic.py
CHANGED

@@ -3,7 +3,7 @@ import pytest
 import mido
 from pathlib import Path
 from pipeline import TranscriptionPipeline
-from config import Settings
+from app_config import Settings


 @pytest.fixture
backend/tests/test_utils.py
CHANGED

@@ -1,6 +1,6 @@
 """Unit tests for utility functions."""
 import pytest
-from utils import validate_youtube_url, check_video_availability
+from app_utils import validate_youtube_url, check_video_availability
 from unittest.mock import patch, MagicMock
 import yt_dlp
backend/tests/test_yourmt3_integration.py
ADDED

@@ -0,0 +1,296 @@
+"""
+Tests for YourMT3+ transcription service integration.
+
+Tests cover:
+- YourMT3+ service health check
+- Successful transcription
+- Fallback to basic-pitch on service failure
+- Fallback to basic-pitch when service disabled
+"""
+import pytest
+from pathlib import Path
+from unittest.mock import Mock, patch, MagicMock
+import mido
+import tempfile
+import shutil
+
+from pipeline import TranscriptionPipeline
+from app_config import Settings
+
+
+@pytest.fixture
+def temp_storage():
+    """Create temporary storage directory for tests."""
+    temp_dir = Path(tempfile.mkdtemp())
+    yield temp_dir
+    shutil.rmtree(temp_dir)
+
+
+@pytest.fixture
+def test_audio_file(temp_storage):
+    """Create a minimal test audio file."""
+    import soundfile as sf
+    import numpy as np
+
+    audio_path = temp_storage / "test_audio.wav"
+    # Create 1 second of silence
+    sample_rate = 44100
+    audio_data = np.zeros(sample_rate)
+    sf.write(str(audio_path), audio_data, sample_rate)
+
+    return audio_path
+
+
+@pytest.fixture
+def mock_yourmt3_midi(temp_storage):
+    """Create a mock MIDI file that YourMT3+ would return."""
+    midi_path = temp_storage / "yourmt3_output.mid"
+
+    # Create a simple MIDI file with one note
+    mid = mido.MidiFile()
+    track = mido.MidiTrack()
+    mid.tracks.append(track)
+
+    track.append(mido.Message('note_on', note=60, velocity=80, time=0))
+    track.append(mido.Message('note_off', note=60, velocity=0, time=480))
+    track.append(mido.MetaMessage('end_of_track', time=0))
+
+    mid.save(str(midi_path))
+    return midi_path
+
+
+@pytest.fixture
+def mock_basic_pitch_midi(temp_storage):
+    """Create a mock MIDI file that basic-pitch would return."""
+    midi_path = temp_storage / "basic_pitch_output.mid"
+
+    # Create a simple MIDI file with one note
+    mid = mido.MidiFile()
+    track = mido.MidiTrack()
+    mid.tracks.append(track)
+
+    track.append(mido.Message('note_on', note=62, velocity=70, time=0))
+    track.append(mido.Message('note_off', note=62, velocity=0, time=480))
+    track.append(mido.MetaMessage('end_of_track', time=0))
+
+    mid.save(str(midi_path))
+    return midi_path
+
+
+class TestYourMT3Integration:
+    """Test suite for YourMT3+ transcription service integration."""
+
+    def test_yourmt3_enabled_by_default(self):
+        """Test that YourMT3+ is enabled by default in config."""
+        config = Settings()
+        assert config.use_yourmt3_transcription is True
+
+    def test_yourmt3_service_health_check(self, temp_storage):
+        """Test YourMT3+ service health check endpoint."""
+        config = Settings(use_yourmt3_transcription=True)
+        pipeline = TranscriptionPipeline(
+            job_id="test_health",
+            youtube_url="https://youtube.com/test",
+            storage_path=temp_storage,
+            config=config
+        )
+
+        with patch('requests.get') as mock_get:
+            # Mock successful health check
+            mock_response = Mock()
+            mock_response.json.return_value = {
+                "status": "healthy",
+                "model_loaded": True,
+                "device": "mps"
+            }
+            mock_response.raise_for_status = Mock()
+            mock_get.return_value = mock_response
+
+            # Call transcribe_with_yourmt3 (which includes health check)
+            with patch('requests.post') as mock_post:
+                mock_post_response = Mock()
+                mock_post_response.content = b"mock midi data"
+                mock_post.return_value = mock_post_response
+
+                with patch('builtins.open', create=True):
+                    with patch('pathlib.Path.exists', return_value=True):
+                        # This would fail in real scenario, but we're testing health check
+                        try:
+                            pipeline.transcribe_with_yourmt3(temp_storage / "test.wav")
+                        except:
+                            pass  # Expected to fail, we just want to verify health check was called
+
+            # Verify health check was called
+            assert mock_get.called
+            assert "/health" in str(mock_get.call_args)
+
+    def test_yourmt3_transcription_success(self, temp_storage, test_audio_file, mock_yourmt3_midi):
+        """Test successful YourMT3+ transcription."""
+        config = Settings(use_yourmt3_transcription=True)
+        pipeline = TranscriptionPipeline(
+            job_id="test_success",
+            youtube_url="https://youtube.com/test",
+            storage_path=temp_storage,
+            config=config
+        )
+
+        with patch('requests.get') as mock_get:
+            # Mock successful health check
+            mock_health = Mock()
+            mock_health.json.return_value = {"status": "healthy", "model_loaded": True}
+            mock_health.raise_for_status = Mock()
+            mock_get.return_value = mock_health
+
+            with patch('requests.post') as mock_post:
+                # Mock successful transcription
+                with open(mock_yourmt3_midi, 'rb') as f:
+                    mock_midi_data = f.read()
+
+                mock_response = Mock()
+                mock_response.content = mock_midi_data
+                mock_post.return_value = mock_response
+
+                result = pipeline.transcribe_with_yourmt3(test_audio_file)
+
+                assert result.exists()
+                assert result.suffix == '.mid'
+
+                # Verify MIDI file is valid
+                mid = mido.MidiFile(result)
+                assert len(mid.tracks) > 0
+
+    def test_yourmt3_fallback_on_service_error(self, temp_storage, test_audio_file):
+        """Test fallback to basic-pitch when YourMT3+ service fails."""
+        config = Settings(use_yourmt3_transcription=True)
+        pipeline = TranscriptionPipeline(
+            job_id="test_fallback",
+            youtube_url="https://youtube.com/test",
+            storage_path=temp_storage,
+            config=config
+        )
+
+        with patch('requests.get') as mock_get:
+            # Mock health check failure
+            mock_get.side_effect = Exception("Service unavailable")
+
+            with patch('basic_pitch.inference.predict_and_save') as mock_bp:
+                # Mock basic-pitch creating a MIDI file
+                def create_basic_pitch_midi(*args, **kwargs):
+                    output_dir = Path(kwargs['output_directory'])
+                    audio_path = Path(kwargs['audio_path_list'][0])
+                    midi_path = output_dir / f"{audio_path.stem}_basic_pitch.mid"
+
+                    # Create simple MIDI
+                    mid = mido.MidiFile()
+                    track = mido.MidiTrack()
+                    mid.tracks.append(track)
+                    track.append(mido.Message('note_on', note=64, velocity=75, time=0))
+                    track.append(mido.Message('note_off', note=64, velocity=0, time=480))
+                    track.append(mido.MetaMessage('end_of_track', time=0))
+                    mid.save(str(midi_path))
+
+                mock_bp.side_effect = create_basic_pitch_midi
+
+                # This should use basic-pitch as fallback
+                result = pipeline.transcribe_to_midi(
+                    audio_path=test_audio_file
+                )
+
+                assert result.exists()
+                assert result.suffix == '.mid'
+
+                # Verify basic-pitch was called
+                assert mock_bp.called
+
+    def test_yourmt3_disabled_uses_basic_pitch(self, temp_storage, test_audio_file):
+        """Test that basic-pitch is used when YourMT3+ is disabled."""
+        config = Settings(use_yourmt3_transcription=False)
+        pipeline = TranscriptionPipeline(
+            job_id="test_disabled",
+            youtube_url="https://youtube.com/test",
+            storage_path=temp_storage,
+            config=config
+        )
+
+        with patch('basic_pitch.inference.predict_and_save') as mock_bp:
+            # Mock basic-pitch creating a MIDI file
+            def create_basic_pitch_midi(*args, **kwargs):
+                output_dir = Path(kwargs['output_directory'])
+                audio_path = Path(kwargs['audio_path_list'][0])
+                midi_path = output_dir / f"{audio_path.stem}_basic_pitch.mid"
+
+                # Create simple MIDI
+                mid = mido.MidiFile()
+                track = mido.MidiTrack()
+                mid.tracks.append(track)
+                track.append(mido.Message('note_on', note=65, velocity=78, time=0))
+                track.append(mido.Message('note_off', note=65, velocity=0, time=480))
+                track.append(mido.MetaMessage('end_of_track', time=0))
+                mid.save(str(midi_path))
+
+            mock_bp.side_effect = create_basic_pitch_midi
+
+            result = pipeline.transcribe_to_midi(
+                audio_path=test_audio_file
+            )
+
+            assert result.exists()
+            assert result.suffix == '.mid'
+
+            # Verify basic-pitch was called and YourMT3+ was not
+            assert mock_bp.called
+
+    def test_yourmt3_service_timeout(self, temp_storage, test_audio_file):
+        """Test that timeouts are handled gracefully with fallback."""
+        config = Settings(
+            use_yourmt3_transcription=True,
+            transcription_service_timeout=5
+        )
+        pipeline = TranscriptionPipeline(
+            job_id="test_timeout",
+            youtube_url="https://youtube.com/test",
+            storage_path=temp_storage,
+            config=config
+        )
+
+        import requests
+
+        with patch('requests.get') as mock_get:
+            # Mock health check success
+            mock_health = Mock()
+            mock_health.json.return_value = {"status": "healthy", "model_loaded": True}
+            mock_get.return_value = mock_health
+
+            with patch('requests.post') as mock_post:
+                # Mock timeout
+                mock_post.side_effect = requests.exceptions.Timeout()
+
+                with patch('basic_pitch.inference.predict_and_save') as mock_bp:
+                    # Mock basic-pitch creating a MIDI file
+                    def create_basic_pitch_midi(*args, **kwargs):
+                        output_dir = Path(kwargs['output_directory'])
+                        audio_path = Path(kwargs['audio_path_list'][0])
+                        midi_path = output_dir / f"{audio_path.stem}_basic_pitch.mid"
+
+                        # Create simple MIDI
+                        mid = mido.MidiFile()
+                        track = mido.MidiTrack()
+                        mid.tracks.append(track)
+                        track.append(mido.Message('note_on', note=66, velocity=80, time=0))
+                        track.append(mido.Message('note_off', note=66, velocity=0, time=480))
+                        track.append(mido.MetaMessage('end_of_track', time=0))
+                        mid.save(str(midi_path))
+
+                    mock_bp.side_effect = create_basic_pitch_midi
+
+                    result = pipeline.transcribe_to_midi(
+                        audio_path=test_audio_file
+                    )
+
+                    assert result.exists()
+                    # Verify fallback to basic-pitch
+                    assert mock_bp.called
+
+
+if __name__ == "__main__":
+    pytest.main([__file__, "-v"])
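The tests above pin down one contract: any failure on the YourMT3+ path (health-check error, timeout, bad response) must degrade to basic-pitch, and disabling the flag must skip YourMT3+ entirely. That decision logic can be sketched with injected callables so it runs without either service (function and parameter names here are illustrative, not the pipeline's actual API):

```python
from typing import Callable, Tuple

def transcribe_with_fallback(
    audio_path: str,
    yourmt3_fn: Callable[[str], str],
    basic_pitch_fn: Callable[[str], str],
    use_yourmt3: bool = True,
) -> Tuple[str, str]:
    """Return (midi_path, engine_used), mirroring the fallback the tests assert."""
    if use_yourmt3:
        try:
            return yourmt3_fn(audio_path), "yourmt3"
        except Exception:
            pass  # health-check failure, timeout, or bad response: fall through
    return basic_pitch_fn(audio_path), "basic_pitch"

# Service down -> basic-pitch result, as test_yourmt3_fallback_on_service_error expects:
def broken_service(path: str) -> str:
    raise ConnectionError("Service unavailable")

midi, engine = transcribe_with_fallback("test.wav", broken_service, lambda p: "out.mid")
# engine == "basic_pitch"
```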
backend/ymt/amt
ADDED

@@ -0,0 +1 @@
+yourmt3_core/amt
backend/ymt/yourmt3_core
ADDED

@@ -0,0 +1 @@
+Subproject commit 8a4fbcabdf660f9f06bcb6f12bbf00d9b3139b98
backend/yourmt3_wrapper.py
ADDED

@@ -0,0 +1,211 @@
+"""
+YourMT3 Transcription Wrapper
+
+This module provides a simplified interface to the YourMT3+ model for
+music transcription. It wraps the HuggingFace Spaces implementation
+to provide a clean API for transcription services.
+
+Based on: https://huggingface.co/spaces/mimbres/YourMT3
+"""
+
+import sys
+import os
+from pathlib import Path
+from typing import Optional
+
+# Add paths for imports
+_base_dir = Path(__file__).parent
+sys.path.insert(0, str(_base_dir / "ymt" / "yourmt3_core"))  # For model_helper
+sys.path.insert(0, str(_base_dir / "ymt" / "yourmt3_core" / "amt" / "src"))  # For model/utils
+
+import torch
+import torchaudio
+
+
+class YourMT3Transcriber:
+    """
+    Wrapper class for YourMT3+ music transcription model.
+
+    This class handles model loading and provides a simple transcribe() method
+    for converting audio files to MIDI.
+    """
+
+    def __init__(
+        self,
+        model_name: str = "YPTF.MoE+Multi (noPS)",
+        device: Optional[str] = None,
+        checkpoint_dir: Optional[Path] = None
+    ):
+        """
+        Initialize the YourMT3 transcriber.
+
+        Args:
+            model_name: Model variant to use. Options:
+                - "YMT3+"
+                - "YPTF+Single (noPS)"
+                - "YPTF+Multi (PS)"
+                - "YPTF.MoE+Multi (noPS)" (default, best quality)
+                - "YPTF.MoE+Multi (PS)"
+            device: Device to run on ('cuda', 'cpu', or None for auto-detect)
+            checkpoint_dir: Directory containing model checkpoints
+        """
+        self.model_name = model_name
+        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
+        self.checkpoint_dir = checkpoint_dir or Path(__file__).parent / "ymt" / "yourmt3_core" / "logs" / "2024"
+
+        print(f"Initializing YourMT3+ ({model_name}) on {self.device}")
+        print(f"Checkpoint dir: {self.checkpoint_dir}")
+
+        # Import after path setup
+        try:
+            from model_helper import load_model_checkpoint
+            self._load_model_checkpoint = load_model_checkpoint
+        except ImportError as e:
+            raise RuntimeError(
+                f"Failed to import YourMT3 model helpers: {e}\n"
+                f"Make sure the amt/src directory is properly set up in yourmt3_core/"
+            )
+
+        # Load model
+        self.model = self._load_model(model_name)
+
+    def _get_model_args(self, model_name: str) -> list:
+        """Get command-line arguments for model loading."""
+        project = '2024'
+        # Use float16 for GPU devices (CUDA/MPS) for better performance and lower memory
+        precision = '16' if self.device in ['cuda', 'mps'] else '32'
+
+        if model_name == "YMT3+":
+            checkpoint = "notask_all_cross_v6_xk2_amp0811_gm_ext_plus_nops_b72@model.ckpt"
+            args = [checkpoint, '-p', project, '-pr', precision]
+        elif model_name == "YPTF+Single (noPS)":
+            checkpoint = "ptf_all_cross_rebal5_mirst_xk2_edr005_attend_c_full_plus_b100@model.ckpt"
+            args = [checkpoint, '-p', project, '-enc', 'perceiver-tf', '-ac', 'spec',
+                    '-hop', '300', '-atc', '1', '-pr', precision]
+        elif model_name == "YPTF+Multi (PS)":
+            checkpoint = "mc13_256_all_cross_v6_xk5_amp0811_edr005_attend_c_full_plus_2psn_nl26_sb_b26r_800k@model.ckpt"
+            args = [checkpoint, '-p', project, '-tk', 'mc13_full_plus_256',
+                    '-dec', 'multi-t5', '-nl', '26', '-enc', 'perceiver-tf',
+                    '-ac', 'spec', '-hop', '300', '-atc', '1', '-pr', precision]
+        elif model_name == "YPTF.MoE+Multi (noPS)":
+            checkpoint = "mc13_256_g4_all_v7_mt3f_sqr_rms_moe_wf4_n8k2_silu_rope_rp_b36_nops@last.ckpt"
+            args = [checkpoint, '-p', project, '-tk', 'mc13_full_plus_256', '-dec', 'multi-t5',
+                    '-nl', '26', '-enc', 'perceiver-tf', '-sqr', '1', '-ff', 'moe',
+                    '-wf', '4', '-nmoe', '8', '-kmoe', '2', '-act', 'silu', '-epe', 'rope',
+                    '-rp', '1', '-ac', 'spec', '-hop', '300', '-atc', '1', '-pr', precision]
+        elif model_name == "YPTF.MoE+Multi (PS)":
+            checkpoint = "mc13_256_g4_all_v7_mt3f_sqr_rms_moe_wf4_n8k2_silu_rope_rp_b80_ps2@model.ckpt"
+            args = [checkpoint, '-p', project, '-tk', 'mc13_full_plus_256', '-dec', 'multi-t5',
+                    '-nl', '26', '-enc', 'perceiver-tf', '-sqr', '1', '-ff', 'moe',
+                    '-wf', '4', '-nmoe', '8', '-kmoe', '2', '-act', 'silu', '-epe', 'rope',
+                    '-rp', '1', '-ac', 'spec', '-hop', '300', '-atc', '1', '-pr', precision]
+        else:
+            raise ValueError(f"Unknown model name: {model_name}")
+
+        return args
+
+    def _load_model(self, model_name: str):
+        """Load the YourMT3 model checkpoint."""
+        args = self._get_model_args(model_name)
+
+        print(f"Loading model with args: {' '.join(args)}")
+
+        # YourMT3 expects to be run from amt/src directory for checkpoint paths
+        # Save current directory and temporarily change to amt/src
+        original_cwd = os.getcwd()
+        amt_src_dir = _base_dir / "ymt" / "yourmt3_core" / "amt" / "src"
+
+        try:
+            os.chdir(str(amt_src_dir))
+
+            # Load on CPU first, then move to target device
+            model = self._load_model_checkpoint(args=args, device="cpu")
+            model.to(self.device)
+            model.eval()
+        finally:
+            # Always restore original directory
+            os.chdir(original_cwd)
+
+        # Enable optimizations for inference
+        if hasattr(torch, 'set_float32_matmul_precision'):
+            torch.set_float32_matmul_precision('high')  # Use TF32 on Ampere GPUs
+
+        # Disable gradient computation for inference (reduces memory)
+        for param in model.parameters():
+            param.requires_grad = False
+
+        print(f"Model loaded successfully on {self.device}")
+        return model
+
+    def transcribe_audio(self, audio_path: Path, output_dir: Optional[Path] = None) -> Path:
+        """
+        Transcribe an audio file to MIDI.
+
+        Args:
+            audio_path: Path to input audio file (WAV, MP3, etc.)
+            output_dir: Directory to save MIDI output (default: current directory)
+
+        Returns:
+            Path to the generated MIDI file
+
+        Raises:
+            FileNotFoundError: If audio_path doesn't exist
+            RuntimeError: If transcription fails
+        """
+        audio_path = Path(audio_path)
+        if not audio_path.exists():
+            raise FileNotFoundError(f"Audio file not found: {audio_path}")
+
+        output_dir = Path(output_dir) if output_dir else Path("./")
+        output_dir.mkdir(parents=True, exist_ok=True)
+
+        print(f"Transcribing: {audio_path}")
+
+        try:
+            # Import transcribe function
+            from model_helper import transcribe
+
+            # Prepare audio info dict (as expected by transcribe function)
+            audio_info = {
+                'filepath': str(audio_path),
+                'track_name': audio_path.stem
+            }
+
+            # Run transcription
+            midi_path = transcribe(self.model, audio_info)
+            midi_path = Path(midi_path)
+
+            # Move to output directory if needed
+            if midi_path.parent != output_dir:
+                final_path = output_dir / midi_path.name
+                midi_path.rename(final_path)
+                midi_path = final_path
+
+            print(f"Transcription complete: {midi_path}")
+            return midi_path
+
+        except Exception as e:
+            raise RuntimeError(f"Transcription failed: {e}")
+
+
+if __name__ == "__main__":
+    # Test the transcriber
+    import argparse
+
+    parser = argparse.ArgumentParser(description="Test YourMT3 Transcriber")
+    parser.add_argument("audio_file", type=str, help="Path to audio file")
+    parser.add_argument("--model", type=str, default="YPTF.MoE+Multi (noPS)",
+                        help="Model variant to use")
+    parser.add_argument("--output", type=str, default="./output",
+                        help="Output directory for MIDI files")
+    args = parser.parse_args()
+
+    # Initialize transcriber
+    transcriber = YourMT3Transcriber(model_name=args.model)
+
+    # Transcribe audio
+    midi_path = transcriber.transcribe_audio(
+        audio_path=Path(args.audio_file),
+        output_dir=Path(args.output)
+    )
+
+    print(f"MIDI saved to: {midi_path}")
docker-compose.yml
CHANGED
|
@@ -27,6 +27,7 @@ services:
|
|
| 27 |
- API_HOST=0.0.0.0
|
| 28 |
- API_PORT=8000
|
| 29 |
- CORS_ORIGINS=http://localhost:5173,http://localhost:3000
|
|
|
|
| 30 |
volumes:
|
| 31 |
- ./backend:/app
|
| 32 |
- ./storage:/app/storage
|
|
|
|
| 27 |
- API_HOST=0.0.0.0
|
| 28 |
- API_PORT=8000
|
| 29 |
- CORS_ORIGINS=http://localhost:5173,http://localhost:3000
|
| 30 |
+
- YOURMT3_DEVICE=cpu
|
| 31 |
volumes:
|
| 32 |
- ./backend:/app
|
| 33 |
- ./storage:/app/storage
|
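
The new `YOURMT3_DEVICE=cpu` entry tells the containerized backend which inference device to use. A minimal sketch of how such a variable might be resolved on the Python side (the `resolve_device` helper is illustrative, not part of the codebase):

```python
import os

def resolve_device(env_value):
    """Map a YOURMT3_DEVICE-style string to a supported device name.

    Unknown or missing values fall back to "cpu", which is the safe
    default inside a Docker container without GPU/MPS passthrough.
    """
    allowed = {"cpu", "cuda", "mps"}
    value = (env_value or "cpu").strip().lower()
    return value if value in allowed else "cpu"

# Read the same variable docker-compose sets
device = resolve_device(os.getenv("YOURMT3_DEVICE"))
```

Keeping the compose default at `cpu` matches the fact that MPS is only available when the backend runs natively on Apple Silicon, not inside a Linux container.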
docs/architecture/tech-stack.md
CHANGED
|
@@ -176,27 +176,36 @@ This document details the technology choices for Rescored, including alternative
|
|
| 176 |
|
| 177 |
---
|
| 178 |
|
| 179 |
-
### Transcription: basic-pitch
|
| 180 |
|
| 181 |
-
**Chosen**: basic-pitch (Spotify)
|
| 182 |
|
| 183 |
-
**Why**:
|
|
| 184 |
- Polyphonic transcription (multiple notes at once)
|
| 185 |
-
-
|
| 186 |
-
-
|
| 187 |
-
-
|
| 188 |
-
-
|
| 189 |
|
| 190 |
**Alternatives Considered**:
|
| 191 |
|
| 192 |
| Option | Pros | Cons | Why Not Chosen |
|
| 193 |
|--------|------|------|----------------|
|
| 194 |
-
| MT3 (Music Transformer) | Google's latest, multi-instrument aware | Slower, larger model, harder to run |
|
| 195 |
-
| Omnizart | Multi-instrument, good documentation |
|
| 196 |
| Tony (pYIN) | Excellent for monophonic | Only monophonic | Need polyphonic support |
|
| 197 |
-
| commercial APIs | Better quality | Expensive, privacy | Local processing preferred |
|
| 198 |
|
| 199 |
-
**Decision**:
|
| 200 |
|
| 201 |
---
|
| 202 |
|
|
|
|
| 176 |
|
| 177 |
---
|
| 178 |
|
| 179 |
+
### Transcription: YourMT3+ (Primary) + basic-pitch (Fallback)
|
| 180 |
|
| 181 |
+
**Chosen**: YourMT3+ (KAIST) with automatic fallback to basic-pitch (Spotify)
|
| 182 |
|
| 183 |
+
**Why YourMT3+**:
|
| 184 |
+
- **80-85% accuracy** vs 70% for basic-pitch
|
| 185 |
+
- State-of-the-art multi-instrument transcription model
|
| 186 |
+
- Mixture of Experts architecture for better quality
|
| 187 |
+
- Perceiver-TF encoder with RoPE position encoding
|
| 188 |
+
- Trained on diverse datasets (30k+ songs, 13 instrument classes)
|
| 189 |
+
- Open-source, actively maintained
|
| 190 |
+
- Optimized for Apple Silicon (MPS) with float16 precision (14x speedup)
|
| 191 |
+
|
| 192 |
+
**Why basic-pitch as Fallback**:
|
| 193 |
- Polyphonic transcription (multiple notes at once)
|
| 194 |
+
- Lighter weight, faster inference
|
| 195 |
+
- Simple setup, no model download required
|
| 196 |
+
- Good baseline quality (70% accuracy)
|
| 197 |
+
- Automatically used if YourMT3+ unavailable
|
| 198 |
|
| 199 |
**Alternatives Considered**:
|
| 200 |
|
| 201 |
| Option | Pros | Cons | Why Not Chosen |
|
| 202 |
|--------|------|------|----------------|
|
| 203 |
+
| MT3 (Music Transformer) | Google's latest, multi-instrument aware | Slower, larger model, harder to run | YourMT3+ more accurate |
|
| 204 |
+
| Omnizart | Multi-instrument, good documentation | Lower accuracy than YourMT3+, slower | Removed in favor of YourMT3+ |
|
| 205 |
| Tony (pYIN) | Excellent for monophonic | Only monophonic | Need polyphonic support |
|
| 206 |
+
| commercial APIs | Better quality | Expensive, privacy concerns | Local processing preferred |
|
| 207 |
|
| 208 |
+
**Decision**: YourMT3+ offers the best accuracy for self-hosted solution with intelligent fallback to basic-pitch for reliability.
|
| 209 |
|
| 210 |
---
|
| 211 |
|
docs/backend/pipeline.md
CHANGED
|
@@ -31,12 +31,15 @@ graph TB
|
|
| 31 |
end
|
| 32 |
|
| 33 |
subgraph Stage3["Stage 3: Transcription (50-90%)"]
|
| 34 |
-
|
| 35 |
-
|
|
|
|
| 36 |
Quantize["Quantize<br/>& Clean<br/>MIDI"]
|
| 37 |
-
MIDI["
|
| 38 |
|
| 39 |
-
|
|
|
|
|
|
|
| 40 |
BasicPitch --> Quantize
|
| 41 |
Quantize --> MIDI
|
| 42 |
end
|
|
@@ -298,83 +301,130 @@ class DemucsProcessor:
|
|
| 298 |
|
| 299 |
## Stage 3: Transcription (Audio → MIDI)
|
| 300 |
|
| 301 |
-
|
| 302 |
|
| 303 |
-
|
| 304 |
|
| 305 |
-
**
|
| 306 |
-
|
| 307 |
-
|
| 308 |
-
-
|
|
| 309 |
|
| 310 |
**Implementation**:
|
| 311 |
|
| 312 |
```python
|
| 313 |
-
|
| 314 |
-
from basic_pitch import ICASSP_2022_MODEL_PATH
|
| 315 |
-
import numpy as np
|
| 316 |
from pathlib import Path
|
| 317 |
-
from mido import MidiFile, MidiTrack, Message
|
| 318 |
|
| 319 |
-
class
|
| 320 |
-
def __init__(self):
|
| 321 |
-
|
| 322 |
-
|
| 323 |
|
| 324 |
-
def
|
| 325 |
"""
|
| 326 |
-
Transcribe audio to MIDI using
|
| 327 |
|
| 328 |
Returns:
|
| 329 |
Path to output MIDI file
|
| 330 |
"""
|
| 331 |
-
|
| 332 |
-
|
| 333 |
-
|
| 334 |
-
|
| 335 |
-
|
| 336 |
-
|
| 337 |
-
|
| 338 |
-
|
| 339 |
-
|
| 340 |
-
|
| 341 |
-
|
|
|
|
| 342 |
)
|
|
| 343 |
|
| 344 |
# Save MIDI
|
| 345 |
-
|
|
|
|
|
|
|
| 346 |
|
| 347 |
-
|
| 348 |
-
cleaned_midi = self.clean_midi(output_path)
|
| 349 |
|
| 350 |
-
|
|
|
|
|
|
|
|
|
|
| 351 |
|
| 352 |
-
|
| 353 |
-
|
| 354 |
-
|
| 355 |
-
|
| 356 |
-
|
| 357 |
-
|
| 358 |
-
|
| 359 |
-
|
| 360 |
-
|
| 361 |
-
|
| 362 |
-
|
| 363 |
-
for msg in track:
|
| 364 |
-
time += msg.time
|
| 365 |
-
if msg.type in ['note_on', 'note_off']:
|
| 366 |
-
# Quantize timing to nearest 16th
|
| 367 |
-
quantized_time = round(time / ticks_per_16th) * ticks_per_16th
|
| 368 |
-
msg.time = quantized_time - time
|
| 369 |
-
time = quantized_time
|
| 370 |
-
|
| 371 |
-
# Save cleaned MIDI
|
| 372 |
-
cleaned_path = midi_path.with_stem(f"{midi_path.stem}_clean")
|
| 373 |
-
mid.save(cleaned_path)
|
| 374 |
-
|
| 375 |
-
return cleaned_path
|
| 376 |
```
|
| 377 |
|
|
| 378 |
**Parameters** (Tempo-Adaptive):
|
| 379 |
- **onset_threshold**: Note onset confidence threshold
|
| 380 |
- Fast tempo (>140 BPM): 0.50 (stricter - fewer false positives)
|
|
@@ -387,9 +437,11 @@ class BasicPitchTranscriber:
|
|
| 387 |
- Slow: More permissive for soft dynamics
|
| 388 |
- **melodia_trick** (True): Improves monophonic melody detection
|
| 389 |
|
| 390 |
-
|
|
|
|
|
|
|
| 391 |
|
| 392 |
-
After basic-pitch generates raw MIDI, several post-processing steps clean up common artifacts:
|
| 393 |
|
| 394 |
1. **clean_midi()** - Filters and quantizes notes
|
| 395 |
- Removes notes outside piano range (A0-C8)
|
|
@@ -399,7 +451,7 @@ After basic-pitch generates raw MIDI, several post-processing steps clean up com
|
|
| 399 |
|
| 400 |
2. **merge_consecutive_notes()** - Fixes choppy sustained phrases
|
| 401 |
- Merges notes of same pitch with small gaps (<150ms default)
|
| 402 |
-
- Addresses
|
| 403 |
|
| 404 |
3. **analyze_note_envelope_and_merge_sustains()** - **NEW: Removes ghost notes**
|
| 405 |
- Detects false onsets from sustained note decay
|
|
|
|
| 31 |
end
|
| 32 |
|
| 33 |
subgraph Stage3["Stage 3: Transcription (50-90%)"]
|
| 34 |
+
Health["Check<br/>YourMT3+<br/>Health"]
|
| 35 |
+
YMT3["YourMT3+<br/>Inference<br/>(Primary)"]
|
| 36 |
+
BasicPitch["basic-pitch<br/>Inference<br/>(Fallback)"]
|
| 37 |
Quantize["Quantize<br/>& Clean<br/>MIDI"]
|
| 38 |
+
MIDI["piano.mid"]
|
| 39 |
|
| 40 |
+
Health -->|Healthy| YMT3
|
| 41 |
+
Health -->|Unavailable| BasicPitch
|
| 42 |
+
YMT3 --> Quantize
|
| 43 |
BasicPitch --> Quantize
|
| 44 |
Quantize --> MIDI
|
| 45 |
end
|
|
|
|
| 301 |
|
| 302 |
## Stage 3: Transcription (Audio → MIDI)
|
| 303 |
|
| 304 |
+
**Current System**: YourMT3+ (Primary) with automatic fallback to basic-pitch
|
| 305 |
|
| 306 |
+
### 3.1 YourMT3+ Inference (Primary, 80-85% Accuracy)
|
| 307 |
|
| 308 |
+
**Purpose**: Convert an audio stem to high-quality MIDI notes using a state-of-the-art model.
|
| 309 |
+
|
| 310 |
+
**Why YourMT3+?**
|
| 311 |
+
- **80-85% note accuracy** (vs 70% for basic-pitch)
|
| 312 |
+
- Multi-instrument awareness (13 instrument classes)
|
| 313 |
+
- Better rhythm and onset detection
|
| 314 |
+
- Mixture of Experts architecture for quality
|
| 315 |
+
- Perceiver-TF encoder with RoPE position encoding
|
| 316 |
+
|
| 317 |
+
**Health Check Flow**:
|
| 318 |
+
1. Check YourMT3+ service health at `/api/v1/yourmt3/health`
|
| 319 |
+
2. If healthy and model loaded → Use YourMT3+
|
| 320 |
+
3. If unavailable/unhealthy → Automatic fallback to basic-pitch
|
| 321 |
|
| 322 |
**Implementation**:
|
| 323 |
|
| 324 |
```python
|
| 325 |
+
import requests
|
|
| 326 |
from pathlib import Path
|
|
|
|
| 327 |
|
| 328 |
+
class TranscriptionPipeline:
|
| 329 |
+
def __init__(self, job_id, youtube_url, storage_path, config):
|
| 330 |
+
self.config = config # Has use_yourmt3_transcription flag
|
| 331 |
+
self.service_url = config.transcription_service_url # http://localhost:8000
|
| 332 |
|
| 333 |
+
def transcribe_to_midi(self, audio_path: Path) -> Path:
|
| 334 |
"""
|
| 335 |
+
Transcribe audio to MIDI using YourMT3+ with automatic fallback.
|
| 336 |
|
| 337 |
Returns:
|
| 338 |
Path to output MIDI file
|
| 339 |
"""
|
| 340 |
+
midi_path = None
|
| 341 |
+
|
| 342 |
+
# Try YourMT3+ first (if enabled)
|
| 343 |
+
if self.config.use_yourmt3_transcription:
|
| 344 |
+
try:
|
| 345 |
+
print("Transcribing with YourMT3+ (primary)...")
|
| 346 |
+
midi_path = self.transcribe_with_yourmt3(audio_path)
|
| 347 |
+
print("✓ YourMT3+ transcription complete")
|
| 348 |
+
except Exception as e:
|
| 349 |
+
print(f"✗ YourMT3+ failed: {e}")
|
| 350 |
+
print("→ Falling back to basic-pitch")
|
| 351 |
+
midi_path = None
|
| 352 |
+
|
| 353 |
+
# Fallback to basic-pitch if YourMT3+ failed or disabled
|
| 354 |
+
if midi_path is None:
|
| 355 |
+
print("Transcribing with basic-pitch (fallback)...")
|
| 356 |
+
midi_path = self.transcribe_with_basic_pitch(audio_path)
|
| 357 |
+
print("✓ basic-pitch transcription complete")
|
| 358 |
+
|
| 359 |
+
return midi_path
|
| 360 |
+
|
| 361 |
+
def transcribe_with_yourmt3(self, audio_path: Path) -> Path:
|
| 362 |
+
"""Call YourMT3+ service via HTTP."""
|
| 363 |
+
# Health check
|
| 364 |
+
health_response = requests.get(
|
| 365 |
+
f"{self.service_url}/api/v1/yourmt3/health",
|
| 366 |
+
timeout=5
|
| 367 |
)
|
| 368 |
+
health_response.raise_for_status()
health_data = health_response.json()
|
| 369 |
+
|
| 370 |
+
if not health_data.get("model_loaded"):
|
| 371 |
+
raise RuntimeError("YourMT3+ model not loaded")
|
| 372 |
+
|
| 373 |
+
# Transcribe
|
| 374 |
+
with open(audio_path, 'rb') as f:
|
| 375 |
+
files = {'file': (audio_path.name, f, 'audio/wav')}
|
| 376 |
+
response = requests.post(
|
| 377 |
+
f"{self.service_url}/api/v1/yourmt3/transcribe",
|
| 378 |
+
files=files,
|
| 379 |
+
timeout=self.config.transcription_service_timeout
|
| 380 |
+
)
|
| 381 |
|
| 382 |
# Save MIDI
|
| 383 |
+
midi_path = self.temp_dir / "piano_yourmt3.mid"
|
| 384 |
+
with open(midi_path, 'wb') as f:
|
| 385 |
+
f.write(response.content)
|
| 386 |
|
| 387 |
+
return midi_path
|
|
|
|
| 388 |
|
| 389 |
+
def transcribe_with_basic_pitch(self, audio_path: Path) -> Path:
|
| 390 |
+
"""Fallback transcription using basic-pitch."""
|
| 391 |
+
from basic_pitch.inference import predict_and_save
|
| 392 |
+
from basic_pitch import ICASSP_2022_MODEL_PATH
|
| 393 |
|
| 394 |
+
predict_and_save(
|
| 395 |
+
audio_path_list=[str(audio_path)],
|
| 396 |
+
output_directory=str(self.temp_dir),
|
| 397 |
+
save_midi=True,
sonify_midi=False,
save_model_outputs=False,
save_notes=False,
|
| 398 |
+
model_or_model_path=ICASSP_2022_MODEL_PATH,
|
| 399 |
+
onset_threshold=0.3,
|
| 400 |
+
frame_threshold=0.3,
|
| 401 |
+
)
|
| 402 |
+
|
| 403 |
+
generated_midi = self.temp_dir / f"{audio_path.stem}_basic_pitch.mid"
|
| 404 |
+
return generated_midi
|
|
| 405 |
```
|
| 406 |
|
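
The `config` object the snippet above reads needs only three fields for this flow. A minimal stand-in (the dataclass itself and the timeout default are illustrative; only the field names and the service URL come from the code above):

```python
from dataclasses import dataclass

@dataclass
class TranscriptionConfig:
    """Fields read by transcribe_to_midi() / transcribe_with_yourmt3()."""
    use_yourmt3_transcription: bool = True
    transcription_service_url: str = "http://localhost:8000"
    transcription_service_timeout: int = 300  # seconds; illustrative default

# Disabling the flag forces the basic-pitch fallback path
config = TranscriptionConfig(use_yourmt3_transcription=False)
```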
| 407 |
+
**YourMT3+ Features**:
|
| 408 |
+
- Integrated into main backend (port 8000)
|
| 409 |
+
- Model loaded on startup (reduces per-request latency)
|
| 410 |
+
- Float16 precision for MPS (14x speedup on Apple Silicon)
|
| 411 |
+
- ~30-40s processing time for 3.5min audio
|
| 412 |
+
- Automatic health monitoring
|
| 413 |
+
|
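
The float16-on-MPS point above implies a device/precision pairing made once at model load. A hedged sketch of that selection (the pairing logic is assumed, not lifted from the service code):

```python
import importlib

def pick_device_and_dtype():
    """Prefer MPS (Apple Silicon) or CUDA with float16; else CPU float32.

    Imports torch lazily so the helper degrades gracefully where
    PyTorch is not installed.
    """
    try:
        torch = importlib.import_module("torch")
    except ImportError:
        return "cpu", "float32"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps", "float16"
    if torch.cuda.is_available():
        return "cuda", "float16"
    return "cpu", "float32"
```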
| 414 |
+
---
|
| 415 |
+
|
| 416 |
+
### 3.2 basic-pitch Inference (Fallback, 70% Accuracy)
|
| 417 |
+
|
| 418 |
+
**Purpose**: Lightweight fallback transcription when YourMT3+ unavailable.
|
| 419 |
+
|
| 420 |
+
**When Used**:
|
| 421 |
+
- YourMT3+ service health check fails
|
| 422 |
+
- YourMT3+ model not loaded
|
| 423 |
+
- YourMT3+ request times out
|
| 424 |
+
- `use_yourmt3_transcription=False` in config
|
| 425 |
+
|
| 426 |
+
**Implementation**: See `transcribe_with_basic_pitch()` above
|
| 427 |
+
|
| 428 |
**Parameters** (Tempo-Adaptive):
|
| 429 |
- **onset_threshold**: Note onset confidence threshold
|
| 430 |
- Fast tempo (>140 BPM): 0.50 (stricter - fewer false positives)
|
|
|
|
| 437 |
- Slow: More permissive for soft dynamics
|
| 438 |
- **melodia_trick** (True): Improves monophonic melody detection
|
| 439 |
|
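
The tempo-adaptive selection above reduces to a simple step function over the detected BPM. Only the fast-tempo value (0.50 above 140 BPM) is stated in this section; the medium and slow values below are placeholders standing in for the elided ones:

```python
def onset_threshold_for_tempo(bpm):
    """Pick a basic-pitch onset_threshold from the detected tempo."""
    if bpm > 140:      # fast: stricter, fewer false positives (from the notes above)
        return 0.50
    if bpm >= 90:      # medium tempo (placeholder value)
        return 0.40
    return 0.30        # slow: more permissive for soft dynamics (placeholder value)
```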
| 440 |
+
---
|
| 441 |
+
|
| 442 |
+
### 3.3 Post-Processing Pipeline
|
| 443 |
|
| 444 |
+
After either YourMT3+ or basic-pitch generates raw MIDI, several post-processing steps clean up common artifacts:
|
| 445 |
|
| 446 |
1. **clean_midi()** - Filters and quantizes notes
|
| 447 |
- Removes notes outside piano range (A0-C8)
|
|
|
|
| 451 |
|
| 452 |
2. **merge_consecutive_notes()** - Fixes choppy sustained phrases
|
| 453 |
- Merges notes of same pitch with small gaps (<150ms default)
|
| 454 |
+
- Addresses transcription models' tendency to split sustained notes
|
| 455 |
|
| 456 |
3. **analyze_note_envelope_and_merge_sustains()** - **NEW: Removes ghost notes**
|
| 457 |
- Detects false onsets from sustained note decay
|
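
The gap-merging rule in step 2 reduces to a single pass over time-sorted note events. A pure-Python sketch of the idea (the real `merge_consecutive_notes()` operates on MIDI note objects; the tuple representation here is illustrative):

```python
def merge_gapped_notes(notes, max_gap=0.150):
    """Merge same-pitch notes whose gap is under max_gap seconds.

    notes: list of (pitch, start, end) tuples sorted by start time.
    """
    merged = []
    for pitch, start, end in notes:
        if merged and merged[-1][0] == pitch and start - merged[-1][2] < max_gap:
            p, s, _ = merged[-1]
            merged[-1] = (p, s, end)  # extend the previous note over the gap
        else:
            merged.append((pitch, start, end))
    return merged
```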
docs/getting-started.md
CHANGED
|
@@ -49,7 +49,7 @@ This documentation focuses on **high-level architecture and design decisions**,
|
|
| 49 |
2. [Audio Processing Pipeline](backend/pipeline.md) - Detailed workflow
|
| 50 |
3. [Background Workers](backend/workers.md) - Celery setup
|
| 51 |
4. [API Design](backend/api.md) - REST + WebSocket endpoints
|
| 52 |
-
5. [ML Model Selection](research/ml-models.md) - Demucs
|
| 53 |
6. [Challenges](research/challenges.md) - Known limitations
|
| 54 |
|
| 55 |
**Key Files to Create**:
|
|
@@ -107,9 +107,10 @@ This documentation focuses on **high-level architecture and design decisions**,
|
|
| 107 |
4. [ML Model Selection](research/ml-models.md) - Accuracy expectations
|
| 108 |
|
| 109 |
**Key Insights**:
|
| 110 |
-
- Transcription is ~
|
|
|
|
| 111 |
- Processing takes 1-2 minutes (GPU) or 10-15 minutes (CPU)
|
| 112 |
-
-
|
| 113 |
- MVP focuses on piano only, multi-instrument in Phase 2
|
| 114 |
|
| 115 |
---
|
|
@@ -132,7 +133,7 @@ See [Glossary](glossary.md) for more terms.
|
|
| 132 |
|
| 133 |
**Frontend**: React + VexFlow (notation) + Tone.js (playback)
|
| 134 |
**Backend**: Python/FastAPI + Celery (workers) + Redis (queue)
|
| 135 |
-
**ML**: Demucs (source separation) + basic-pitch (
|
| 136 |
**Formats**: MusicXML (primary), MIDI (intermediate)
|
| 137 |
|
| 138 |
---
|
|
@@ -198,7 +199,7 @@ docker-compose up
|
|
| 198 |
|
| 199 |
### Q: How accurate is transcription?
|
| 200 |
|
| 201 |
-
**A**:
|
| 202 |
|
| 203 |
### Q: Can I deploy this to production?
|
| 204 |
|
|
|
|
| 49 |
2. [Audio Processing Pipeline](backend/pipeline.md) - Detailed workflow
|
| 50 |
3. [Background Workers](backend/workers.md) - Celery setup
|
| 51 |
4. [API Design](backend/api.md) - REST + WebSocket endpoints
|
| 52 |
+
5. [ML Model Selection](research/ml-models.md) - Demucs, YourMT3+, basic-pitch
|
| 53 |
6. [Challenges](research/challenges.md) - Known limitations
|
| 54 |
|
| 55 |
**Key Files to Create**:
|
|
|
|
| 107 |
4. [ML Model Selection](research/ml-models.md) - Accuracy expectations
|
| 108 |
|
| 109 |
**Key Insights**:
|
| 110 |
+
- Transcription is ~80-85% accurate with YourMT3+, ~70% with basic-pitch fallback
|
| 111 |
+
- Users **must** edit output - editor is **critical**
|
| 112 |
- Processing takes 1-2 minutes (GPU) or 10-15 minutes (CPU)
|
| 113 |
+
- YourMT3+ optimized for Apple Silicon (MPS) with 14x speedup via float16
|
| 114 |
- MVP focuses on piano only, multi-instrument in Phase 2
|
| 115 |
|
| 116 |
---
|
|
|
|
| 133 |
|
| 134 |
**Frontend**: React + VexFlow (notation) + Tone.js (playback)
|
| 135 |
**Backend**: Python/FastAPI + Celery (workers) + Redis (queue)
|
| 136 |
+
**ML**: Demucs (source separation) + YourMT3+ (primary transcription, 80-85% accuracy) + basic-pitch (fallback, 70% accuracy)
|
| 137 |
**Formats**: MusicXML (primary), MIDI (intermediate)
|
| 138 |
|
| 139 |
---
|
|
|
|
| 199 |
|
| 200 |
### Q: How accurate is transcription?
|
| 201 |
|
| 202 |
+
**A**: 80-85% for simple piano with YourMT3+ (70-75% for complex music). Falls back to basic-pitch (70% simple, 60-70% complex) if YourMT3+ unavailable. See [ML Models](research/ml-models.md) and [Challenges](research/challenges.md).
|
| 203 |
|
| 204 |
### Q: Can I deploy this to production?
|
| 205 |
|
docs/research/ml-models.md
CHANGED
|
@@ -84,12 +84,46 @@
|
|
| 84 |
|
| 85 |
## Transcription Models
|
| 86 |
|
| 87 |
-
###
|
| 88 |
|
| 89 |
-
**Developer**:
|
| 90 |
-
**License**: Apache 2.0
|
| 91 |
-
**Model Size**: ~
|
| 92 |
-
**Performance**:
|
|
|
| 93 |
|
| 94 |
**Pros**:
|
| 95 |
- Handles polyphonic music (multiple simultaneous notes)
|
|
@@ -97,55 +131,42 @@
|
|
| 97 |
- Outputs MIDI with velocities
|
| 98 |
- Fast (~5-10s per stem)
|
| 99 |
- Active maintenance
|
|
|
|
| 100 |
|
| 101 |
**Cons**:
|
| 102 |
-
-
|
| 103 |
- Rhythm quantization can be off
|
| 104 |
- Struggles with very dense polyphony
|
| 105 |
|
| 106 |
-
**When to Use**:
|
| 107 |
|
| 108 |
---
|
| 109 |
|
| 110 |
-
### MT3 (Music Transformer) -
|
| 111 |
-
|
| 112 |
-
**Developer**: Google Magenta;
|
| 113 |
-
**License**: Apache 2.0;
|
| 114 |
-
**Model Size**: ~500MB;
|
| 115 |
-
**Performance**: Better than basic-pitch on benchmarks
|
| 116 |
|
| 117 |
-
**
|
| 118 |
-
|
| 119 |
-
|
| 120 |
-
|
| 121 |
-
|
| 122 |
-
**Cons**:
|
| 123 |
-
- Much slower (~30-60s per song)
|
| 124 |
-
- Larger model
|
| 125 |
-
- More complex setup (Transformer architecture)
|
| 126 |
-
- Higher computational requirements
|
| 127 |
|
| 128 |
-
**
|
|
|
| 129 |
|
| 130 |
---
|
| 131 |
|
| 132 |
-
### Omnizart
|
| 133 |
|
| 134 |
-
**Developer**: MCTLab (Taiwan)
|
| 135 |
-
**License**: MIT
|
| 136 |
-
**
|
| 137 |
-
|
| 138 |
-
**Pros**:
|
| 139 |
-
- Separate models for piano, guitar, drums, vocals
|
| 140 |
-
- Good single-instrument accuracy
|
| 141 |
-
- Academic backing
|
| 142 |
|
| 143 |
-
**
|
| 144 |
-
-
|
| 145 |
-
-
|
| 146 |
- Less active development
|
| 147 |
-
|
| 148 |
-
**When to Use**: If targeting specific instruments only
|
| 149 |
|
| 150 |
---
|
| 151 |
|
|
@@ -169,58 +190,58 @@
|
|
| 169 |
|
| 170 |
### Comparison
|
| 171 |
|
| 172 |
-
| Model | Polyphonic | Speed (GPU) | Accuracy |
|
| 173 |
-
|-------|-----------|-------------|----------|--------
|
| 174 |
-
|
|
| 175 |
-
|
|
| 176 |
-
|
|
|
|
|
| 177 |
| Tony | No | 2-5s | 90%+ | Vocals only |
|
| 178 |
|
| 179 |
-
**Decision**:
|
| 180 |
|
| 181 |
---
|
| 182 |
|
| 183 |
## Model Accuracy Expectations
|
| 184 |
|
| 185 |
-
### Realistic Transcription Accuracy
|
| 186 |
|
| 187 |
**Simple Piano Melody** (Twinkle Twinkle):
|
| 188 |
-
- Note accuracy: 90-95%
|
| 189 |
-
- Rhythm accuracy:
|
| 190 |
|
| 191 |
**Classical Piano** (Chopin Nocturne):
|
| 192 |
-
- Note accuracy:
|
| 193 |
-
- Rhythm accuracy:
|
| 194 |
|
| 195 |
**Jazz Piano** (Bill Evans):
|
| 196 |
-
- Note accuracy:
|
| 197 |
-
- Rhythm accuracy:
|
| 198 |
|
| 199 |
**Rock/Pop with Band**:
|
| 200 |
-
- Piano separation: 70-80% (depends on
|
| 201 |
-
- Note accuracy:
|
| 202 |
|
| 203 |
-
**Key Insight**:
|
| 204 |
|
| 205 |
---
|
| 206 |
|
| 207 |
## Future Model Improvements
|
| 208 |
|
| 209 |
-
### Fine-Tuning
|
| 210 |
|
| 211 |
-
Train
|
| 212 |
-
- Collect 1000+ piano YouTube videos
|
| 213 |
-
-
|
| 214 |
-
-
|
| 215 |
-
-
|
| 216 |
|
| 217 |
-
### Ensemble Models
|
| 218 |
|
| 219 |
-
|
| 220 |
-
-
|
| 221 |
-
-
|
| 222 |
-
-
|
| 223 |
-
- Cost: 2-3x processing time
|
| 224 |
|
| 225 |
### Post-Processing
|
| 226 |
|
|
|
|
| 84 |
|
| 85 |
## Transcription Models
|
| 86 |
|
| 87 |
+
### YourMT3+ (Primary)
|
| 88 |
|
| 89 |
+
**Developer**: KAIST (Korea Advanced Institute of Science and Technology)
|
| 90 |
+
**License**: Apache 2.0
|
| 91 |
+
**Model Size**: ~536MB (YPTF.MoE+Multi checkpoint)
|
| 92 |
+
**Performance**: **State-of-the-art** multi-instrument transcription
|
| 93 |
+
|
| 94 |
+
**Architecture**:
|
| 95 |
+
- Perceiver-TF encoder with Rotary Position Embeddings (RoPE)
|
| 96 |
+
- Mixture of Experts (MoE) feedforward layers (8 experts, top-2)
|
| 97 |
+
- Multi-channel T5 decoder for 13 instrument classes
|
| 98 |
+
- Float16 precision for GPU optimization
|
| 99 |
+
|
| 100 |
+
**Pros**:
|
| 101 |
+
- **80-85% note accuracy** (vs 70% for basic-pitch)
|
| 102 |
+
- Multi-instrument aware (13 instrument classes)
|
| 103 |
+
- Handles complex polyphony
|
| 104 |
+
- Active development (2024)
|
| 105 |
+
- Open-source, well-documented
|
| 106 |
+
- Optimized for Apple Silicon MPS (14x speedup with float16)
|
| 107 |
+
- Good rhythm and onset detection
|
| 108 |
+
|
| 109 |
+
**Cons**:
|
| 110 |
+
- Large model size (~536MB download)
|
| 111 |
+
- Requires additional setup (model checkpoint download)
|
| 112 |
+
- Slower than basic-pitch (~30-40s per song on GPU)
|
| 113 |
+
- Higher memory requirements (~1.1GB VRAM)
|
| 114 |
+
|
| 115 |
+
**When to Use**: **Production (primary transcriber)** - Best quality for self-hosted solution
|
| 116 |
+
|
| 117 |
+
**Current Status**: Integrated into main backend, enabled by default with automatic fallback
|
| 118 |
+
|
| 119 |
+
---
|
| 120 |
+
|
| 121 |
+
### basic-pitch (Fallback)
|
| 122 |
+
|
| 123 |
+
**Developer**: Spotify
|
| 124 |
+
**License**: Apache 2.0
|
| 125 |
+
**Model Size**: ~30MB
|
| 126 |
+
**Performance**: Good polyphonic transcription (70% accuracy)
|
| 127 |
|
| 128 |
**Pros**:
|
| 129 |
- Handles polyphonic music (multiple simultaneous notes)
|
|
|
|
| 131 |
- Outputs MIDI with velocities
|
| 132 |
- Fast (~5-10s per stem)
|
| 133 |
- Active maintenance
|
| 134 |
+
- Lightweight, no setup required
|
| 135 |
|
| 136 |
**Cons**:
|
| 137 |
+
- Lower accuracy than YourMT3+ (~70% vs 80-85%)
|
| 138 |
- Rhythm quantization can be off
|
| 139 |
- Struggles with very dense polyphony
|
| 140 |
|
| 141 |
+
**When to Use**: **Automatic fallback** when YourMT3+ unavailable or disabled
|
| 142 |
|
| 143 |
---
|
| 144 |
|
| 145 |
+
### MT3 (Music Transformer) - Not Used
|
|
| 146 |
|
| 147 |
+
**Developer**: Google Magenta
|
| 148 |
+
**License**: Apache 2.0
|
| 149 |
+
**Model Size**: ~500MB
|
| 150 |
+
**Performance**: Good, but surpassed by YourMT3+
|
|
|
| 151 |
|
| 152 |
+
**Why Not Chosen**:
|
| 153 |
+
- YourMT3+ offers better accuracy
|
| 154 |
+
- Similar computational requirements
|
| 155 |
+
- YourMT3+ has better documentation and setup
|
| 156 |
|
| 157 |
---
|
| 158 |
|
| 159 |
+
### Omnizart - Removed
|
| 160 |
|
| 161 |
+
**Developer**: MCTLab (Taiwan)
|
| 162 |
+
**License**: MIT
|
| 163 |
+
**Status**: **Removed from codebase** (replaced by YourMT3+)
|
|
| 164 |
|
| 165 |
+
**Why Removed**:
|
| 166 |
+
- Lower accuracy than YourMT3+ (75-80% vs 80-85%)
|
| 167 |
+
- More complex setup with multiple models
|
| 168 |
- Less active development
|
| 169 |
+
- Dual-transcription merging added complexity without accuracy gains
|
|
|
|
| 170 |
|
| 171 |
---
|
| 172 |
|
|
|
|
| 190 |
|
| 191 |
### Comparison
|
| 192 |
|
| 193 |
+
| Model | Polyphonic | Speed (GPU) | Accuracy | Status |
|
| 194 |
+
|-------|-----------|-------------|----------|--------|
|
| 195 |
+
| **YourMT3+** | Yes | 30-40s | **80-85%** | **Primary (Production)** |
|
| 196 |
+
| basic-pitch | Yes | 5-10s | 70% | Fallback |
|
| 197 |
+
| MT3 | Yes | 30-60s | 75-80% | Not used |
|
| 198 |
+
| Omnizart | Yes | 15-30s | 75-80% | Removed |
|
| 199 |
| Tony | No | 2-5s | 90%+ | Vocals only |
|
| 200 |
|
| 201 |
+
**Decision**: YourMT3+ as primary transcriber with automatic fallback to basic-pitch for reliability.
|
| 202 |
|
| 203 |
---
|
| 204 |
|
| 205 |
## Model Accuracy Expectations
|
| 206 |
|
| 207 |
+
### Realistic Transcription Accuracy (with YourMT3+)
|
| 208 |
|
| 209 |
**Simple Piano Melody** (Twinkle Twinkle):
|
| 210 |
+
- Note accuracy: **90-95%** (YourMT3+) / 85-90% (basic-pitch)
|
| 211 |
+
- Rhythm accuracy: **85-90%** (YourMT3+) / 75-80% (basic-pitch)
|
| 212 |
|
| 213 |
**Classical Piano** (Chopin Nocturne):
|
| 214 |
+
- Note accuracy: **75-85%** (YourMT3+) / 65-75% (basic-pitch)
|
| 215 |
+
- Rhythm accuracy: **70-75%** (YourMT3+) / 55-65% (basic-pitch)
|
| 216 |
|
| 217 |
**Jazz Piano** (Bill Evans):
|
| 218 |
+
- Note accuracy: **70-75%** (YourMT3+) / 55-65% (basic-pitch)
|
| 219 |
+
- Rhythm accuracy: **60-70%** (YourMT3+) / 45-55% (basic-pitch)
|
| 220 |
|
| 221 |
**Rock/Pop with Band**:
|
| 222 |
+
- Piano separation: 70-80% (depends on Demucs quality)
|
| 223 |
+
- Note accuracy: **70-75%** (YourMT3+) / 55-65% (basic-pitch)
|
| 224 |
|
| 225 |
+
**Key Insight**: YourMT3+ provides 10-15% better accuracy than basic-pitch, but transcription still won't be perfect. Editor is **critical** for users to fix errors.
|
| 226 |
|
| 227 |
---
|
| 228 |
|
| 229 |
## Future Model Improvements
|
| 230 |
|
| 231 |
+
### Fine-Tuning YourMT3+
|
| 232 |
|
| 233 |
+
Train on piano-specific dataset:
|
| 234 |
+
- Collect 1000+ piano YouTube videos with ground truth
|
| 235 |
+
- Fine-tune YourMT3+ checkpoint on piano-only data
|
| 236 |
+
- Expected improvement: +3-5% accuracy for piano
|
| 237 |
+
- Cost: GPU compute for training
|
| 238 |
|
| 239 |
+
### Ensemble Models (Not Currently Used)
|
| 240 |
|
| 241 |
+
Previously attempted basic-pitch + omnizart merging:
|
| 242 |
+
- **Result**: Removed due to complexity without significant accuracy gain
|
| 243 |
+
- **Learning**: YourMT3+ alone provides better results than merged basic-pitch + omnizart
|
| 244 |
+
- **Future**: Could revisit with YourMT3+ + MT3 ensemble if needed
|
|
|
|
| 245 |
|
| 246 |
### Post-Processing
|
| 247 |
|
docs/testing/backend-testing.md
DELETED
|
@@ -1,520 +0,0 @@
|
|
| 1 |
-
# Backend Testing Guide
|
| 2 |
-
|
| 3 |
-
Comprehensive guide for testing the Rescored backend.
|
| 4 |
-
|
| 5 |
-
## Table of Contents
|
| 6 |
-
|
| 7 |
-
- [Setup](#setup)
|
| 8 |
-
- [Running Tests](#running-tests)
|
| 9 |
-
- [Test Structure](#test-structure)
|
| 10 |
-
- [Writing Tests](#writing-tests)
|
| 11 |
-
- [Testing Patterns](#testing-patterns)
|
| 12 |
-
- [Troubleshooting](#troubleshooting)
|
| 13 |
-
|
| 14 |
-
## Setup
|
| 15 |
-
|
| 16 |
-
### Install Test Dependencies
|
| 17 |
-
|
| 18 |
-
```bash
|
| 19 |
-
cd backend
|
| 20 |
-
pip install -r requirements-test.txt
|
| 21 |
-
```
|
| 22 |
-
|
| 23 |
-
This installs:
|
| 24 |
-
- `pytest`: Test framework
|
| 25 |
-
- `pytest-asyncio`: Async test support
|
| 26 |
-
- `pytest-cov`: Coverage reporting
|
| 27 |
-
- `pytest-mock`: Enhanced mocking
|
| 28 |
-
- `httpx`: HTTP testing client
|
| 29 |
-
|
| 30 |
-
### Configuration
|
| 31 |
-
|
| 32 |
-
Test configuration is in `pytest.ini`:
|
| 33 |
-
|
| 34 |
-
```ini
|
| 35 |
-
[pytest]
|
| 36 |
-
testpaths = tests
|
| 37 |
-
markers =
|
| 38 |
-
unit: Unit tests
|
| 39 |
-
integration: Integration tests
|
| 40 |
-
slow: Slow-running tests
|
| 41 |
-
gpu: Tests requiring GPU
|
| 42 |
-
network: Tests requiring network
|
| 43 |
-
```
|
| 44 |
-
|
| 45 |
-
## Running Tests
|
| 46 |
-
|
| 47 |
-
### Basic Commands
|
| 48 |
-
|
| 49 |
-
```bash
|
| 50 |
-
# Run all tests
|
| 51 |
-
pytest
|
| 52 |
-
|
| 53 |
-
# Run with coverage
|
| 54 |
-
pytest --cov
|
| 55 |
-
|
| 56 |
-
# Run specific file
|
| 57 |
-
pytest tests/test_utils.py
|
| 58 |
-
|
| 59 |
-
# Run specific test
|
| 60 |
-
pytest tests/test_utils.py::TestValidateYouTubeURL::test_valid_watch_url
|
| 61 |
-
|
| 62 |
-
# Run by marker
|
| 63 |
-
pytest -m unit
|
| 64 |
-
pytest -m "unit and not slow"
|
| 65 |
-
```
|
| 66 |
-
|
| 67 |
-
### Watch Mode
|
| 68 |
-
|
| 69 |
-
Use `pytest-watch` for continuous testing:
|
| 70 |
-
|
| 71 |
-
```bash
|
| 72 |
-
pip install pytest-watch
|
| 73 |
-
ptw # Runs tests on file changes
|
| 74 |
-
```
|
| 75 |
-
|
| 76 |
-
### Coverage Reports
|
| 77 |
-
|
| 78 |
-
```bash
|
| 79 |
-
# Terminal report
|
| 80 |
-
pytest --cov --cov-report=term-missing
|
| 81 |
-
|
| 82 |
-
# HTML report
|
| 83 |
-
pytest --cov --cov-report=html
|
| 84 |
-
open htmlcov/index.html
|
| 85 |
-
|
| 86 |
-
# Both
|
| 87 |
-
pytest --cov --cov-report=term-missing --cov-report=html
|
| 88 |
-
```
|
| 89 |
-
|
| 90 |
-
## Test Structure
|
| 91 |
-
|
| 92 |
-
### Test Files
|
| 93 |
-
|
| 94 |
-
Each module has a corresponding test file:
|
| 95 |
-
|
| 96 |
-
- `utils.py` β `tests/test_utils.py`
|
| 97 |
-
- `pipeline.py` β `tests/test_pipeline.py`
|
| 98 |
-
- `main.py` β `tests/test_api.py`
|
| 99 |
-
- `tasks.py` β `tests/test_tasks.py`
|
| 100 |
-
|
| 101 |
-
### Test Organization
|
| 102 |
-
|
| 103 |
-
Group related tests in classes:
|
| 104 |
-
|
| 105 |
-
```python
|
| 106 |
-
class TestValidateYouTubeURL:
|
| 107 |
-
"""Test YouTube URL validation."""
|
| 108 |
-
|
| 109 |
-
def test_valid_watch_url(self):
|
| 110 |
-
"""Test standard youtube.com/watch URL."""
|
| 111 |
-
is_valid, video_id = validate_youtube_url("https://www.youtube.com/watch?v=...")
|
| 112 |
-
assert is_valid is True
|
| 113 |
-
assert video_id == "..."
|
| 114 |
-
|
| 115 |
-
def test_invalid_domain(self):
|
| 116 |
-
"""Test URL from wrong domain."""
|
| 117 |
-
is_valid, error = validate_youtube_url("https://vimeo.com/12345")
|
| 118 |
-
assert is_valid is False
|
| 119 |
-
```
|
| 120 |
-
|
| 121 |
-
## Writing Tests
|
| 122 |
-
|
| 123 |
-
### Basic Test Template
|
| 124 |
-
|
| 125 |
-
```python
|
| 126 |
-
import pytest
|
| 127 |
-
from module_name import function_to_test
|
| 128 |
-
|
| 129 |
-
class TestFunctionName:
|
| 130 |
-
"""Test suite for function_name."""
|
| 131 |
-
|
| 132 |
-
def test_happy_path(self):
|
| 133 |
-
"""Test normal successful execution."""
|
| 134 |
-
result = function_to_test(valid_input)
|
| 135 |
-
assert result == expected_output
|
| 136 |
-
|
| 137 |
-
def test_edge_case(self):
|
| 138 |
-
"""Test boundary condition."""
|
| 139 |
-
result = function_to_test(edge_case_input)
|
| 140 |
-
assert result == expected_edge_output
|
| 141 |
-
|
| 142 |
-
def test_error_handling(self):
|
| 143 |
-
"""Test error is raised for invalid input."""
|
| 144 |
-
with pytest.raises(ValueError) as exc_info:
|
| 145 |
-
function_to_test(invalid_input)
|
| 146 |
-
assert "expected error message" in str(exc_info.value)
|
| 147 |
-
```
|
| 148 |
-
|
| 149 |
-
### Using Fixtures
|
| 150 |
-
|
| 151 |
-
Fixtures provide reusable test data:
|
| 152 |
-
|
| 153 |
-
```python
|
| 154 |
-
@pytest.fixture
|
| 155 |
-
def sample_audio_file(temp_storage_dir):
|
| 156 |
-
"""Create a sample WAV file for testing."""
|
| 157 |
-
import numpy as np
|
| 158 |
-
import soundfile as sf
|
| 159 |
-
|
| 160 |
-
sample_rate = 44100
|
| 161 |
-
duration = 1.0
|
| 162 |
-
samples = np.zeros(int(sample_rate * duration), dtype=np.float32)
|
| 163 |
-
|
| 164 |
-
audio_path = temp_storage_dir / "test_audio.wav"
|
| 165 |
-
sf.write(str(audio_path), samples, sample_rate)
|
| 166 |
-
|
| 167 |
-
return audio_path
|
| 168 |
-
|
| 169 |
-
def test_using_fixture(sample_audio_file):
|
| 170 |
-
"""Test that uses the fixture."""
|
| 171 |
-
assert sample_audio_file.exists()
|
| 172 |
-
assert sample_audio_file.suffix == ".wav"
|
| 173 |
-
```
|
| 174 |
-
|
| 175 |
-
### Mocking External Dependencies

#### Mock yt-dlp

```python
from unittest.mock import patch, MagicMock

@patch('pipeline.yt_dlp.YoutubeDL')
def test_download_audio(mock_ydl_class, temp_storage_dir):
    """Test audio download with mocked yt-dlp."""
    mock_ydl = MagicMock()
    mock_ydl_class.return_value.__enter__.return_value = mock_ydl

    result = download_audio("https://youtube.com/watch?v=...", temp_storage_dir)

    assert result.exists()
    mock_ydl.download.assert_called_once()
```

#### Mock Redis

```python
@pytest.fixture
def mock_redis():
    """Mock Redis client."""
    redis_mock = MagicMock(spec=Redis)
    redis_mock.ping.return_value = True
    redis_mock.hgetall.return_value = {}
    return redis_mock

def test_with_redis(mock_redis):
    """Test function that uses Redis."""
    # Redis is mocked, no real connection needed
    mock_redis.hset("key", "field", "value")
    assert mock_redis.hset.called
```

#### Mock ML Models

```python
@patch('pipeline.basic_pitch.inference.predict')
def test_transcribe_audio(mock_predict, sample_audio_file, temp_storage_dir):
    """Test transcription with mocked ML model."""
    # Mock model output
    mock_predict.return_value = (
        np.zeros((100, 88)),  # note activations
        np.zeros((100, 88)),  # onsets
        np.zeros((100, 1))    # contours
    )

    result = transcribe_audio(sample_audio_file, temp_storage_dir)

    assert result.exists()
    assert result.suffix == ".mid"
```

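A note on `spec=`: constraining a mock to a real interface (as in the Redis fixture above) makes misspelled method names fail loudly instead of silently returning another mock. A minimal stdlib-only sketch (`RedisLike` is a hypothetical stand-in class for illustration; the real tests spec against `Redis`):

```python
from unittest.mock import MagicMock

class RedisLike:
    """Stand-in interface for illustration only."""
    def ping(self): ...
    def hset(self, key, field, value): ...

strict = MagicMock(spec=RedisLike)
strict.ping.return_value = True
assert strict.ping() is True

# A misspelled method is rejected instead of silently succeeding
try:
    strict.hsett("key", "field", "value")  # typo: should be hset
    raised = False
except AttributeError:
    raised = True
assert raised
```

Without `spec`, the typo would go unnoticed and the test would pass for the wrong reason.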
## Testing Patterns

### Testing API Endpoints

```python
from fastapi.testclient import TestClient

def test_submit_transcription(test_client, mock_redis):
    """Test transcription submission endpoint."""
    response = test_client.post(
        "/api/v1/transcribe",
        json={"youtube_url": "https://www.youtube.com/watch?v=..."}
    )

    assert response.status_code == 201
    data = response.json()
    assert "job_id" in data
    assert data["status"] == "queued"
```

### Testing Async Functions

```python
import pytest

@pytest.mark.asyncio
async def test_async_function():
    """Test async function."""
    result = await async_operation()
    assert result == expected_value
```

### Testing WebSocket Connections

```python
def test_websocket(test_client, sample_job_id):
    """Test WebSocket connection."""
    with test_client.websocket_connect(f"/api/v1/jobs/{sample_job_id}/stream") as websocket:
        data = websocket.receive_json()
        assert data["type"] == "progress"
        assert "job_id" in data
```

### Testing Error Scenarios

```python
def test_video_too_long(test_client):
    """Test error handling for videos exceeding the duration limit."""
    with patch('utils.check_video_availability') as mock_check:
        mock_check.return_value = {
            'available': False,
            'reason': 'Video too long (max 15 minutes)'
        }

        response = test_client.post(
            "/api/v1/transcribe",
            json={"youtube_url": "https://www.youtube.com/watch?v=long"}
        )

        assert response.status_code == 422
        assert "too long" in response.json()["detail"]
```

### Testing Retries

```python
def test_retry_on_network_error():
    """Test that function retries on network error."""
    mock_func = MagicMock()
    mock_func.side_effect = [
        ConnectionError("Network timeout"),  # First call fails
        ConnectionError("Network timeout"),  # Second call fails
        {"success": True}                    # Third call succeeds
    ]

    # Function should retry and eventually succeed
    result = function_with_retry(mock_func)
    assert result == {"success": True}
    assert mock_func.call_count == 3
```

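The test above assumes a retry helper. `function_with_retry` is not defined in this guide, so here is a minimal self-contained sketch of what such a helper might look like (the real pipeline's retry logic may differ, e.g. exponential backoff or different exception types):

```python
import time
from unittest.mock import MagicMock

def function_with_retry(func, max_attempts=3, delay=0.0):
    """Call func(), retrying on ConnectionError up to max_attempts times.

    Hypothetical sketch: no backoff, retries only on ConnectionError.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)  # flat delay; real code would likely back off

# Fails twice, then succeeds on the third call
mock_func = MagicMock(side_effect=[
    ConnectionError("Network timeout"),
    ConnectionError("Network timeout"),
    {"success": True},
])
result = function_with_retry(mock_func)
assert result == {"success": True}
assert mock_func.call_count == 3
```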
### Parametrized Tests

Test multiple inputs efficiently:

```python
@pytest.mark.parametrize("url,expected_valid,expected_id", [
    ("https://www.youtube.com/watch?v=dQw4w9WgXcQ", True, "dQw4w9WgXcQ"),
    ("https://youtu.be/dQw4w9WgXcQ", True, "dQw4w9WgXcQ"),
    ("https://vimeo.com/12345", False, None),
    ("not-a-url", False, None),
])
def test_url_validation(url, expected_valid, expected_id):
    """Test URL validation with multiple inputs."""
    is_valid, result = validate_youtube_url(url)
    assert is_valid == expected_valid
    if expected_valid:
        assert result == expected_id
```

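The parametrized test assumes a `validate_youtube_url` helper from the backend. A regex-based sketch consistent with the cases above might look like this (the real implementation may accept more URL variants):

```python
import re

# Hypothetical validator matching the parametrized cases above
_YT_PATTERNS = [
    re.compile(r"^https?://(?:www\.)?youtube\.com/watch\?v=([\w-]{11})"),
    re.compile(r"^https?://youtu\.be/([\w-]{11})"),
]

def validate_youtube_url(url):
    """Return (is_valid, video_id); (False, None) for non-YouTube input."""
    for pattern in _YT_PATTERNS:
        match = pattern.match(url)
        if match:
            return True, match.group(1)
    return False, None
```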
## Testing Pipeline Stages

### Audio Download

```python
@patch('pipeline.yt_dlp.YoutubeDL')
def test_download_audio_success(mock_ydl_class, temp_storage_dir):
    """Test successful audio download."""
    mock_ydl = MagicMock()
    mock_ydl_class.return_value.__enter__.return_value = mock_ydl

    result = download_audio("https://youtube.com/watch?v=...", temp_storage_dir)

    assert result.exists()
    assert result.suffix == ".wav"
```

### Source Separation

```python
@patch('pipeline.demucs.separate.main')
def test_separate_sources(mock_demucs, sample_audio_file, temp_storage_dir):
    """Test source separation."""
    # Create mock output files
    stems_dir = temp_storage_dir / "htdemucs" / "test_audio"
    stems_dir.mkdir(parents=True)
    for stem in ["drums", "bass", "vocals", "other"]:
        (stems_dir / f"{stem}.wav").touch()

    result = separate_sources(sample_audio_file, temp_storage_dir)

    assert all(stem in result for stem in ["drums", "bass", "vocals", "other"])
    assert all(path.exists() for path in result.values())
```

### Transcription

```python
@patch('pipeline.basic_pitch.inference.predict')
def test_transcribe_audio(mock_predict, sample_audio_file, temp_storage_dir):
    """Test audio transcription."""
    mock_predict.return_value = (
        np.random.rand(100, 88),
        np.random.rand(100, 88),
        np.random.rand(100, 1)
    )

    result = transcribe_audio(sample_audio_file, temp_storage_dir)

    assert result.exists()
    assert result.suffix == ".mid"
```

### MusicXML Generation

```python
@patch('pipeline.music21.converter.parse')
def test_generate_musicxml(mock_parse, sample_midi_file, temp_storage_dir):
    """Test MusicXML generation."""
    mock_score = MagicMock()
    mock_parse.return_value = mock_score

    result = generate_musicxml(sample_midi_file, temp_storage_dir)

    assert result.exists()
    assert result.suffix == ".musicxml"
    mock_score.write.assert_called_once()
```

## Troubleshooting

### Common Issues

**Import Errors**

```bash
# Ensure backend directory is in PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
pytest
```

**Redis Connection Errors**

```python
# Always mock Redis in tests unless testing Redis specifically
@pytest.fixture(autouse=True)
def mock_redis():
    with patch('main.redis_client') as mock:
        yield mock
```

**File Permission Errors**

```python
# Always use temp directories
@pytest.fixture
def temp_storage_dir():
    temp_dir = tempfile.mkdtemp()
    yield Path(temp_dir)
    shutil.rmtree(temp_dir, ignore_errors=True)
```

**GPU Not Available**

```python
# Mark GPU tests and skip if unavailable
@pytest.mark.gpu
@pytest.mark.skipif(not torch.cuda.is_available(), reason="GPU not available")
def test_gpu_processing():
    ...
```

### Debugging Failed Tests

```bash
# Show print statements
pytest -s

# Verbose output
pytest -vv

# Drop into debugger on failure
pytest --pdb

# Run only failed tests
pytest --lf
```

### Performance Issues

```bash
# Identify slow tests
pytest --durations=10

# Run tests in parallel (requires pytest-xdist)
pytest -n auto
```

## Best Practices

1. **Mock external dependencies**: Don't make real API calls, network requests, or ML inferences
2. **Use fixtures**: Share common setup code across tests
3. **Test edge cases**: Empty inputs, `None` values, boundary conditions
4. **Clean up resources**: Always clean up temp files and connections
5. **Keep tests independent**: Tests should not depend on each other
6. **Write descriptive names**: Test names should explain what they verify
7. **Test one thing**: Each test should verify one specific behavior
8. **Use markers**: Tag tests by type (unit, integration, slow, gpu)

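Custom markers such as `gpu` and `slow` should be registered so pytest does not warn about unknown marks. A minimal `conftest.py` sketch (marker names assumed from the list above, not project-confirmed):

```python
# conftest.py -- register the custom markers used in this guide
def pytest_configure(config):
    for line in (
        "unit: fast, isolated unit tests",
        "integration: tests spanning multiple components",
        "slow: long-running tests; deselect with -m 'not slow'",
        "gpu: tests requiring a CUDA-capable GPU",
    ):
        # addinivalue_line appends to the effective [pytest] markers setting
        config.addinivalue_line("markers", line)
```

Registered markers can then be selected with `pytest -m gpu` or excluded with `pytest -m "not slow"`.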
## Example Test File

Complete example showing best practices:

```python
"""Tests for audio processing pipeline."""
import pytest
from pathlib import Path
from unittest.mock import patch, MagicMock
import numpy as np
from pipeline import download_audio, separate_sources, transcribe_audio


class TestAudioDownload:
    """Test audio download stage."""

    @patch('pipeline.yt_dlp.YoutubeDL')
    def test_success(self, mock_ydl_class, temp_storage_dir):
        """Test successful audio download."""
        mock_ydl = MagicMock()
        mock_ydl_class.return_value.__enter__.return_value = mock_ydl

        result = download_audio("https://youtube.com/watch?v=test", temp_storage_dir)

        assert result.exists()
        assert result.suffix == ".wav"
        mock_ydl.download.assert_called_once()

    @patch('pipeline.yt_dlp.YoutubeDL')
    def test_network_error(self, mock_ydl_class, temp_storage_dir):
        """Test handling of network error."""
        import yt_dlp
        mock_ydl = MagicMock()
        mock_ydl.download.side_effect = yt_dlp.utils.DownloadError("Network error")
        mock_ydl_class.return_value.__enter__.return_value = mock_ydl

        with pytest.raises(Exception) as exc_info:
            download_audio("https://youtube.com/watch?v=test", temp_storage_dir)

        assert "Network error" in str(exc_info.value)
```

docs/testing/baseline-accuracy.md DELETED
@@ -1,178 +0,0 @@

# Baseline Accuracy Report

**Date**: 2024-12-24
**Pipeline Version**: Phase 1 Complete (MusicXML corruption fixes, MIDI export, rate limiting)
**Test Suite**: 10 diverse piano videos

## Executive Summary

This report establishes the baseline transcription accuracy for the Rescored MVP pipeline after Phase 1 improvements.

**Initial Test Results** (Before Bug Fixes):
- Overall Success Rate: **10%** (1/10 videos)
- Videos Blocked: 3 (YouTube copyright/availability)
- Code Bugs Found: 6 (all fixed ✅)
- Successful Test: simple_melody (2,588 notes, 122 measures)

**Expected After Fixes**:
- Success Rate: **70-80%** (7-8/10 videos, excluding blocked ones)
- All code bugs resolved
- Need to replace 3 blocked videos with alternatives

**Key Finding**: Measure timing accuracy is imperfect (78% of measures show duration warnings), but this is expected for ML-based transcription. MusicXML files load successfully in notation software.

## Test Videos

| ID | Description | Difficulty | Expected Accuracy | URL |
|----|-------------|------------|-------------------|-----|
| simple_melody | C major scale practice | Easy | >80% | [Link](https://www.youtube.com/watch?v=TK1Ij_-mank) |
| twinkle_twinkle | Twinkle Twinkle Little Star | Easy | >75% | [Link](https://www.youtube.com/watch?v=YCZ_d_4ZEqk) |
| fur_elise | Beethoven - Für Elise (simplified) | Medium | 60-70% | [Link](https://www.youtube.com/watch?v=_mVW8tgGY_w) |
| chopin_nocturne | Chopin - Nocturne Op. 9 No. 2 | Hard | 50-60% | [Link](https://www.youtube.com/watch?v=9E6b3swbnWg) |
| canon_in_d | Pachelbel - Canon in D | Medium | 60-70% | [Link](https://www.youtube.com/watch?v=NlprozGcs80) |
| river_flows | Yiruma - River Flows in You | Medium | 60-70% | [Link](https://www.youtube.com/watch?v=7maJOI3QMu0) |
| moonlight_sonata | Beethoven - Moonlight Sonata | Medium | 60-70% | [Link](https://www.youtube.com/watch?v=4Tr0otuiQuU) |
| jazz_blues | Simple jazz blues piano | Medium | 55-65% | [Link](https://www.youtube.com/watch?v=F3W_alUuFkA) |
| claire_de_lune | Debussy - Clair de Lune | Hard | 50-60% | [Link](https://www.youtube.com/watch?v=WNcsUNKlAKw) |
| la_campanella | Liszt - La Campanella | Very Hard | 40-50% | [Link](https://www.youtube.com/watch?v=MD6xMyuZls0) |

## Results

### Overall Statistics

(To be filled after test completion)

- **Total Tests**: 10
- **Successful**: TBD
- **Failed**: TBD
- **Success Rate**: TBD%

### Per-Video Results

#### Easy Difficulty (2 videos)

**simple_melody** ✅:
- Status: **SUCCESS**
- MIDI Notes: 2,588
- Measures: 122
- Duration: 245.2 seconds
- Separation Quality: 99.3% energy in 'other' stem (excellent)
- Measure Warnings: 95/122 (78%) - typical for ML transcription
- Issues: None - clean transcription

**twinkle_twinkle** ❌:
- Status: **BLOCKED**
- Error: "Video unavailable"
- Action: Replace with alternative video

#### Medium Difficulty (5 videos)

**fur_elise** ❌:
- Status: **BLOCKED**
- Error: "Video unavailable"
- Action: Replace with alternative video

**canon_in_d** ❌ → ✅:
- Status: **FIXED**
- Error: NoneType velocity comparison (Bug #2a)
- Fix Applied: Safe velocity handling in deduplication
- Expected: Success on re-run

**river_flows** ❌ → ✅:
- Status: **FIXED**
- Error: NoneType velocity comparison (Bug #2a)
- Fix Applied: Safe velocity handling
- Expected: Success on re-run

**moonlight_sonata** ❌ → ✅:
- Status: **FIXED**
- Error: NoneType velocity comparison (Bug #2a)
- Fix Applied: Safe velocity handling
- Expected: Success on re-run

**jazz_blues** ❌:
- Status: **BLOCKED**
- Error: "Blocked on copyright grounds"
- Action: Replace with public domain jazz piano

#### Hard Difficulty (2 videos)

**chopin_nocturne** ❌ → ✅:
- Status: **FIXED**
- Error: 2048th note duration in measure 129 (Bug #2b)
- Fix Applied: Increased minimum duration threshold to 128th note
- Expected: Success on re-run

**claire_de_lune** ❌ → ✅:
- Status: **FIXED**
- Error: 2048th note duration in measure 30 (Bug #2b)
- Fix Applied: Increased minimum duration threshold
- Expected: Success on re-run

#### Very Hard Difficulty (1 video)

**la_campanella** ❌ → ✅:
- Status: **FIXED**
- Error: NoneType velocity comparison (Bug #2a)
- Fix Applied: Safe velocity handling
- Expected: Success on re-run (may have low accuracy due to extreme difficulty)

## Common Failure Modes

Detailed analysis in [failure-modes.md](failure-modes.md)

### 1. Video Availability (30% of failures)
- YouTube blocking, copyright claims, unavailable videos
- **Solution**: Replace with Creative Commons alternatives

### 2. Code Bugs - All Fixed ✅ (60% of failures)
- **Bug 2a**: NoneType velocity comparison (4 videos)
  - Fixed in [pipeline.py:403-409](../../backend/pipeline.py#L403-L409)
- **Bug 2b**: 2048th note duration errors (2 videos)
  - Fixed in [pipeline.py:465-502](../../backend/pipeline.py#L465-L502)

### 3. Measure Timing Accuracy (78% imperfect)
- Most measures deviate from exactly 4.0 beats
- Range: 0.0 to 7.83 beats (should be 4.0)
- **Root causes**: basic-pitch timing, duration snapping, polyphony
- **Impact**: MusicXML loads but rhythms need manual correction
- **Status**: Expected limitation for ML transcription - Phase 3 will improve

## Accuracy by Difficulty

| Difficulty | Avg Success Rate | Avg Notes | Avg Measures | Notes |
|------------|------------------|-----------|--------------|-------|
| Easy | TBD | TBD | TBD | TBD |
| Medium | TBD | TBD | TBD | TBD |
| Hard | TBD | TBD | TBD | TBD |
| Very Hard | TBD | TBD | TBD | TBD |

## Known Limitations

Based on the Phase 1 implementation:

1. **Measure Timing**: Many measures show duration warnings (3.5-6.5 beats instead of exactly 4.0). This is expected due to:
   - basic-pitch output not being perfectly aligned to beats
   - Duration snapping to the nearest valid note value
   - Imperfect tempo detection

2. **MusicXML Warnings**: music21 reports some "overfull measures" when parsing. These are handled gracefully but indicate timing imperfections.

3. **Single Staff Only**: Grand staff (treble + bass) is disabled in Phase 1 due to polyphony issues.

4. **Piano Only**: Currently only the "other" stem from Demucs is transcribed, assuming piano/keyboard content.

## Recommendations for Phase 3

(To be filled based on failure analysis)

1. **Parameter Tuning**: TBD
2. **Model Improvements**: TBD
3. **Post-Processing**: TBD
4. **Source Separation**: TBD

## Appendix: Raw Test Data

Full test results JSON: `/tmp/rescored/accuracy_test_results.json`

Individual test outputs in: `/tmp/rescored/temp/accuracy_test_*/`

docs/testing/failure-modes.md DELETED
@@ -1,216 +0,0 @@

# Failure Mode Analysis

**Date**: 2024-12-24
**Test Suite**: Phase 2 Accuracy Baseline (10 videos)
**Pipeline Version**: Phase 1 Complete + Bug Fixes

## Executive Summary

Initial accuracy testing revealed **3 major failure categories** affecting 9 out of 10 test videos:

1. **Video Availability** (30% of failures) - YouTube blocking/copyright
2. **Code Bugs** (60% of failures) - NoneType errors and 2048th note duration issues
3. **MusicXML Export** (20% of failures) - Impossible duration errors

**All code bugs have been fixed.** The success rate is expected to improve significantly on re-run.

## Failure Categories

### 1. Video Availability Issues (3 videos - 30%)

**Videos Affected:**
- `twinkle_twinkle` - "Video unavailable"
- `fur_elise` - "Video unavailable"
- `jazz_blues` - "Blocked in your country on copyright grounds"

**Root Cause:** YouTube access restrictions, not pipeline issues

**Mitigation:**
- Replace with alternative videos for the same difficulty level
- Use Creative Commons licensed videos
- Host test videos on alternative platforms

**Impact:** Not a pipeline issue - test videos will be replaced

---

### 2. Code Bugs - Fixed ✅ (6 videos - 60%)

#### Bug 2a: NoneType Velocity Comparison (4 videos)

**Error:** `'<' not supported between instances of 'int' and 'NoneType'`

**Videos Affected:**
- `canon_in_d`
- `river_flows`
- `moonlight_sonata`
- `la_campanella`

**Root Cause:** In `_deduplicate_overlapping_notes()` at [pipeline.py:403-407](../backend/pipeline.py#L403-L407), the code tried to sort notes by velocity, but `note.volume.velocity` can return `None`.

**Fix Applied:**
```python
def get_velocity(note):
    if hasattr(note, 'volume') and hasattr(note.volume, 'velocity'):
        vel = note.volume.velocity
        return vel if vel is not None else 64
    return 64

pitch_notes.sort(key=lambda x: (x.quarterLength, get_velocity(x)), reverse=True)
```

**Status:** ✅ Fixed in [pipeline.py:403-409](../backend/pipeline.py#L403-L409)

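The defaulting behavior can be exercised without music21 at all; a self-contained demonstration of the fixed sort using stand-in note objects (`SimpleNamespace` stand-ins are illustrative, not the real music21 `Note` class):

```python
from types import SimpleNamespace

def get_velocity(note):
    """Velocity with a safe default of 64 when the attribute is missing or None."""
    if hasattr(note, 'volume') and hasattr(note.volume, 'velocity'):
        vel = note.volume.velocity
        return vel if vel is not None else 64
    return 64

# Stand-in notes: one carries velocity None, as transcription output sometimes does
notes = [
    SimpleNamespace(quarterLength=1.0, volume=SimpleNamespace(velocity=None)),
    SimpleNamespace(quarterLength=1.0, volume=SimpleNamespace(velocity=90)),
    SimpleNamespace(quarterLength=2.0, volume=SimpleNamespace(velocity=30)),
]

# The original key `x.volume.velocity` would raise TypeError on the None value;
# the wrapped key sorts cleanly: longest notes first, then loudest
notes.sort(key=lambda x: (x.quarterLength, get_velocity(x)), reverse=True)
assert [get_velocity(n) for n in notes] == [30, 90, 64]
```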
---
|
| 65 |
-
|
| 66 |
-
#### Bug 2b: 2048th Note Duration (2 videos)
|
| 67 |
-
|
| 68 |
-
**Error:** `In part (Piano), measure (X): Cannot convert "2048th" duration to MusicXML (too short).`
|
| 69 |
-
|
| 70 |
-
**Videos Affected:**
|
| 71 |
-
- `chopin_nocturne` (measure 129)
|
| 72 |
-
- `claire_de_lune` (measure 30)
|
| 73 |
-
|
| 74 |
-
**Root Cause:** `music21.makeMeasures()` creates extremely short rests (2048th notes) when filling gaps between notes. MusicXML export fails because these durations are too short to represent.
|
| 75 |
-
|
| 76 |
-
**Previous Attempts:**
|
| 77 |
-
1. β Filtered notes < 64th note (0.0625) before `makeMeasures()` - didn't work
|
| 78 |
-
2. β Removed notes < 64th note after `makeMeasures()` - still had issues
|
| 79 |
-
|
| 80 |
-
**Final Fix:**
|
| 81 |
-
- Increased minimum duration threshold to **128th note** (0.03125)
|
| 82 |
-
- Added logging to show how many notes/rests were removed
|
| 83 |
-
- Applied in `_remove_impossible_durations()` at [pipeline.py:465-502](../backend/pipeline.py#L465-L502)
|
| 84 |
-
|
| 85 |
-
**Status:** β
Fixed - more aggressive filtering
|
| 86 |
-
|
| 87 |
-
---
|
| 88 |
-
|
| 89 |
-
### 3. Successful Test Analysis
|
| 90 |
-
|
| 91 |
-
**Video:** `simple_melody` (C major scale practice, Easy difficulty)
|
| 92 |
-
|
| 93 |
-
**Results:**
|
| 94 |
-
- β
Successfully generated MusicXML
|
| 95 |
-
- **2,588 notes** detected
|
| 96 |
-
- **122 measures** created
|
| 97 |
-
- **245 seconds** duration
|
| 98 |
-
- **99.3% energy** preserved in 'other' stem (excellent separation)
|
| 99 |
-
|
| 100 |
-
**Key Metrics:**
|
| 101 |
-
|
| 102 |
-
| Metric | Value | Assessment |
|
| 103 |
-
|--------|-------|------------|
|
| 104 |
-
| Note density | 5.36 notes/sec | Reasonable for piano |
|
| 105 |
-
| Pitch range | G1 to A6 (62 semitones) | Full piano range |
|
| 106 |
-
| Polyphony | ~1.6 avg, ~6 max | Modest polyphony |
|
| 107 |
-
| Short notes | 271 (21%) under 200ms | Acceptable |
|
| 108 |
-
| Measure warnings | 95/122 (78%) | **High** - timing imperfect |
|
| 109 |
-
|
| 110 |
-
**Measure Timing Issues:**
|
| 111 |
-
|
| 112 |
-
78% of measures showed duration warnings (range 0.0 - 7.83 beats instead of exactly 4.0). Examples:
|
| 113 |
-
- Measure 1: 0.00 beats (empty)
|
| 114 |
-
- Measure 30: 6.41 beats (overfull)
|
| 115 |
-
- Measure 69: 7.33 beats (very overfull)
- Measure 77: 7.83 beats (worst case)

**Root Causes:**

1. **basic-pitch timing** not aligned to musical beats
2. **Duration snapping** to nearest valid note value loses precision
3. **Tempo detection** may be inaccurate
4. **Polyphonic overlaps** creating extra duration

**Impact:** MusicXML loads in notation software but rhythms are imperfect. This is expected with ML-based transcription.

---

## Common Patterns

### Pattern 1: Quiet Audio Detection

- Diagnostic shows RMS energy of 0.0432 (very quiet)
- 20% silence in audio
- basic-pitch may struggle with quiet inputs

### Pattern 2: Separation Quality

- For `simple_melody`: 99.3% energy in 'other' stem ✅
- Only 0.2% in 'no_other' stem (excellent isolation)
- Demucs successfully isolated piano

### Pattern 3: Measure Duration Accuracy

- **Only 22%** of measures have exactly 4.0 beats
- **78%** show timing deviations
- Range: -4.0 to +3.83 beats deviation
- Largest errors in complex sections (likely polyphony)

---

## Recommendations

### Immediate Actions (Phase 2 completion)

1. **Replace unavailable videos** with Creative Commons alternatives
2. **Re-run accuracy suite** with bug fixes
3. **Document actual baseline** with successful tests

### Phase 3 Improvements (Accuracy Tuning)

1. **Tempo Detection:**
   - Implement better tempo detection (analyze beat patterns)
   - Consider a fixed-tempo option for practice scales

2. **Quantization:**
   - Improve rhythmic quantization to align with detected beats
   - Consider time signature detection

3. **Post-Processing:**
   - Add measure duration normalization
   - Stretch/compress note timings to fit exactly 4.0 beats

4. **Parameter Tuning:**
   - Test different `onset-threshold` values (current: 0.5)
   - Test different `frame-threshold` values (current: 0.4)
   - Experiment with `minimum-note-length`
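The measure-normalization idea in item 3 can be sketched as follows. This is a hypothetical helper, not the pipeline's actual code; it assumes notes are `(offset, duration)` pairs in quarter-note beats, measured from the start of the measure.

```python
def normalize_measure(notes, target_beats=4.0):
    """Scale offsets and durations so the measure spans exactly target_beats.

    `notes` is a list of (offset, duration) tuples in quarter-note beats,
    relative to the measure start. Hypothetical helper, not pipeline code.
    """
    if not notes:
        return []
    # The measure's actual length is where the last note ends.
    actual = max(offset + duration for offset, duration in notes)
    if actual <= 0:
        return list(notes)
    scale = target_beats / actual
    return [(offset * scale, duration * scale) for offset, duration in notes]
```

Applied after quantization, this guarantees every measure sums to 4.0 beats, at the cost of slightly warping note timings within overfull or underfull measures.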
### Alternative Models (Phase 3 - Optional)

Consider testing:
- **MT3** (Google's multi-task music transcription Transformer) - better rhythm accuracy
- **htdemucs_6s** - 6-stem Demucs model with a dedicated piano stem
- **Omnizart** - specialized for classical music

---

## Success Criteria

After fixes and a re-run, we expect:

- ✅ **Video availability**: 7-8 working videos (replacing blocked ones)
- ✅ **Code bugs**: 0% failure rate (all fixed)
- ✅ **MusicXML export**: 100% success for available videos
- 🎯 **Overall success rate**: 70-80% (up from 10%)

Measure timing accuracy will remain imperfect (~78% of measures with warnings), but this is expected for the MVP. Phase 3 will focus on improving timing accuracy.

---

## Appendix: Error Details

### NoneType Error Stack Trace

```
File "pipeline.py", line 403
  pitch_notes.sort(key=lambda x: (x.quarterLength, x.volume.velocity if ...))
TypeError: '<' not supported between instances of 'int' and 'NoneType'
```
### 2048th Note Error Stack Trace

```
File "music21/musicxml/m21ToXml.py", line 4702
  mxNormalType.text = typeToMusicXMLType(tup.durationNormal.type)
MusicXMLExportException: In part (Piano), measure (129): Cannot convert "2048th" duration to MusicXML (too short).
```
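One way to avoid this export failure, sketched under the assumption that durations are music21-style `quarterLength` values: clamp anything shorter than the smallest note value you are willing to export (a 64th note here, which is an assumption, not a music21 limit) before writing MusicXML.

```python
MIN_QUARTER_LENGTH = 0.0625  # 64th note; assumed safe lower bound for export

def clamp_short_durations(quarter_lengths, min_ql=MIN_QUARTER_LENGTH):
    """Round sub-64th durations up so export never sees values like '2048th'."""
    return [max(ql, min_ql) for ql in quarter_lengths]
```

A 2048th note has a `quarterLength` of 1/512 (0.001953125), so it gets rounded up to 0.0625 while normal durations pass through unchanged.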
---

**Last Updated**: 2024-12-24
**Next Review**: After accuracy suite re-run
docs/testing/frontend-testing.md
DELETED
@@ -1,653 +0,0 @@

# Frontend Testing Guide

Comprehensive guide for testing the Rescored frontend.

## Table of Contents

- [Setup](#setup)
- [Running Tests](#running-tests)
- [Test Structure](#test-structure)
- [Writing Tests](#writing-tests)
- [Testing Patterns](#testing-patterns)
- [Troubleshooting](#troubleshooting)

## Setup

### Install Test Dependencies

```bash
cd frontend
npm install
```

Test dependencies (already in `package.json`):
- `vitest`: Test framework
- `@testing-library/react`: React testing utilities
- `@testing-library/user-event`: User interaction simulation
- `@testing-library/jest-dom`: DOM matchers
- `jsdom`: DOM implementation for Node.js
- `@vitest/ui`: Interactive test UI
- `@vitest/coverage-v8`: Coverage reporting

### Configuration

Test configuration is in `vitest.config.ts`:

```typescript
export default defineConfig({
  test: {
    globals: true,
    environment: 'jsdom',
    setupFiles: ['./src/tests/setup.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html', 'lcov'],
    },
  },
});
```

## Running Tests

### Basic Commands

```bash
# Run all tests
npm test

# Run in watch mode
npm test -- --watch

# Run with UI
npm run test:ui

# Run with coverage
npm run test:coverage

# Run a specific file
npm test -- src/tests/api/client.test.ts

# Run tests matching a pattern
npm test -- --grep "JobSubmission"
```

### Watch Mode

Watch mode automatically re-runs tests when files change:

```bash
npm test -- --watch

# Watch a specific file
npm test -- --watch src/tests/components/NotationCanvas.test.tsx
```

### Coverage Reports

```bash
# Generate coverage report
npm run test:coverage

# Open HTML report
open coverage/index.html
```

## Test Structure

### Test Files

Component tests live alongside components or in `src/tests/`:

```
frontend/src/
├── components/
│   ├── JobSubmission.tsx
│   └── JobSubmission.test.tsx      # Option 1: Co-located
└── tests/
    ├── setup.ts                    # Test configuration
    ├── fixtures.ts                 # Shared test data
    ├── components/
    │   └── JobSubmission.test.tsx  # Option 2: Separate directory
    └── api/
        └── client.test.ts
```

### Test Organization

```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen } from '@testing-library/react';
import Component from './Component';

describe('Component', () => {
  beforeEach(() => {
    // Setup before each test
  });

  describe('Rendering', () => {
    it('should render correctly', () => {
      // Test rendering
    });
  });

  describe('Interactions', () => {
    it('should handle user input', async () => {
      // Test interactions
    });
  });

  describe('Edge Cases', () => {
    it('should handle empty state', () => {
      // Test edge cases
    });
  });
});
```

## Writing Tests

### Basic Component Test

```typescript
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import MyComponent from './MyComponent';

describe('MyComponent', () => {
  it('should render text', () => {
    render(<MyComponent text="Hello" />);
    expect(screen.getByText('Hello')).toBeInTheDocument();
  });

  it('should handle button click', async () => {
    const user = userEvent.setup();
    const handleClick = vi.fn();

    render(<MyComponent onClick={handleClick} />);

    const button = screen.getByRole('button');
    await user.click(button);

    expect(handleClick).toHaveBeenCalledTimes(1);
  });
});
```

### Testing with User Interactions

Use `@testing-library/user-event` for realistic interactions:

```typescript
import userEvent from '@testing-library/user-event';

it('should accept user input', async () => {
  const user = userEvent.setup();
  render(<JobSubmission />);

  const input = screen.getByPlaceholderText(/youtube url/i);

  // Type into input
  await user.type(input, 'https://www.youtube.com/watch?v=...');
  expect(input).toHaveValue('https://www.youtube.com/watch?v=...');

  // Click button
  const button = screen.getByRole('button', { name: /submit/i });
  await user.click(button);

  // Verify action
  await waitFor(() => {
    expect(mockSubmit).toHaveBeenCalled();
  });
});
```

### Testing Async Operations

```typescript
import { waitFor } from '@testing-library/react';

it('should load data', async () => {
  const mockFetch = vi.fn().mockResolvedValue({
    ok: true,
    json: async () => ({ data: 'test' }),
  });
  global.fetch = mockFetch;

  render(<DataComponent />);

  await waitFor(() => {
    expect(screen.getByText('test')).toBeInTheDocument();
  });
});
```

### Mocking Dependencies

#### Mock API Client

```typescript
vi.mock('../../api/client', () => ({
  submitTranscription: vi.fn(),
  getJobStatus: vi.fn(),
  downloadScore: vi.fn(),
}));

import { submitTranscription } from '../../api/client';

it('should call API', async () => {
  const mockSubmit = vi.mocked(submitTranscription);
  mockSubmit.mockResolvedValue({ job_id: '123', status: 'queued' });

  // Test component that uses submitTranscription
  // ...

  expect(mockSubmit).toHaveBeenCalledWith('https://youtube.com/...');
});
```

#### Mock Zustand Store

```typescript
import { renderHook, act } from '@testing-library/react';
import { useScoreStore } from '../../store/scoreStore';

it('should update store', () => {
  const { result } = renderHook(() => useScoreStore());

  act(() => {
    result.current.setMusicXML('<musicxml>...</musicxml>');
  });

  expect(result.current.musicXML).toBe('<musicxml>...</musicxml>');
});
```

#### Mock VexFlow

```typescript
// In setup.ts
vi.mock('vexflow', () => ({
  Flow: {
    Renderer: vi.fn(() => ({
      resize: vi.fn(),
      getContext: vi.fn(() => ({
        clear: vi.fn(),
        setFont: vi.fn(),
      })),
    })),
    Stave: vi.fn(() => ({
      addClef: vi.fn().mockReturnThis(),
      addTimeSignature: vi.fn().mockReturnThis(),
      setContext: vi.fn().mockReturnThis(),
      draw: vi.fn(),
    })),
  },
}));
```

## Testing Patterns

### Testing Form Submission

```typescript
it('should submit form with valid data', async () => {
  const user = userEvent.setup();
  const onSubmit = vi.fn();

  render(<Form onSubmit={onSubmit} />);

  // Fill out form
  await user.type(screen.getByLabelText(/url/i), 'https://youtube.com/...');

  // Submit
  await user.click(screen.getByRole('button', { name: /submit/i }));

  // Verify
  await waitFor(() => {
    expect(onSubmit).toHaveBeenCalledWith({
      url: 'https://youtube.com/...',
    });
  });
});
```

### Testing Error States

```typescript
it('should show error message', async () => {
  const mockFetch = vi.fn().mockRejectedValue(new Error('Network error'));
  global.fetch = mockFetch;

  render(<Component />);

  await waitFor(() => {
    expect(screen.getByText(/network error/i)).toBeInTheDocument();
  });
});
```

### Testing Loading States

```typescript
it('should show loading indicator', async () => {
  const mockFetch = vi.fn(() =>
    new Promise(resolve => setTimeout(() => resolve({ ok: true }), 100))
  );
  global.fetch = mockFetch;

  render(<Component />);

  // Should show loading
  expect(screen.getByText(/loading/i)).toBeInTheDocument();

  // Should hide loading after data loads
  await waitFor(() => {
    expect(screen.queryByText(/loading/i)).not.toBeInTheDocument();
  });
});
```

### Testing WebSocket Connections

```typescript
it('should handle WebSocket messages', () => {
  const mockWS = {
    addEventListener: vi.fn(),
    send: vi.fn(),
    close: vi.fn(),
  };

  global.WebSocket = vi.fn(() => mockWS) as any;

  render(<WebSocketComponent />);

  // Get message handler
  const messageHandler = mockWS.addEventListener.mock.calls.find(
    call => call[0] === 'message'
  )?.[1];

  // Simulate message
  messageHandler?.({ data: JSON.stringify({ type: 'progress', progress: 50 }) });

  // Verify UI updated
  expect(screen.getByText(/50%/)).toBeInTheDocument();
});
```

### Testing Conditional Rendering

```typescript
it('should render different states', () => {
  const { rerender } = render(<StatusIndicator status="loading" />);
  expect(screen.getByText(/loading/i)).toBeInTheDocument();

  rerender(<StatusIndicator status="success" />);
  expect(screen.getByText(/success/i)).toBeInTheDocument();

  rerender(<StatusIndicator status="error" />);
  expect(screen.getByText(/error/i)).toBeInTheDocument();
});
```

### Testing Canvas/VexFlow Components

```typescript
it('should render notation', () => {
  // Mock canvas context
  const mockContext = {
    fillRect: vi.fn(),
    clearRect: vi.fn(),
    beginPath: vi.fn(),
    stroke: vi.fn(),
  };

  HTMLCanvasElement.prototype.getContext = vi.fn(() => mockContext) as any;

  const { container } = render(<NotationCanvas musicXML={sampleXML} />);

  // Verify canvas or SVG exists
  const canvas = container.querySelector('canvas');
  expect(canvas).toBeInTheDocument();
});
```

### Snapshot Testing

Use snapshots for stable UI components:

```typescript
it('should match snapshot', () => {
  const { container } = render(<StaticComponent />);
  expect(container).toMatchSnapshot();
});
```

**Update snapshots:**
```bash
npm test -- -u
```

## Testing Custom Hooks

```typescript
import { renderHook, act } from '@testing-library/react';
import { useCustomHook } from './useCustomHook';

it('should handle state changes', () => {
  const { result } = renderHook(() => useCustomHook());

  expect(result.current.count).toBe(0);

  act(() => {
    result.current.increment();
  });

  expect(result.current.count).toBe(1);
});
```

## Accessibility Testing

```typescript
it('should be accessible', () => {
  render(<Component />);

  // Check for proper labels
  expect(screen.getByLabelText(/input field/i)).toBeInTheDocument();

  // Check for ARIA attributes
  expect(screen.getByRole('button')).toHaveAttribute('aria-label', 'Submit');

  // Check keyboard navigation
  const button = screen.getByRole('button');
  button.focus();
  expect(button).toHaveFocus();
});
```

## Troubleshooting

### Common Issues

**Canvas/VexFlow Errors**

```typescript
// Mock canvas in setup.ts
beforeEach(() => {
  HTMLCanvasElement.prototype.getContext = vi.fn(() => ({
    fillRect: vi.fn(),
    // ... other canvas methods
  })) as any;
});
```

**WebSocket Errors**

```typescript
// Mock WebSocket in setup.ts
global.WebSocket = vi.fn(() => ({
  addEventListener: vi.fn(),
  send: vi.fn(),
  close: vi.fn(),
  readyState: WebSocket.OPEN,
})) as any;
```

**Module Import Errors**

```typescript
// Use vi.mock at the top of the test file
vi.mock('external-module', () => ({
  default: vi.fn(),
  namedExport: vi.fn(),
}));
```

**Async Test Timeouts**

```typescript
// Increase timeout for slow tests
it('slow test', async () => {
  // ...
}, { timeout: 10000 });
```

### Debugging Tests

```bash
# Run with UI for interactive debugging
npm run test:ui

# Run a specific test in watch mode
npm test -- --watch --grep "test name"

# Debug in VS Code:
# add a breakpoint and use the "Debug Test" code lens
```

### Performance Issues

```bash
# Identify slow tests
npm test -- --reporter=verbose

# Run tests in parallel (default)
npm test

# Run sequentially if needed
npm test -- --no-threads
```

## Best Practices

1. **Test user behavior, not implementation**: Focus on what users see and do
2. **Use accessible queries**: Prefer `getByRole`, `getByLabelText` over `getByTestId`
3. **Avoid testing implementation details**: Don't test internal state or methods
4. **Keep tests simple**: Each test should verify one thing
5. **Use realistic data**: Test with data similar to production
6. **Clean up**: Always clean up side effects (timers, listeners)
7. **Mock external dependencies**: Don't make real API calls or WebSocket connections
8. **Test edge cases**: Empty states, errors, loading states

## Query Priority

Use queries in this order (most preferred first):

1. **Accessible Queries**:
   - `getByRole`
   - `getByLabelText`
   - `getByPlaceholderText`
   - `getByText`

2. **Semantic Queries**:
   - `getByAltText`
   - `getByTitle`

3. **Test IDs** (last resort):
   - `getByTestId`

Example:

```typescript
// Good
const button = screen.getByRole('button', { name: /submit/i });
const input = screen.getByLabelText(/email/i);

// Acceptable
const image = screen.getByAltText('Logo');

// Last resort
const element = screen.getByTestId('custom-element');
```

## Example Test File

Complete example showing best practices:

```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import JobSubmission from './JobSubmission';

vi.mock('../../api/client', () => ({
  submitTranscription: vi.fn(),
}));

import { submitTranscription } from '../../api/client';

describe('JobSubmission', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  describe('Rendering', () => {
    it('should render input and button', () => {
      render(<JobSubmission />);

      expect(screen.getByPlaceholderText(/youtube url/i)).toBeInTheDocument();
      expect(screen.getByRole('button', { name: /transcribe/i })).toBeInTheDocument();
    });
  });

  describe('User Interactions', () => {
    it('should accept and submit valid URL', async () => {
      const user = userEvent.setup();
      const mockSubmit = vi.mocked(submitTranscription);
      mockSubmit.mockResolvedValue({ job_id: '123', status: 'queued' });

      render(<JobSubmission />);

      const input = screen.getByPlaceholderText(/youtube url/i);
      const button = screen.getByRole('button', { name: /transcribe/i });

      await user.type(input, 'https://www.youtube.com/watch?v=...');
      await user.click(button);

      await waitFor(() => {
        expect(mockSubmit).toHaveBeenCalledWith(
          'https://www.youtube.com/watch?v=...',
          expect.any(Object)
        );
      });
    });
  });

  describe('Error Handling', () => {
    it('should show error for invalid URL', async () => {
      const user = userEvent.setup();
      render(<JobSubmission />);

      const input = screen.getByPlaceholderText(/youtube url/i);
      const button = screen.getByRole('button', { name: /transcribe/i });

      await user.type(input, 'invalid-url');
      await user.click(button);

      await waitFor(() => {
        expect(screen.getByText(/invalid/i)).toBeInTheDocument();
      });
    });
  });
});
```
docs/testing/overview.md
DELETED
@@ -1,315 +0,0 @@

# Testing Guide

Complete testing guide for the Rescored project.

## Quick Start

### Backend Tests

```bash
cd backend
pip install -r requirements-test.txt
pytest --cov
```

### Frontend Tests

```bash
cd frontend
npm install
npm test
```

## Testing Philosophy

Rescored follows these testing principles:

1. **Test behavior, not implementation** - Verify what the code does, not how
2. **Write tests that give confidence** - Focus on high-value tests that catch real bugs
3. **Keep tests maintainable** - Tests should be easy to understand and modify
4. **Test at the right level** - Unit tests for logic, integration tests for workflows
5. **Fast feedback loops** - Tests should run quickly to enable rapid development

## Test Suites

### Backend Test Suite (`backend/tests/`)

- **Unit Tests** (`test_utils.py`) - URL validation, video availability checks
- **API Tests** (`test_api.py`) - FastAPI endpoints, WebSocket connections
- **Pipeline Tests** (`test_pipeline.py`) - Audio processing, transcription, MusicXML generation
- **Task Tests** (`test_tasks.py`) - Celery workers, job processing, progress updates

**Features**: Mocked external dependencies (yt-dlp, Redis, ML models), temporary file handling, parametrized tests, coverage reporting

### Frontend Test Suite (`frontend/src/tests/`)

- **API Client Tests** (`api/client.test.ts`) - HTTP requests, WebSocket connections
- **Component Tests** (`components/`) - JobSubmission, NotationCanvas, PlaybackControls
- **Store Tests** (`store/useScoreStore.test.ts`) - Zustand state management

**Features**: React Testing Library, user event simulation, mocked VexFlow and Tone.js, coverage reporting

## Coverage Goals

| Component | Target | Priority |
|-----------|--------|----------|
| Backend Utils | 90%+ | High |
| Backend Pipeline | 85%+ | Critical |
| Backend API | 80%+ | High |
| Frontend API Client | 85%+ | Critical |
| Frontend Components | 75%+ | High |
| Frontend Store | 80%+ | High |

## Running Tests

### Backend

```bash
# Run all tests
pytest

# With coverage
pytest --cov --cov-report=html

# Specific tests
pytest tests/test_utils.py
pytest tests/test_utils.py::TestValidateYouTubeURL::test_valid_watch_url

# By category
pytest -m unit          # Only unit tests
pytest -m integration   # Only integration tests
pytest -m "not slow"    # Exclude slow tests
pytest -m "not gpu"     # Exclude GPU tests

# Debugging
pytest -vv    # Verbose output
pytest -s     # Show print statements
pytest --pdb  # Drop into debugger on failure
pytest --lf   # Run last failed tests
```
### Frontend

```bash
# Run all tests
npm test

# Watch mode
npm test -- --watch

# With UI
npm run test:ui

# With coverage
npm run test:coverage

# Specific tests
npm test -- src/tests/api/client.test.ts
npm test -- --grep "JobSubmission"
```
## Test Structure

### Backend

```
backend/tests/
├── conftest.py        # Shared fixtures (temp dirs, mock Redis, sample files)
├── test_utils.py      # Utility function tests
├── test_api.py        # API endpoint tests
├── test_pipeline.py   # Audio processing tests
└── test_tasks.py      # Celery task tests
```

### Frontend

```
frontend/src/tests/
├── setup.ts           # Test configuration (mocks for VexFlow, Tone.js, WebSocket)
├── fixtures.ts        # Shared test data (MusicXML, job responses, etc.)
├── api/client.test.ts
├── components/
│   ├── JobSubmission.test.tsx
│   ├── NotationCanvas.test.tsx
│   └── PlaybackControls.test.tsx
└── store/useScoreStore.test.ts
```
## Common Patterns

### Backend Testing

```python
from unittest.mock import MagicMock, patch

import pytest


# Mock external services
@patch('pipeline.yt_dlp.YoutubeDL')
def test_download_audio(mock_ydl_class, temp_storage_dir):
    mock_ydl = MagicMock()
    mock_ydl_class.return_value.__enter__.return_value = mock_ydl

    result = download_audio("https://youtube.com/...", temp_storage_dir)

    assert result.exists()
    assert result.suffix == ".wav"


# Test API endpoints
def test_submit_transcription(test_client):
    response = test_client.post(
        "/api/v1/transcribe",
        json={"youtube_url": "https://www.youtube.com/watch?v=..."}
    )

    assert response.status_code == 201
    assert "job_id" in response.json()


# Parametrized tests
@pytest.mark.parametrize("url,expected_valid", [
    ("https://www.youtube.com/watch?v=dQw4w9WgXcQ", True),
    ("https://vimeo.com/12345", False),
])
def test_url_validation(url, expected_valid):
    is_valid, _ = validate_youtube_url(url)
    assert is_valid == expected_valid
```
### Frontend Testing

```typescript
import { render, screen, waitFor, renderHook, act } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { it, expect, vi } from 'vitest';

// Test components with user interaction
it('should submit form', async () => {
  const user = userEvent.setup();
  const onSubmit = vi.fn();

  render(<JobSubmission onSubmit={onSubmit} />);

  const input = screen.getByPlaceholderText(/youtube url/i);
  await user.type(input, 'https://www.youtube.com/watch?v=...');

  const button = screen.getByRole('button', { name: /submit/i });
  await user.click(button);

  await waitFor(() => {
    expect(onSubmit).toHaveBeenCalled();
  });
});

// Mock API calls
vi.mock('../../api/client', () => ({
  submitTranscription: vi.fn(),
}));

it('should call API', async () => {
  const mockSubmit = vi.mocked(submitTranscription);
  mockSubmit.mockResolvedValue({ job_id: '123' });

  // Test component that uses submitTranscription
  // ...
});

// Test store
it('should update store', () => {
  const { result } = renderHook(() => useScoreStore());

  act(() => {
    result.current.setMusicXML('<musicxml>...</musicxml>');
  });

  expect(result.current.musicXML).toBe('<musicxml>...</musicxml>');
});
```
## Mocking Strategy

### Backend
- **External Services**: Mock yt-dlp, Redis, Celery
- **ML Models**: Mock Demucs and basic-pitch for fast tests
- **File System**: Use temporary directories
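The backend items above map naturally onto shared fixtures in `backend/tests/conftest.py`. A minimal sketch; the fixture names (`mock_redis`, `temp_storage_dir`) and the in-memory fake are illustrative assumptions, not the project's actual code:

```python
# conftest.py sketch -- names here are assumptions, not the real fixtures.
from pathlib import Path
from unittest.mock import MagicMock

import pytest


def make_fake_redis() -> MagicMock:
    """In-memory Redis stand-in: records values, never opens a connection."""
    fake = MagicMock()
    store: dict = {}
    # set() stores the value and returns True like a real client
    fake.set.side_effect = lambda key, value, **kw: store.__setitem__(key, value) or True
    # get() reads back from the dict (None when missing)
    fake.get.side_effect = store.get
    return fake


@pytest.fixture
def mock_redis() -> MagicMock:
    return make_fake_redis()


@pytest.fixture
def temp_storage_dir(tmp_path: Path) -> Path:
    """Isolated storage root; pytest removes tmp_path automatically."""
    storage = tmp_path / "storage"
    storage.mkdir()
    return storage
```

Tests that take `mock_redis` or `temp_storage_dir` as arguments then never touch a live server or the real storage tree.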
### Frontend
- **API Calls**: Mock fetch with vitest
- **WebSockets**: Mock WebSocket connections
- **Browser APIs**: Mock Canvas, Audio, localStorage
- **Libraries**: Mock VexFlow, Tone.js
## Best Practices

### General
1. ✅ Write descriptive test names that explain the scenario
2. ✅ Keep tests simple and focused (one thing per test)
3. ✅ Use Arrange-Act-Assert structure
4. ✅ Make tests independent (no shared state)
5. ✅ Clean up resources (files, connections, timers)
6. ✅ Mock external dependencies
7. ✅ Add tests when fixing bugs
8. ✅ Keep test code as clean as production code

### Backend-Specific
- Use pytest fixtures for shared setup
- Mock yt-dlp, Redis, Celery, ML models
- Use temporary directories for file operations
- Mark slow/GPU tests with `@pytest.mark.slow` and `@pytest.mark.gpu`
- Test both success and error paths

### Frontend-Specific
- Test user behavior, not implementation details
- Use accessible queries: `getByRole`, `getByLabelText` (not `getByTestId`)
- Mock API calls and WebSocket connections
- Test loading states and error handling
- Clean up side effects (timers, event listeners)
## Troubleshooting

### Backend

**Import errors**
```bash
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
```

**Redis connection errors** - Always mock Redis unless testing Redis specifically

**GPU tests failing** - Mark with `@pytest.mark.gpu` and skip if unavailable
### Frontend

**Canvas errors** - Mock canvas context in `setup.ts`

**WebSocket errors** - Mock WebSocket in `setup.ts`

**Module import errors** - Use `vi.mock()` at top of test file

**Async timeouts** - Increase timeout: `it('test', async () => { ... }, { timeout: 10000 })`
## Test Performance

**Benchmarks:**
- Unit tests: < 100ms each
- Full backend suite: < 30 seconds
- Full frontend suite: < 20 seconds

**Optimization:**
- Mock expensive operations (ML inference, network calls)
- Use test markers to skip slow tests during development
- Parallelize tests (pytest-xdist for backend, vitest default)
- Cache expensive fixtures
## CI/CD Integration

Tests run automatically on:
- **Pull Requests** - All tests must pass
- **Main Branch** - Full suite including slow tests
- **Nightly** - Extended test suite with real YouTube videos
- **Pre-release** - E2E tests, performance benchmarks

## Detailed Guides

For detailed information, see:
- **[Backend Testing Guide](./backend-testing.md)** - In-depth backend testing patterns and examples
- **[Frontend Testing Guide](./frontend-testing.md)** - In-depth frontend testing patterns and examples
- **[Test Video Collection](./test-videos.md)** - Curated YouTube videos for testing transcription quality

## Resources

- [pytest Documentation](https://docs.pytest.org/)
- [Vitest Documentation](https://vitest.dev/)
- [React Testing Library](https://testing-library.com/react)
- [FastAPI Testing](https://fastapi.tiangolo.com/tutorial/testing/)
docs/testing/test-videos.md
DELETED
@@ -1,371 +0,0 @@
# Test Video Collection

Curated collection of YouTube videos for testing transcription quality and edge cases.

## Table of Contents

- [Simple Piano Tests](#simple-piano-tests)
- [Classical Piano](#classical-piano)
- [Pop Piano Covers](#pop-piano-covers)
- [Jazz Piano](#jazz-piano)
- [Complex/Challenging](#complexchallenging)
- [Edge Cases](#edge-cases)
- [Testing Criteria](#testing-criteria)
## Simple Piano Tests

Use these for basic functionality and quick iteration.

### 1. Twinkle Twinkle Little Star (Beginner Piano)
- **Duration**: ~1 minute
- **Tempo**: Slow (60-80 BPM)
- **Complexity**: Very simple melody, single notes
- **Expected Accuracy**: 95%+
- **Use For**: Smoke tests, basic functionality

### 2. Mary Had a Little Lamb
- **Duration**: ~1 minute
- **Tempo**: Moderate (100 BPM)
- **Complexity**: Simple melody with consistent rhythm
- **Expected Accuracy**: 90%+
- **Use For**: Key signature detection, basic transcription

### 3. Happy Birthday (Piano Solo)
- **Duration**: ~1 minute
- **Tempo**: Moderate (120 BPM)
- **Complexity**: Simple melody with occasional harmony
- **Expected Accuracy**: 85%+
- **Use For**: Time signature detection (3/4 time)
## Classical Piano

Test with well-known classical pieces to verify quality.

### 4. Chopin - Nocturne Op. 9 No. 2
- **Duration**: 4-5 minutes
- **Tempo**: Andante (60-70 BPM)
- **Complexity**: Expressive melody with arpeggiated accompaniment
- **Expected Accuracy**: 75-80%
- **Use For**:
  - Pedal sustain handling
  - Rubato tempo changes
  - Expressive timing

**Challenges**:
- Overlapping notes from pedal
- Tempo fluctuations
- Decorative grace notes

### 5. Beethoven - Für Elise
- **Duration**: 3 minutes
- **Tempo**: Poco moto (120-130 BPM)
- **Complexity**: Famous melody with consistent rhythm
- **Expected Accuracy**: 80-85%
- **Use For**:
  - A minor key signature
  - Repeated patterns
  - Multiple sections

**Challenges**:
- Fast 16th note passages
- Dynamic contrasts

### 6. Mozart - Piano Sonata K. 545 (1st Movement)
- **Duration**: 3-4 minutes
- **Tempo**: Allegro (120-140 BPM)
- **Complexity**: Clear melody with Alberti bass
- **Expected Accuracy**: 75-80%
- **Use For**:
  - C major scale passages
  - Alberti bass pattern recognition
  - Classical form

**Challenges**:
- Fast running passages
- Hand coordination
## Pop Piano Covers

Test with contemporary music to verify modern styles.

### 7. Let It Be (Piano Cover)
- **Duration**: 3-4 minutes
- **Tempo**: Moderate (76 BPM)
- **Complexity**: Block chords with melody
- **Expected Accuracy**: 70-75%
- **Use For**:
  - Chord detection
  - Popular music transcription
  - Mixed rhythm patterns

**Challenges**:
- Dense chords
- Vocal line vs accompaniment

### 8. Someone Like You (Piano Cover)
- **Duration**: 4-5 minutes
- **Tempo**: Slow (67 BPM)
- **Complexity**: Arpeggiated chords with melody
- **Expected Accuracy**: 70-75%
- **Use For**:
  - Sustained notes
  - Emotional expression
  - Modern pop harmony

**Challenges**:
- Overlapping arpeggios
- Pedal sustain

### 9. River Flows in You (Original Piano)
- **Duration**: 3-4 minutes
- **Tempo**: Moderato (110 BPM)
- **Complexity**: Flowing arpeggios with melody
- **Expected Accuracy**: 75-80%
- **Use For**:
  - Continuous motion
  - Pattern recognition
  - Popular instrumental

**Challenges**:
- Rapid note sequences
- Consistent texture
## Jazz Piano

Test improvisation and complex harmony.

### 10. Bill Evans - Waltz for Debby
- **Duration**: 5-7 minutes
- **Tempo**: Moderate waltz (140-160 BPM)
- **Complexity**: Jazz voicings, walking bass, improvisation
- **Expected Accuracy**: 60-70%
- **Use For**:
  - Jazz harmony
  - 3/4 time signature
  - Complex chord voicings

**Challenges**:
- Extended chords (7ths, 9ths, 11ths)
- Improvised passages
- Swing feel

### 11. Oscar Peterson - C Jam Blues
- **Duration**: 3-4 minutes
- **Tempo**: Fast (200+ BPM)
- **Complexity**: Blues progression with virtuosic runs
- **Expected Accuracy**: 55-65%
- **Use For**:
  - Fast tempo handling
  - Blues scale
  - Virtuosic passages

**Challenges**:
- Extremely fast notes
- Grace notes and ornaments
- Complex rhythm
## Complex/Challenging

Stress tests for the transcription system.

### 12. Flight of the Bumblebee (Piano)
- **Duration**: 1-2 minutes
- **Tempo**: Presto (170-200 BPM)
- **Complexity**: Extremely fast chromatic runs
- **Expected Accuracy**: 50-60%
- **Use For**:
  - Stress testing
  - Fast passage detection
  - Chromatic scales

**Challenges**:
- Very fast notes (32nd notes)
- Chromatic passages
- Continuous motion

### 13. Liszt - La Campanella
- **Duration**: 4-5 minutes
- **Tempo**: Allegretto (120 BPM)
- **Complexity**: Virtuosic with wide leaps and rapid passages
- **Expected Accuracy**: 55-65%
- **Use For**:
  - Wide register jumps
  - Repeated notes
  - Virtuosic technique

**Challenges**:
- Octave leaps
- Repeated staccato notes
- Ornamentation

### 14. Rachmaninoff - Prelude in C# Minor
- **Duration**: 3-4 minutes
- **Tempo**: Lento (60 BPM) to Agitato
- **Complexity**: Dense chords, dramatic dynamics
- **Expected Accuracy**: 60-70%
- **Use For**:
  - Heavy chords
  - Dramatic contrasts
  - Multiple voices

**Challenges**:
- 6+ note chords
- Extreme dynamics
- Multiple simultaneous voices
## Edge Cases

Special cases to test error handling and boundaries.

### 15. Prepared Piano / Extended Techniques
- **Use For**: Testing unusual timbres
- **Expected Accuracy**: 30-50%
- **Expected Behavior**: Should handle gracefully

### 16. Piano with Background Noise
- **Use For**: Testing source separation quality
- **Expected Accuracy**: Variable
- **Expected Behavior**: Should isolate piano reasonably

### 17. Poor Audio Quality
- **Use For**: Testing robustness
- **Expected Accuracy**: Reduced
- **Expected Behavior**: Should not crash

### 18. Non-Piano Video (Should Fail Gracefully)
- **Examples**:
  - Drum solo
  - A cappella singing
  - Electronic music
- **Expected Behavior**: Should complete but with poor results
## Testing Criteria

### Accuracy Metrics

**High Priority (Must Work Well)**:
- Note pitch accuracy: 85%+ for simple pieces
- Note onset timing: 80%+ within 50ms
- Note duration: 70%+ within one quantization unit

**Medium Priority (Should Work)**:
- Key signature detection: 80%+ accuracy
- Time signature detection: 75%+ accuracy
- Tempo detection: 70%+ within 10 BPM

**Low Priority (Nice to Have)**:
- Dynamic markings: Not implemented in MVP
- Articulations: Not implemented in MVP
- Pedal markings: Not implemented in MVP
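The pitch/onset targets above imply a matching metric. A rough sketch, not the project's actual scorer: a predicted note counts as correct when its pitch matches a reference note and its onset lands within the 50 ms window, with greedy one-to-one matching (a real evaluation would more likely use `mir_eval.transcription`):

```python
# Toy note-level accuracy: fraction of reference notes matched by a
# predicted note of the same pitch within the onset tolerance.
def note_accuracy(reference, predicted, onset_tol=0.05):
    """reference/predicted: lists of (onset_seconds, midi_pitch) tuples."""
    unmatched = list(predicted)
    hits = 0
    for ref_onset, ref_pitch in reference:
        for cand in unmatched:
            if cand[1] == ref_pitch and abs(cand[0] - ref_onset) <= onset_tol:
                unmatched.remove(cand)  # each prediction may match only once
                hits += 1
                break
    return hits / len(reference) if reference else 1.0
```

For example, with reference `[(0.0, 60), (0.5, 62), (1.0, 64)]` and a prediction whose third onset is 200 ms late, only two of three notes match.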
### Performance Benchmarks

| Video Duration | Target Processing Time (GPU) | Max Processing Time (CPU) |
|----------------|------------------------------|---------------------------|
| 1 minute | < 30 seconds | < 5 minutes |
| 3 minutes | < 2 minutes | < 10 minutes |
| 5 minutes | < 3 minutes | < 15 minutes |
### Success Criteria

A transcription is considered successful if:

1. **Job completes without error**: 95%+ success rate
2. **Basic pitch accuracy**: 70%+ correct notes for simple pieces, 60%+ for complex
3. **Playback sounds recognizable**: User can identify the piece
4. **Usable for editing**: Notation is clean enough to edit and correct
### Quality Grades

**A (90%+ accuracy)**:
- Simple melodies
- Clear recordings
- Slow to moderate tempo
- Minimal harmony

**B (75-89% accuracy)**:
- Standard classical pieces
- Good recordings
- Moderate tempo
- Some harmony

**C (60-74% accuracy)**:
- Complex pieces
- Standard recordings
- Fast tempo or complex harmony
- Multiple voices

**D (50-59% accuracy)**:
- Virtuosic pieces
- Poor recordings
- Very fast or complex
- Jazz/improvisation

**F (< 50% accuracy)**:
- Extended techniques
- Very poor quality
- Non-piano instruments
- Extreme complexity
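The `grade_threshold` helper called in the Automated Testing example is not defined anywhere in this document; one plausible sketch simply maps each letter grade to the minimum accuracy of its band above:

```python
# Hypothetical grade_threshold: letter grade -> minimum accuracy (fraction).
# Cutoffs come from the Quality Grades bands in this document.
GRADE_MIN_ACCURACY = {
    "A": 0.90,
    "B": 0.75,
    "C": 0.60,
    "D": 0.50,
    "F": 0.0,
}


def grade_threshold(grade: str) -> float:
    return GRADE_MIN_ACCURACY[grade.upper()]
```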
## Using Test Videos

### Manual Testing

1. Submit each video URL through the UI
2. Wait for processing to complete
3. Check for errors in each pipeline stage
4. Download and inspect MusicXML output
5. Load in MuseScore or similar to verify quality
6. Note accuracy, timing issues, and artifacts
### Automated Testing

```python
# In tests/test_integration.py
import pytest


@pytest.mark.parametrize("video_id,expected_grade", [
    ("simple_melody", "A"),
    ("fur_elise", "B"),
    ("jazz_piece", "C"),
])
def test_transcription_quality(video_id, expected_grade):
    """Test transcription quality meets expectations."""
    result = transcribe_video(video_id)

    assert result['status'] == 'success'
    accuracy = calculate_accuracy(result['musicxml'])
    assert accuracy >= grade_threshold(expected_grade)
```
### Regression Testing

Maintain a suite of test videos and track accuracy over time:

```bash
# Run regression test suite
python scripts/run_regression_tests.py

# Compare with baseline
python scripts/compare_results.py --baseline v1.0.0 --current HEAD
```
## Maintaining Test Collection

1. **Add new test cases** when bugs are found
2. **Update expected accuracy** as system improves
3. **Remove broken links** and replace with alternatives
4. **Document edge cases** that reveal system limitations
5. **Share results** with team to track progress

## Test Video Sources

When selecting test videos:

- ✅ Use videos with clear audio
- ✅ Prefer solo piano recordings
- ✅ Choose varied difficulty levels
- ✅ Include different musical styles
- ✅ Ensure videos are publicly accessible
- ✅ Respect copyright and fair use
- ❌ Avoid videos with talking/commentary
- ❌ Avoid poor audio quality unless testing robustness
- ❌ Don't use videos over 15 minutes (MVP limit)
frontend/Dockerfile
CHANGED
@@ -7,7 +7,7 @@ WORKDIR /app
 COPY package*.json ./

 # Install dependencies
-RUN npm install
+RUN npm install --legacy-peer-deps

 # Copy application code
 COPY . .
frontend/package.json
CHANGED
@@ -23,7 +23,7 @@
   "devDependencies": {
     "@eslint/js": "^9.39.1",
     "@testing-library/jest-dom": "^6.1.5",
-    "@testing-library/react": "^
+    "@testing-library/react": "^15.0.0",
     "@testing-library/user-event": "^14.5.1",
     "@types/node": "^24.10.1",
     "@types/react": "^19.2.5",
frontend/src/components/JobSubmission.css
CHANGED
@@ -43,21 +43,42 @@ button:hover {

 .progress-container {
   text-align: center;
+  padding: 2rem;
 }

+.progress-container h2 {
+  margin-bottom: 1rem;
+  color: #333;
+}
+
-.progress-
+.progress-bar-container {
   width: 100%;
   height: 30px;
   background-color: #f0f0f0;
   border-radius: 15px;
   overflow: hidden;
-  margin:
+  margin: 1.5rem 0;
+  border: 1px solid #ddd;
 }

-.progress-
+.progress-bar {
   height: 100%;
-  background
+  background: linear-gradient(90deg, #007bff, #0056b3);
   transition: width 0.3s ease;
+  box-shadow: inset 0 2px 4px rgba(0, 0, 0, 0.1);
+}
+
+.progress-message {
+  color: #555;
+  font-size: 1rem;
+  margin: 0.5rem 0;
+  font-weight: 500;
+}
+
+.progress-info {
+  color: #888;
+  font-size: 0.9rem;
+  margin-top: 1rem;
 }

 .progress-text {
frontend/src/components/JobSubmission.tsx
CHANGED
|
@@ -1,8 +1,9 @@
|
|
| 1 |
/**
|
| 2 |
* Job submission form with progress tracking.
|
| 3 |
*/
|
| 4 |
-
import { useState } from 'react';
|
| 5 |
-
import {
|
|
|
|
| 6 |
import './JobSubmission.css';
|
| 7 |
|
| 8 |
interface JobSubmissionProps {
|
|
@@ -12,8 +13,20 @@ interface JobSubmissionProps {
|
|
| 12 |
|
| 13 |
export function JobSubmission({ onComplete, onJobSubmitted }: JobSubmissionProps) {
|
| 14 |
  const [youtubeUrl, setYoutubeUrl] = useState('');
- const [status, setStatus] = useState<'idle' | 'submitting' | 'failed'>('idle');
  const [error, setError] = useState<string | null>(null);

  const validateUrl = (value: string): string | null => {
    try {

@@ -38,13 +51,91 @@ export function JobSubmission({ onComplete, onJobSubmitted }: JobSubmissionProps
  setStatus('submitting');

  try {
-   const response = await
    setYoutubeUrl('');
    if (onJobSubmitted) onJobSubmitted(response);
-   if (onComplete) onComplete(response.job_id);

-   //
-   setStatus('
  } catch (err) {
    setStatus('failed');
    setError(err instanceof Error ? err.message : 'Failed to submit job');

@@ -66,7 +157,7 @@ export function JobSubmission({ onComplete, onJobSubmitted }: JobSubmissionProps
    type="text"
    value={youtubeUrl}
    onChange={(e) => setYoutubeUrl(e.target.value)}
-   placeholder="
    required
    onBlur={() => {
      const validation = validateUrl(youtubeUrl);

@@ -80,6 +171,19 @@ export function JobSubmission({ onComplete, onJobSubmitted }: JobSubmissionProps
  </form>
  )}

  {status === 'failed' && (
    <div className="error-message">
      <h2>❌ Transcription Failed</h2>
 /**
  * Job submission form with progress tracking.
  */
+import { useState, useRef, useEffect } from 'react';
+import { api } from '../api/client';
+import type { ProgressUpdate } from '../api/client';
 import './JobSubmission.css';

 interface JobSubmissionProps {
…
 export function JobSubmission({ onComplete, onJobSubmitted }: JobSubmissionProps) {
   const [youtubeUrl, setYoutubeUrl] = useState('');
+  const [status, setStatus] = useState<'idle' | 'submitting' | 'processing' | 'failed'>('idle');
   const [error, setError] = useState<string | null>(null);
+  const [progress, setProgress] = useState(0);
+  const [progressMessage, setProgressMessage] = useState('');
+  const wsRef = useRef<WebSocket | null>(null);
+
+  // Cleanup WebSocket on unmount
+  useEffect(() => {
+    return () => {
+      if (wsRef.current) {
+        wsRef.current.close();
+      }
+    };
+  }, []);

   const validateUrl = (value: string): string | null => {
     try {
…
   setStatus('submitting');

   try {
+    const response = await api.submitJob(youtubeUrl, { instruments: ['piano'] });
     setYoutubeUrl('');
     if (onJobSubmitted) onJobSubmitted(response);

+    // Switch to processing status and connect WebSocket
+    setStatus('processing');
+    setProgress(0);
+    setProgressMessage('Starting transcription...');
+
+    // Connect WebSocket for progress updates
+    wsRef.current = api.connectWebSocket(
+      response.job_id,
+      (update: ProgressUpdate) => {
+        if (update.type === 'progress') {
+          setProgress(update.progress || 0);
+          setProgressMessage(update.message || `Processing: ${update.stage}`);
+        } else if (update.type === 'completed') {
+          setProgress(100);
+          setProgressMessage('Transcription complete!');
+          if (wsRef.current) {
+            wsRef.current.close();
+            wsRef.current = null;
+          }
+          // Wait a moment to show completion, then switch to editor
+          setTimeout(() => {
+            if (onComplete) onComplete(response.job_id);
+            setStatus('idle');
+          }, 500);
+        } else if (update.type === 'error') {
+          setStatus('failed');
+          setError(update.error?.message || 'Transcription failed');
+          if (wsRef.current) {
+            wsRef.current.close();
+            wsRef.current = null;
+          }
+        }
+      },
+      (error) => {
+        console.error('WebSocket error:', error);
+        setStatus('failed');
+        setError('Connection error. Please try again.');
+      }
+    );
+
+    // Poll for progress updates as fallback (in case WebSocket misses early updates)
+    const pollInterval = setInterval(async () => {
+      try {
+        const jobStatus = await api.getJobStatus(response.job_id);
+        setProgress(jobStatus.progress);
+        setProgressMessage(jobStatus.status_message || 'Processing...');
+
+        if (jobStatus.status === 'completed') {
+          clearInterval(pollInterval);
+          setProgress(100);
+          setProgressMessage('Transcription complete!');
+          if (wsRef.current) {
+            wsRef.current.close();
+            wsRef.current = null;
+          }
+          setTimeout(() => {
+            if (onComplete) onComplete(response.job_id);
+            setStatus('idle');
+          }, 500);
+        } else if (jobStatus.status === 'failed') {
+          clearInterval(pollInterval);
+          setStatus('failed');
+          setError(jobStatus.error?.message || 'Transcription failed');
+          if (wsRef.current) {
+            wsRef.current.close();
+            wsRef.current = null;
+          }
+        }
+      } catch (err) {
+        console.error('Polling error:', err);
+      }
+    }, 1000); // Poll every second
+
+    // Store interval ID for cleanup
+    const currentInterval = pollInterval;
+    return () => {
+      clearInterval(currentInterval);
+      if (wsRef.current) {
+        wsRef.current.close();
+      }
+    };
   } catch (err) {
     setStatus('failed');
     setError(err instanceof Error ? err.message : 'Failed to submit job');
…
     type="text"
     value={youtubeUrl}
     onChange={(e) => setYoutubeUrl(e.target.value)}
+    placeholder="https://www.youtube.com/watch?v=..."
     required
     onBlur={() => {
       const validation = validateUrl(youtubeUrl);
…
   </form>
 )}

+{status === 'processing' && (
+  <div className="progress-container">
+    <h2>Transcribing...</h2>
+    <div className="progress-bar-container">
+      <div className="progress-bar" style={{ width: `${progress}%` }} />
+    </div>
+    <p className="progress-message">{progress}% - {progressMessage}</p>
+    <p className="progress-info">
+      This may take 1-2 minutes. Please don't close this window.
+    </p>
+  </div>
+)}
+
 {status === 'failed' && (
   <div className="error-message">
     <h2>❌ Transcription Failed</h2>
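The component above leans on an `api` client (`submitJob`, `connectWebSocket`, `getJobStatus`) and a `ProgressUpdate` type that live in `frontend/src/api/client.ts`, outside this diff. As a rough sketch only (the field names and shapes here are assumptions inferred from the calls in the diff, not the real client), the WebSocket handler's branching can be factored into a pure helper that is easy to unit-test:

```typescript
// Assumed shape of ProgressUpdate, inferred from how the component reads it.
interface ProgressUpdate {
  type: 'progress' | 'completed' | 'error';
  stage?: string;
  progress?: number;
  message?: string;
  error?: { message: string };
}

type UiStatus = 'idle' | 'submitting' | 'processing' | 'failed';

interface UiState {
  status: UiStatus;
  progress: number;
  progressMessage: string;
  error: string | null;
}

// Pure version of the WebSocket callback: maps one update onto the
// component's state. (In the component, the completed -> idle transition
// happens separately via setTimeout, so it is not modeled here.)
function applyProgressUpdate(state: UiState, update: ProgressUpdate): UiState {
  switch (update.type) {
    case 'progress':
      return {
        ...state,
        progress: update.progress || 0,
        progressMessage: update.message || `Processing: ${update.stage}`,
      };
    case 'completed':
      return { ...state, progress: 100, progressMessage: 'Transcription complete!' };
    case 'error':
      return {
        ...state,
        status: 'failed',
        error: update.error?.message || 'Transcription failed',
      };
  }
}
```

Keeping the branching in a pure function like this (rather than inline in the socket callback) would let the progress/completed/error paths be tested without a live WebSocket, while the component only wires the result into `setState` calls.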
start.sh
ADDED

@@ -0,0 +1,139 @@
#!/bin/bash

# Rescored Startup Script
# Starts all services: Redis, Backend API, Celery Worker, and Frontend

set -e  # Exit on error

# Colors for output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

echo -e "${BLUE}======================================${NC}"
echo -e "${BLUE}  Rescored - Starting All Services${NC}"
echo -e "${BLUE}======================================${NC}"
echo ""

# Check if Redis is running
echo -e "${YELLOW}Checking Redis...${NC}"
if ! redis-cli ping > /dev/null 2>&1; then
  echo -e "${YELLOW}Starting Redis service...${NC}"
  brew services start redis
  sleep 2
  if ! redis-cli ping > /dev/null 2>&1; then
    echo -e "${RED}✗ Failed to start Redis${NC}"
    exit 1
  fi
fi
echo -e "${GREEN}✓ Redis is running${NC}"
echo ""

# Check virtual environment exists
if [ ! -d "backend/.venv" ]; then
  echo -e "${RED}✗ Backend virtual environment not found at backend/.venv${NC}"
  echo -e "${YELLOW}Please set up the backend first (see README.md)${NC}"
  exit 1
fi

# Check frontend dependencies
if [ ! -d "frontend/node_modules" ]; then
  echo -e "${YELLOW}Installing frontend dependencies...${NC}"
  cd frontend
  npm install
  cd ..
  echo -e "${GREEN}✓ Frontend dependencies installed${NC}"
  echo ""
fi

# Check storage directory
if [ ! -d "storage" ]; then
  echo -e "${YELLOW}Creating storage directory...${NC}"
  mkdir -p storage
fi

# Check for YouTube cookies
if [ ! -f "storage/youtube_cookies.txt" ]; then
  echo -e "${YELLOW}⚠️  Warning: YouTube cookies not found at storage/youtube_cookies.txt${NC}"
  echo -e "${YELLOW}   You will need to set this up for video downloads to work${NC}"
  echo -e "${YELLOW}   See README.md for instructions${NC}"
  echo ""
fi

echo -e "${BLUE}Starting services...${NC}"
echo -e "${YELLOW}Press Ctrl+C to stop all services${NC}"
echo ""

# Function to cleanup on exit
cleanup() {
  echo ""
  echo -e "${YELLOW}Stopping all services...${NC}"
  jobs -p | xargs -r kill 2>/dev/null
  echo -e "${GREEN}✓ All services stopped${NC}"
  exit 0
}

trap cleanup SIGINT SIGTERM

# Create logs directory and files
mkdir -p logs
rm -f logs/api.log logs/worker.log logs/frontend.log
touch logs/api.log logs/worker.log logs/frontend.log

# Start Backend API
echo -e "${BLUE}[1/3] Starting Backend API...${NC}"
cd backend
source .venv/bin/activate
uvicorn main:app --host 0.0.0.0 --port 8000 --reload > ../logs/api.log 2>&1 &
API_PID=$!
cd ..
echo -e "${GREEN}✓ Backend API started (PID: $API_PID)${NC}"
echo -e "   Logs: logs/api.log"
echo ""

# Start Celery Worker
echo -e "${BLUE}[2/3] Starting Celery Worker...${NC}"
cd backend
source .venv/bin/activate
# Use --pool=solo for macOS to avoid fork() issues with ML libraries
celery -A tasks worker --loglevel=info --pool=solo > ../logs/worker.log 2>&1 &
WORKER_PID=$!
cd ..
echo -e "${GREEN}✓ Celery Worker started (PID: $WORKER_PID)${NC}"
echo -e "   Logs: logs/worker.log"
echo ""

# Start Frontend
echo -e "${BLUE}[3/3] Starting Frontend...${NC}"
cd frontend
npm run dev > ../logs/frontend.log 2>&1 &
FRONTEND_PID=$!
cd ..
echo -e "${GREEN}✓ Frontend started (PID: $FRONTEND_PID)${NC}"
echo -e "   Logs: logs/frontend.log"
echo ""

# Wait a moment for services to start
sleep 3

echo -e "${GREEN}======================================${NC}"
echo -e "${GREEN}  All Services Running!${NC}"
echo -e "${GREEN}======================================${NC}"
echo ""
echo -e "${BLUE}Services:${NC}"
echo -e "  Frontend:  ${GREEN}http://localhost:5173${NC}"
echo -e "  Backend:   ${GREEN}http://localhost:8000${NC}"
echo -e "  API Docs:  ${GREEN}http://localhost:8000/docs${NC}"
echo ""
echo -e "${BLUE}Logs:${NC}"
echo -e "  API:      tail -f logs/api.log"
echo -e "  Worker:   tail -f logs/worker.log"
echo -e "  Frontend: tail -f logs/frontend.log"
echo ""
echo -e "${YELLOW}Press Ctrl+C to stop all services${NC}"
echo ""

# Wait for all background processes
wait
stop.sh
ADDED

@@ -0,0 +1,18 @@
#!/bin/bash

# Rescored Stop Script
# Stops all running Rescored services

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${YELLOW}Stopping Rescored services...${NC}"

# Kill processes by name
pkill -f "uvicorn main:app" && echo -e "${GREEN}✓ Stopped Backend API${NC}"
pkill -f "celery -A tasks worker" && echo -e "${GREEN}✓ Stopped Celery Worker${NC}"
pkill -f "vite" && echo -e "${GREEN}✓ Stopped Frontend${NC}"

echo -e "${GREEN}All services stopped${NC}"