# Backend Testing Guide
## Overview
The backend test suite ensures the reliability of the audio processing pipeline, API endpoints, Celery tasks, and utility functions. All tests are written using pytest and can be run locally or in CI/CD pipelines.
## Test Structure
```
backend/tests/
├── conftest.py        # Shared fixtures and test configuration
├── test_api.py        # API endpoint tests (21 tests)
├── test_pipeline.py   # Pipeline component tests (14 tests)
├── test_tasks.py      # Celery task tests (9 tests)
└── test_utils.py      # Utility function tests (15 tests)
```
**Total: 59 tests, 27% code coverage**
## Running Tests
### Quick Start
```bash
cd backend
source .venv/bin/activate
# Run all tests
pytest
# Run with coverage report
pytest --cov=. --cov-report=html
# Run specific test file
pytest tests/test_api.py
# Run specific test
pytest tests/test_api.py::TestRootEndpoint::test_root
# Run with verbose output
pytest -v
# Run with short traceback on failures
pytest --tb=short
```
### Test Categories
**API Tests** (`test_api.py`):
- Root endpoint
- Health check (Redis connectivity)
- Transcription submission (validation, rate limiting)
- Job status queries
- Score/MIDI downloads
**Pipeline Tests** (`test_pipeline.py`):
- Function imports and callability
- Pipeline class instantiation
- Required method availability
- Progress callback functionality
**Task Tests** (`test_tasks.py`):
- Celery task execution
- Progress updates
- Error handling and retries
- Job not found scenarios
- Temp file cleanup
**Utility Tests** (`test_utils.py`):
- YouTube URL validation
- Video availability checks
- Error handling for invalid inputs
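To run a single category during development, point pytest at its file, or filter across files by keyword with `-k`:
```bash
# one category only
pytest tests/test_tasks.py -v
# any test whose name matches a keyword, across all files
pytest -k "youtube" -v
```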
## Writing Tests
### Test Fixtures
Common fixtures are defined in `conftest.py`:
```python
# Temporary storage directory
def temp_storage_dir():
    """Create temporary storage directory for tests."""

# Mock Redis client
def mock_redis():
    """Mock Redis client for testing."""

# FastAPI test client
def test_client(mock_redis, temp_storage_dir):
    """Create FastAPI test client with mocked dependencies."""

# Sample job data
def sample_job_id():
    """Generate a sample job ID for testing."""

def sample_job_data(sample_job_id):
    """Sample job data for testing."""

# Sample media files
def sample_audio_file(temp_storage_dir):
    """Create a sample WAV file for testing."""

def sample_midi_file(temp_storage_dir):
    """Create a sample MIDI file for testing."""

def sample_musicxml_content():
    """Sample MusicXML content for testing."""
```
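For reference, here is a minimal sketch of how the first two fixtures could look, assuming pytest's built-in `tmp_path` fixture and a plain `MagicMock` standing in for Redis (the actual `conftest.py` may differ):
```python
import pytest
from unittest.mock import MagicMock


@pytest.fixture
def temp_storage_dir(tmp_path):
    """Illustrative sketch: back the storage directory with pytest's tmp_path."""
    storage = tmp_path / "storage"
    storage.mkdir()
    return storage


@pytest.fixture
def mock_redis():
    """Illustrative sketch: a MagicMock so tests never touch a real Redis server."""
    redis = MagicMock()
    redis.ping.return_value = True
    redis.hgetall.return_value = {}
    return redis
```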
### Example Test
```python
import pytest
from unittest.mock import patch, MagicMock


class TestTranscriptionPipeline:
    """Test the transcription pipeline."""

    @patch('pipeline.TranscriptionPipeline')
    def test_pipeline_runs_successfully(
        self,
        mock_pipeline,
        temp_storage_dir
    ):
        """Test successful pipeline execution."""
        # Setup mock
        mock_instance = MagicMock()
        mock_instance.run.return_value = str(temp_storage_dir / "output.musicxml")
        mock_pipeline.return_value = mock_instance

        # Execute
        result = mock_instance.run()

        # Assert
        assert result.endswith("output.musicxml")
        mock_instance.run.assert_called_once()
```
### Mocking Best Practices
**1. Mock External Dependencies**
Always mock the following (see the sketch after this list):
- Redis connections
- File system operations (when testing logic, not I/O)
- External API calls (yt-dlp, YourMT3+ service)
- Time-dependent operations
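For example, here is a hedged sketch of keeping a yt-dlp call out of a test; `check_video_title` is a toy helper invented for illustration, not a function from this codebase:
```python
from unittest.mock import MagicMock, patch

import yt_dlp


def check_video_title(url: str) -> str:
    """Toy helper for illustration: fetch a video title via yt-dlp."""
    with yt_dlp.YoutubeDL({"quiet": True}) as ydl:
        info = ydl.extract_info(url, download=False)
    return info["title"]


@patch("yt_dlp.YoutubeDL")
def test_check_video_title_without_network(mock_ydl_cls):
    """The yt-dlp call is mocked, so no network traffic happens."""
    mock_ydl = MagicMock()
    mock_ydl.extract_info.return_value = {"title": "Demo Video"}
    mock_ydl_cls.return_value.__enter__.return_value = mock_ydl

    assert check_video_title("https://youtube.com/watch?v=test") == "Demo Video"
    mock_ydl.extract_info.assert_called_once()
```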
**2. Use Proper Patch Targets**
Patch the name where it is imported and used, not where it is defined:
```python
# CORRECT - patch where it's imported
@patch('main.validate_youtube_url')
# WRONG - patch at definition
@patch('app_utils.validate_youtube_url')
```
**3. Create Real Files for Integration Tests**
When testing file operations, create real temp files:
```python
def test_midi_processing(temp_storage_dir):
    midi_file = temp_storage_dir / "test.mid"
    midi_file.write_bytes(b"MThd...")  # Create real file

    result = process_midi(midi_file)
    assert result.exists()
```
## Test Coverage
Current coverage by module:
| Module | Coverage | Notes |
|--------|----------|-------|
| app_config.py | 92% | Configuration loading |
| app_utils.py | 100% | URL validation, video checks |
| main.py | 55% | API endpoints (some error paths untested) |
| tasks.py | 91% | Celery task execution |
| pipeline.py | 5% | Needs integration tests with real ML models |
| tests/*.py | 100% | Test code itself |
**Note**: Low pipeline.py coverage is expected since it requires ML models and GPU. Integration tests should be run separately with real hardware.
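To see exactly which lines a module is missing, scope pytest-cov's `term-missing` report to that module, for example:
```bash
# line-by-line coverage report for main.py only
pytest --cov=main --cov-report=term-missing
```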
## Continuous Integration
### GitHub Actions Example
```yaml
name: Backend Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    services:
      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: |
          cd backend
          pip install -r requirements.txt

      - name: Run tests
        run: |
          cd backend
          pytest --cov=. --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./backend/coverage.xml
```
## Common Testing Patterns
### Testing API Endpoints
```python
def test_submit_transcription(test_client, mock_redis):
    """Test transcription submission."""
    response = test_client.post(
        "/api/v1/transcribe",
        json={"youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"}
    )

    assert response.status_code == 201
    assert "job_id" in response.json()
```
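Validation failures can be checked the same way. FastAPI returns 422 when a required field is missing from the request body, assuming `youtube_url` is a required field of the request model (an assumption here, implied by the validation tests listed above):
```python
def test_submit_transcription_missing_url(test_client):
    """A request without the required field is rejected by validation."""
    response = test_client.post("/api/v1/transcribe", json={})
    assert response.status_code == 422
```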
### Testing Celery Tasks
```python
@patch('tasks.TranscriptionPipeline')
@patch('tasks.redis_client')
def test_task_execution(mock_redis, mock_pipeline):
    """Test Celery task executes successfully."""
    from tasks import process_transcription_task

    # Setup
    mock_redis.hgetall.return_value = {
        'job_id': 'test-123',
        'youtube_url': 'https://youtube.com/watch?v=test'
    }

    # Execute
    process_transcription_task('test-123')

    # Verify
    assert mock_pipeline.called
```
### Testing File Operations
```python
def test_file_cleanup(temp_storage_dir):
    """Test temporary files are cleaned up."""
    temp_file = temp_storage_dir / "temp.wav"
    temp_file.write_bytes(b"test")

    cleanup_temp_files(temp_storage_dir)
    assert not temp_file.exists()
```
## Troubleshooting Tests
### Common Issues
**1. Import Errors**
```bash
# Make sure you're in the venv
source .venv/bin/activate
# Verify pytest is installed
pytest --version
```
**2. Redis Connection Errors**
Tests mock Redis by default. If you see connection errors:
```python
# Check conftest.py has mock_redis fixture
# Ensure test uses the fixture:
def test_something(mock_redis):  # Add this parameter
    ...
```
**3. File Permission Errors**
Temp directories should be writable:
```python
# Use the temp_storage_dir fixture
def test_something(temp_storage_dir):
    file_path = temp_storage_dir / "test.txt"
    file_path.write_text("content")
```
**4. Async Test Errors**
For async tests, use pytest-asyncio:
```python
import pytest


@pytest.mark.asyncio
async def test_async_function():
    result = await some_async_function()
    assert result is not None
```
## Test Performance
**Running all tests**: ~5-10 seconds
- API tests: ~2 seconds
- Pipeline tests: <1 second
- Task tests: ~2 seconds
- Utils tests: <1 second
**Tips for faster tests**:
- Mock expensive operations (ML inference, file I/O)
- Use `pytest -n auto` for parallel execution (requires pytest-xdist; see the example below)
- Run specific test files during development
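For example, assuming pytest-xdist is installed in the venv:
```bash
# spread tests across all CPU cores
pip install pytest-xdist
pytest -n auto

# during development: re-run only the last failures, stop at the first one
pytest tests/test_api.py --lf -x
```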
## Future Improvements
**Needed Tests**:
1. Integration tests with real YourMT3+ model
2. End-to-end tests with actual YouTube videos
3. Performance benchmarks
4. Load testing for concurrent jobs
5. WebSocket connection tests
6. MIDI quantization edge cases
7. MusicXML generation validation
**Coverage Goals**:
- Increase pipeline.py to 40% (integration tests)
- Increase main.py to 80% (all error paths)
- Add performance regression tests
## References
- [pytest Documentation](https://docs.pytest.org/)
- [pytest-asyncio](https://pytest-asyncio.readthedocs.io/)
- [unittest.mock](https://docs.python.org/3/library/unittest.mock.html)
- [FastAPI Testing](https://fastapi.tiangolo.com/tutorial/testing/)