# API Test Suite Documentation

## Overview

The Secure AI Agents Suite includes a comprehensive API testing framework that validates all external service integrations, ensuring your system works correctly with the OpenAI, Google ML, ElevenLabs, and Modal platforms.
## Quick Start

### 1. Setup Configuration

```bash
# Copy the configuration template
cp api_test_config.yaml my_config.yaml

# Edit with your API keys
nano my_config.yaml
```
### 2. Run All Tests

```bash
# Run all API tests
python test_runner.py

# Run a specific test
python test_runner.py --test openai

# Use a custom config
python test_runner.py --config my_config.yaml
```
### 3. View Results

```bash
# View summary
python test_runner.py --show-summary

# Validate config only
python test_runner.py --validate-only
```
## Configuration

### API Keys Required

Edit `api_test_config.yaml` with your actual API keys:

```yaml
# OpenAI Configuration
openai:
  api_key: "sk-your-openai-api-key-here"
  base_url: "https://api.openai.com/v1"
  timeout: 30
  rate_limit: 100

# Google ML Configuration
google:
  api_key: "your-google-api-key-here"
  base_url: "https://generativelanguage.googleapis.com/v1"
  timeout: 30
  rate_limit: 100

# ElevenLabs Configuration
elevenlabs:
  api_key: "your-elevenlabs-api-key-here"
  base_url: "https://api.elevenlabs.io/v1"
  timeout: 60
  rate_limit: 100

# Modal Configuration
modal:
  api_key: "your-modal-api-key-here"
  base_url: "https://modal.com/api"
  timeout: 30
  rate_limit: 100
```
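The parsed config can be sanity-checked before any test runs. A minimal sketch, assuming the config has already been loaded into a dict (e.g. via `yaml.safe_load`); `validate_config`, `PROVIDERS`, and the sample dict are illustrative, not part of the shipped runner:

```python
# Sketch: validate a parsed config dict before running any tests.
# Field and section names follow the template above.
REQUIRED_FIELDS = {"api_key", "base_url", "timeout", "rate_limit"}
PROVIDERS = ("openai", "google", "elevenlabs", "modal")

def validate_config(config):
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    for provider in PROVIDERS:
        section = config.get(provider)
        if section is None:
            problems.append("missing section: " + provider)
            continue
        for field in sorted(REQUIRED_FIELDS - set(section)):
            problems.append(provider + ": missing field " + field)
        # Catch keys left at their template placeholder values.
        if "your-" in str(section.get("api_key", "")):
            problems.append(provider + ": api_key is still the placeholder")
    return problems

config = {
    "openai": {"api_key": "sk-abc", "base_url": "https://api.openai.com/v1",
               "timeout": 30, "rate_limit": 100},
    "google": {"api_key": "your-google-api-key-here",
               "base_url": "https://generativelanguage.googleapis.com/v1",
               "timeout": 30, "rate_limit": 100},
}
print(validate_config(config))
```

Running this against a complete config with real keys returns an empty list.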
### Test Configuration

Customize test behavior:

```yaml
test_config:
  # Minimum success rate to consider tests passing (percentage)
  success_rate_threshold: 80

  # Timeout for individual requests (seconds)
  request_timeout: 30

  # Maximum retries for failed requests
  max_retries: 3

  # Performance thresholds (in milliseconds)
  performance_thresholds:
    openai_text_generation: 5000
    google_text_generation: 5000
    elevenlabs_tts: 10000
    modal_execution: 30000
```
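As a sketch of how these thresholds might be applied: the helper below compares a measured duration (in seconds, as reported in the results file) against the millisecond thresholds above. `within_threshold` is a hypothetical name, not runner API:

```python
# Sketch: check a measured duration against the configured per-test
# thresholds. Values mirror the performance_thresholds template above.
THRESHOLDS_MS = {
    "openai_text_generation": 5000,
    "google_text_generation": 5000,
    "elevenlabs_tts": 10000,
    "modal_execution": 30000,
}

def within_threshold(test_name, duration_s):
    """True if the measured duration (seconds) meets the ms threshold."""
    return duration_s * 1000 <= THRESHOLDS_MS[test_name]

print(within_threshold("openai_text_generation", 2.45))  # 2450 ms vs 5000 ms
print(within_threshold("elevenlabs_tts", 12.0))          # 12000 ms vs 10000 ms
```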
## Test Categories

### OpenAI Tests

- **Connection Test**: Validates API connectivity and authentication
- **Text Generation Test**: Tests GPT models with sample prompts
- **Batch Processing Test**: Validates multiple concurrent requests

### Google ML Tests

- **Connection Test**: Validates Google Generative AI API access
- **Text Generation Test**: Tests various Google AI models

### ElevenLabs Tests

- **Connection Test**: Validates voice API connectivity
- **Text-to-Speech Test**: Tests audio generation capabilities
- **Voice Cloning Test**: Validates custom voice functionality

### Modal Tests

- **Connection Test**: Validates Modal platform access
- **Function Deployment Test**: Tests serverless function deployment
## Usage Examples

### Basic Usage

```bash
# Run all tests with default settings
python test_runner.py

# Run only OpenAI tests
python test_runner.py --test openai

# Run with custom config and output file
python test_runner.py --config production_config.yaml --output results_production.json
```

### CI/CD Integration

```bash
# Use in automated pipelines
python test_runner.py --config ci_config.yaml --output ci_results.json

# Check for configuration errors
python test_runner.py --validate-only
```

### Performance Monitoring

```bash
# Run all tests and save detailed results
python test_runner.py --output performance_report_$(date +%Y%m%d).json

# Run specific performance tests
python test_runner.py --test elevenlabs
```
## Results Format

Test results are saved in JSON format with the following structure:

```json
{
  "summary": {
    "timestamp": "2025-11-30T09:49:17.197Z",
    "total_tests": 12,
    "passed_tests": 11,
    "failed_tests": 1,
    "success_rate": 91.7,
    "total_duration": 45.23,
    "api_availability": {
      "openai": true,
      "google": true,
      "elevenlabs": true,
      "modal": true
    }
  },
  "test_results": {
    "openai_connection": {
      "success": true,
      "duration": 1.23,
      "message": "Connection successful"
    },
    "openai_text_generation": {
      "success": true,
      "duration": 2.45,
      "message": "Text generation successful",
      "performance_score": 85
    }
  },
  "performance_metrics": {
    "average_response_time": 2.1,
    "success_rate": 91.7,
    "api_uptime": 100.0
  }
}
```
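A minimal sketch of consuming this structure, assuming only the key names shown above; the pass/fail decision mirrors the default 80% `success_rate_threshold`:

```python
# Sketch: load a results payload and decide pass/fail at the 80% threshold.
import json

raw = """{
  "summary": {"total_tests": 12, "passed_tests": 11,
              "failed_tests": 1, "success_rate": 91.7},
  "performance_metrics": {"average_response_time": 2.1}
}"""

results = json.loads(raw)
summary = results["summary"]
passed = summary["success_rate"] >= 80
print("%d/%d passed -> %s" % (summary["passed_tests"],
                              summary["total_tests"],
                              "OK" if passed else "FAIL"))
```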
## Interpreting Results

### Success Indicators

- **Success Rate**: Aim for >80% for production readiness
- **Response Time**: Should be within configured thresholds
- **API Availability**: All APIs should be accessible

### Performance Thresholds

- **OpenAI Text Generation**: <5 seconds
- **Google Text Generation**: <5 seconds
- **ElevenLabs TTS**: <10 seconds
- **Modal Execution**: <30 seconds
### Troubleshooting

#### Common Issues

**1. API Key Errors**

```bash
# Check configuration
python test_runner.py --validate-only

# Verify keys in the configuration file
grep api_key api_test_config.yaml
```
**2. Network Connectivity**

```bash
# Test individual API connectivity
python test_runner.py --test openai

# Check internet connection
ping api.openai.com
```
**3. Rate Limiting**

- Reduce `rate_limit` in the configuration
- Implement exponential backoff
- Monitor API usage quotas
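The backoff suggestion above can be sketched with jittered exponential delays; `with_backoff` and the simulated `flaky` call are illustrative, not runner API:

```python
# Sketch: retry a rate-limited call with exponential backoff and jitter.
import random
import time

def with_backoff(call, max_retries=3, base_delay=1.0):
    """Retry `call` with exponentially growing, jittered delays."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the last error
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
            time.sleep(delay)

# Simulated flaky call: fails twice (as a 429 would), then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))
```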
**4. Performance Issues**

- Check network latency
- Verify API service status
- Review configuration timeouts
## Best Practices

### Security

- Never commit API keys to version control
- Use environment variables for sensitive data
- Regularly rotate API keys
- Monitor API usage for anomalies
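For example, keys can be read from the environment rather than hard-coded in the YAML; the variable name and `load_key` helper below are illustrative:

```python
# Sketch: read an API key from the environment instead of the config file.
import os

def load_key(env_var, fallback=""):
    """Return the key from the environment, or raise if unset."""
    key = os.environ.get(env_var, fallback)
    if not key:
        raise RuntimeError(env_var + " is not set")
    return key

os.environ["OPENAI_API_KEY"] = "sk-demo"  # for demonstration only
print(load_key("OPENAI_API_KEY"))
```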
### Monitoring

- Run tests regularly (daily/weekly)
- Track performance trends over time
- Set up alerts for test failures
- Monitor API rate limits and quotas
### CI/CD Integration

```yaml
# Example GitHub Actions step
- name: API Tests
  run: |
    python test_runner.py --config ci_config.yaml --output test_results.json
    if ! jq -e '.summary.success_rate >= 80' test_results.json > /dev/null; then
      echo "API tests failed"
      exit 1
    fi
```

Note that `success_rate` is a float, so the comparison is done inside `jq` (via its `-e` exit status) rather than with the shell's integer-only `-lt`.
## Advanced Usage

### Custom Test Cases

Extend the test suite by adding custom test methods in `api_test_suite.py`:

```python
def test_custom_functionality(self):
    """Test custom API functionality."""
    try:
        # Your custom test logic here
        result = self.make_request("your_custom_endpoint")
        return {
            'success': True,
            'duration': result['duration'],
            'message': 'Custom test passed'
        }
    except Exception as e:
        return {
            'success': False,
            'error': str(e),
            'duration': 0
        }
```
### Performance Monitoring

Track API performance over time:

```python
# Add to your monitoring script
import json
from datetime import datetime

def track_performance(results_file):
    with open(results_file, 'r') as f:
        results = json.load(f)

    metrics = {
        'timestamp': datetime.now().isoformat(),
        'success_rate': results['summary']['success_rate'],
        'avg_response_time': results['performance_metrics']['average_response_time']
    }

    # Persist to your metrics store of choice, e.g.:
    # save_metrics(metrics)
    return metrics
```
## Support

For issues or questions:

1. Check the troubleshooting section above
2. Review API service status pages
3. Validate your configuration using `--validate-only`
4. Run individual tests to isolate problems

---

**Last Updated**: November 30, 2025
**Version**: 1.0.0
**Compatibility**: Python 3.8+