# STLC-AI Deployment Guide

Complete step-by-step instructions for deploying your GenAI Test Automation demo to Hugging Face Spaces.
## Pre-Deployment Checklist
Before starting deployment, ensure you have:
- Hugging Face account created (Sign up here)
- Git installed on your local machine
- All project files ready and tested locally
- OpenAI API key (optional, for production features)
- Project documentation completed
## Step 1: Prepare Your Hugging Face Space

### 1.1 Create New Space

- Navigate to Hugging Face Spaces
- Click "Create new Space"
- Fill in the details:
  - Space name: `stlc-ai-demo` (or your preferred name)
  - License: `MIT`
  - SDK: Select "Gradio"
  - Visibility: "Public" (recommended for demo)
  - Hardware: "CPU basic" (sufficient for this demo)
### 1.2 Initialize Repository

```bash
# Clone your new space repository
git clone https://huggingface.co/spaces/YOUR_USERNAME/stlc-ai-demo
cd stlc-ai-demo

# Verify you're in the correct directory
ls -la
# Should show the .git folder and possibly README.md
```
## Step 2: Upload Project Files

### 2.1 Copy All Project Files

Copy the following files to your space directory:

```bash
# Copy main application files
cp /path/to/your/project/app.py .
cp /path/to/your/project/utils.py .
cp /path/to/your/project/requirements.txt .
cp /path/to/your/project/README.md .

# Copy configuration and data files
cp /path/to/your/project/prompts.yaml .
cp /path/to/your/project/dummy_user_stories.json .
cp /path/to/your/project/test_log_samples.json .

# Optional: Environment file template
cp /path/to/your/project/.env.example .
```
### 2.2 Verify File Structure

```bash
ls -la
# Should show:
# app.py
# utils.py
# requirements.txt
# README.md
# prompts.yaml
# dummy_user_stories.json
# test_log_samples.json
# .git/
```
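To catch a missing file before pushing, you can script the check above. This is a minimal sketch; the `REQUIRED_FILES` list and `check_files` helper are illustrative, not part of the project:

```python
import os

# Files the Space needs at the repository root (per section 2.2)
REQUIRED_FILES = [
    "app.py", "utils.py", "requirements.txt", "README.md",
    "prompts.yaml", "dummy_user_stories.json", "test_log_samples.json",
]

def check_files(directory="."):
    """Return the list of required files missing from `directory`."""
    return [f for f in REQUIRED_FILES
            if not os.path.exists(os.path.join(directory, f))]

if __name__ == "__main__":
    missing = check_files()
    if missing:
        print("Missing files:", ", ".join(missing))
    else:
        print("All required files present.")
```

Run it from the space directory before `git push` to avoid a failed build caused by a file that was never committed.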
### 2.3 Test Files Locally (Optional)

```bash
# Quick test to ensure everything works
python app.py
# Should launch the Gradio interface on localhost:7860
```
## Step 3: Configure Space Settings

### 3.1 Create/Update README.md

Ensure your README.md has the proper front matter for Hugging Face:

```yaml
---
title: STLC-AI Demo
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 4.8.0
app_file: app.py
pinned: false
license: mit
tags:
  - artificial-intelligence
  - testing
  - insurance
  - automation
  - qa
  - gradio
  - demo
---
```
### 3.2 Optimize requirements.txt

Ensure your requirements.txt includes version constraints:

```text
gradio>=4.8.0,<5.0.0
openai>=1.3.0,<2.0.0
pyyaml>=6.0,<7.0
python-dotenv>=1.0.0,<2.0.0
pytest>=7.4.0,<8.0.0
requests>=2.31.0,<3.0.0
pandas>=2.0.0,<3.0.0
numpy>=1.24.0,<2.0.0
jinja2>=3.1.0,<4.0.0
```
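If you want to sanity-check your local environment against these constraints before deploying, a small script can compare pinned names against what is actually installed. This is a sketch under the assumption of Python 3.8+ (`importlib.metadata`); the helper names are illustrative:

```python
import re
from importlib.metadata import PackageNotFoundError, version

def parse_requirement(line):
    """Split a requirements.txt line into (package, constraint)."""
    match = re.match(r"^([A-Za-z0-9_.-]+)\s*(.*)$", line.strip())
    return match.group(1), match.group(2)

def installed_version(package):
    """Return the installed version of `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    for line in ["gradio>=4.8.0,<5.0.0", "pyyaml>=6.0,<7.0"]:
        name, constraint = parse_requirement(line)
        print(f"{name}: constraint {constraint!r}, "
              f"installed {installed_version(name)}")
```

A full constraint evaluation would use the `packaging` library's specifier support; printing the installed version next to the constraint is usually enough to spot a mismatch by eye.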
## Step 4: Environment Variables (Optional)

### 4.1 Set up OpenAI API (Production Mode)

If you want to use the real OpenAI API instead of mock responses:

- Go to your Space settings on Hugging Face
- Navigate to "Repository secrets"
- Add a new secret:
  - Name: `OPENAI_API_KEY`
  - Value: Your OpenAI API key
  - Visibility: Keep it private
### 4.2 Update Code for Production

Modify utils.py to use the real API when available:

```python
import os
from typing import Any, Dict, Optional

from openai import OpenAI

# Check if running in production with an API key
if os.getenv("OPENAI_API_KEY"):
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    USE_REAL_API = True
else:
    USE_REAL_API = False


def simulate_llm_call(prompt: str, context: Optional[Dict[str, Any]] = None) -> str:
    if USE_REAL_API:
        # Use the real OpenAI API
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=1500,
            temperature=0.7,
        )
        return response.choices[0].message.content
    # Use mock responses for the demo
    return generate_mock_response(prompt, context)
```
## Step 5: Deploy to Hugging Face

### 5.1 Commit and Push Files

```bash
# Add all files to git
git add .

# Commit with a descriptive message
git commit -m "Initial deployment: STLC-AI GenAI Test Automation Demo

- Complete Gradio application with interactive UI
- 12 sample insurance domain user stories
- AI-powered BDD generation and test script creation
- Realistic test execution simulation
- Intelligent defect analysis and reporting
- Export functionality for test results
- Comprehensive documentation and examples"

# Push to Hugging Face
git push origin main
```
### 5.2 Monitor Deployment

- Go to your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/stlc-ai-demo`
- Watch the "Building" status in the top banner
- Check the "Logs" tab for any build issues
- Wait for "Running" status (usually 2-5 minutes)
### 5.3 Troubleshoot Common Issues

**Build Failed - Dependencies**

```text
# Check logs for specific package conflicts
# Update requirements.txt with compatible versions
# Common fixes:
gradio==4.8.0    # Pin a specific version
numpy<2.0.0      # Avoid v2.0 compatibility issues
```

**Build Failed - File Not Found**

```bash
# Ensure all files are committed
git status
git add missing_file.py
git commit -m "Add missing file"
git push origin main
```

**Runtime Error - Import Issues**

```text
# Check app.py imports
# Ensure all modules are in requirements.txt
# Verify file paths are relative
```
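Import errors often come from the fact that a PyPI distribution name is not always the module you import (`pyyaml` imports as `yaml`, `python-dotenv` as `dotenv`). A small sketch for checking this locally; the mapping table and helper names are illustrative:

```python
import importlib

# PyPI name -> import name, for packages where the two differ
IMPORT_NAMES = {
    "pyyaml": "yaml",
    "python-dotenv": "dotenv",
    "pillow": "PIL",
}

def import_name(package):
    """Map a requirements.txt package name to its Python module name."""
    return IMPORT_NAMES.get(package.lower(), package.replace("-", "_"))

def can_import(package):
    """True if the package's module imports cleanly in this environment."""
    try:
        importlib.import_module(import_name(package))
        return True
    except ImportError:
        return False
```

Running `can_import` over every entry in requirements.txt before pushing reproduces the Space's import step without waiting for a remote build.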
## Step 6: Post-Deployment Verification

### 6.1 Test Core Functionality

- Open your Space URL
- Select a user story from the dropdown
- Click "Start Test Lifecycle"
- Verify each stage processes correctly:
  - ✅ BDD scenario generation
  - ✅ Test script creation
  - ✅ Execution simulation
  - ✅ Defect summary (for failures)
- Test export functionality
### 6.2 Performance Check
- Response time: Should be <10 seconds per stage
- UI responsiveness: All components should load quickly
- Mobile compatibility: Test on mobile device
- Error handling: Try invalid inputs
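To measure the "<10 seconds per stage" target rather than eyeball it, a timing decorator can wrap each pipeline stage. This is a sketch; the `timed` decorator and the `generate_bdd` stub are illustrative, not part of the project:

```python
import time
from functools import wraps

def timed(fn):
    """Log how long a pipeline stage takes, flagging anything over 10s."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        flag = "SLOW" if elapsed > 10 else "ok"
        print(f"[TIMING] {fn.__name__}: {elapsed:.2f}s ({flag})")
        return result
    return wrapper

@timed
def generate_bdd(story):
    # Stand-in for the real BDD-generation stage
    return f"Scenario for: {story}"
```

Wrapping each stage this way surfaces slow steps in the Space logs without changing any return values.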
### 6.3 Update Space Description

- Go to your Space settings
- Update the description:

```text
🤖 AI-powered Software Test Life Cycle automation demo for insurance systems.

Features:
• Convert user stories to BDD scenarios
• Generate Python test scripts automatically
• Simulate test execution with realistic results
• Provide intelligent defect analysis
• Export comprehensive test documentation

Perfect demonstration of GenAI in QA automation!
```
## Step 7: Space Optimization

### 7.1 Performance Optimization

```python
import gradio as gr

# In app.py, optimize for the Spaces environment
demo = gr.Blocks(
    theme=gr.themes.Soft(),
    title="STLC-AI: GenAI Test Automation",
    css="Custom CSS here...",
    analytics_enabled=False,  # Disable for faster loading
)

# Optimize launch parameters
if __name__ == "__main__":
    demo.launch(
        server_name="0.0.0.0",
        server_port=7860,
        show_error=True,  # Show helpful errors
        quiet=False,
    )
```

Note that `share=True` is unnecessary on Spaces (the Space URL is already public), and `show_error` belongs to `launch()`, not the `Blocks` constructor.
### 7.2 Add Analytics (Optional)

```python
# Track usage with simple analytics
from datetime import datetime

def log_usage(action, user_story_id=None):
    timestamp = datetime.now().isoformat()
    print(f"[ANALYTICS] {timestamp}: {action} - Story: {user_story_id}")
```
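If you want events persisted rather than only printed to the Space logs, appending one JSON object per line (JSONL) is a simple extension. A sketch; the function name and file path are illustrative, and note that files written inside a Space are ephemeral across rebuilds:

```python
import json
from datetime import datetime, timezone

def log_usage_to_file(action, user_story_id=None, path="usage_log.jsonl"):
    """Append one usage event per line as JSON (JSONL)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "user_story_id": user_story_id,
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(event) + "\n")
    return event
```

JSONL keeps each event independently parseable, so a partial write at worst corrupts the last line, not the whole log.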
### 7.3 SEO and Discoverability

Update your Space's metadata:

- Add relevant tags: `artificial-intelligence`, `testing`, `automation`
- Create an engaging thumbnail: Upload a custom image
- Write a compelling description: Highlight key benefits
- Pin to profile: Make it easily discoverable
## Step 8: Share Your Demo

### 8.1 Test the Public URL

- Share with colleagues: `https://huggingface.co/spaces/YOUR_USERNAME/stlc-ai-demo`
- Test from different devices and networks
- Verify all functionality works in production
### 8.2 Create Marketing Materials

```markdown
## Just Launched: STLC-AI Demo!

Experience the future of QA automation with AI:

• Upload insurance user stories
• Watch AI generate BDD scenarios
• Get complete Python test scripts
• See intelligent defect analysis

Try it live: [Your Space URL]
```
### 8.3 Gather Feedback
- Add feedback mechanism in your app
- Monitor Space analytics and usage
- Iterate based on user feedback
## Maintenance & Updates

### Update Process

```bash
# Pull latest changes
git pull origin main

# Make your updates
# ... edit files ...

# Test locally
python app.py

# Deploy updates
git add .
git commit -m "Update: [description of changes]"
git push origin main
```
### Monitoring
- Check Space status regularly
- Monitor build logs for warnings
- Track usage analytics
- Update dependencies periodically
## Troubleshooting Guide

### Common Issues and Solutions

**Space Not Loading**

```text
# Check build logs in the HF interface
# Common issues:
# 1. Requirements conflicts
# 2. File path errors
# 3. Port binding issues
```

**Slow Performance**

```text
# Optimize large data files
# Reduce model complexity
# Cache common responses
# Upgrade to a better hardware tier
```

**API Quotas Exceeded**

```text
# Implement rate limiting
# Add error handling for API failures
# Fall back to mock responses
```
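The three quota mitigations can be combined in one wrapper: throttle real API calls, and fall back to the mock generator when throttled or when the real call fails. A minimal sketch; the `RateLimitedLLM` class and the injected `real_call`/`mock_call` callables are illustrative, not part of the project:

```python
import time

class RateLimitedLLM:
    """Throttle real API calls; fall back to mock on failure or over-limit."""

    def __init__(self, real_call, mock_call, min_interval=1.0,
                 clock=time.monotonic):
        self.real_call = real_call      # e.g. the OpenAI-backed function
        self.mock_call = mock_call      # e.g. generate_mock_response
        self.min_interval = min_interval  # seconds between real calls
        self.clock = clock              # injectable for testing
        self._last_call = float("-inf")

    def __call__(self, prompt):
        now = self.clock()
        if now - self._last_call < self.min_interval:
            return self.mock_call(prompt)   # over the limit: use the mock
        try:
            self._last_call = now
            return self.real_call(prompt)
        except Exception:                   # quota, timeout, network errors
            return self.mock_call(prompt)
```

Injecting the clock makes the limiter deterministic to test, and catching broadly here is deliberate: for a demo, any real-API failure should degrade to the mock rather than surface an error to the user.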
## Success Checklist
- Space builds successfully
- All UI components load properly
- User story selection works
- BDD generation displays correctly
- Test script generation works
- Execution simulation runs
- Defect summaries appear for failures
- Export functionality works
- Mobile responsiveness verified
- Error handling tested
- Performance acceptable (<10s response)
- Documentation complete
- Space description updated
- Tags and metadata configured
Congratulations! Your STLC-AI demo is now live and ready to showcase the power of GenAI in test automation!