Commit db70c95 · Parent(s): 5fbd6f8
Parimal Kalpande committed: deploy
Files changed:
- .dockerignore +9 -0
- .gitignore +29 -0
- DEPLOYMENT.md +156 -0
- DOCKERFILE +25 -0
- README.md +100 -6
- README_HF.md +16 -0
- app.py +354 -0
- app_readme.md +61 -0
- config.py +54 -0
- deploy.sh +71 -0
- modules/__init__.py +0 -0
- modules/doc_processor.py +29 -0
- modules/llm_handler.py +385 -0
- modules/pm_frameworks.py +161 -0
- modules/report_generator.py +443 -0
- modules/stt_handler.py +87 -0
- modules/tts_handler.py +62 -0
- modules/web_search.py +33 -0
- packages.txt +2 -0
- requirements.txt +12 -0
- test_deployment.py +138 -0
- test_report.py +39 -0
- voice_model/en_US-lessac-medium.onnx +3 -0
- voice_model/en_US-lessac-medium.onnx.json +493 -0
.dockerignore ADDED
@@ -0,0 +1,9 @@
__pycache__/
*.pyc
*.pyo
*.pyd
venv/
reports/
uploads/
.git/
.env
.gitignore ADDED
@@ -0,0 +1,29 @@
# Python virtual environment
venv/
/venv/
__pycache__/
*.pyc
*.pyo
*.pyd

# User-specific files
uploads/
reports/

# Configuration and secrets
# IMPORTANT: This prevents your API keys from being uploaded to GitHub
.env

# IDE and editor files
.vscode/
.idea/
*.swp
*.swo

# OS-specific files
.DS_Store
Thumbs.db

# Matplotlib cache
*.png
!/path/to/keep/some/images/
DEPLOYMENT.md ADDED
@@ -0,0 +1,156 @@
# 🚀 Deploying AI Product Coach to Hugging Face Spaces

## Step-by-Step Deployment Guide

### 1. Prepare Your Repository

Make sure you have these files in your project:
- `app.py` - Main application file
- `requirements.txt` - Python dependencies
- `README.md` - Project documentation (with HF metadata header)
- `config.py` - Configuration file
- `modules/` - All module files
- `voice_model/` - TTS model files

### 2. Create Hugging Face Space

1. Go to [huggingface.co/spaces](https://huggingface.co/spaces)
2. Click "Create new Space"
3. Fill in the details:
   - **Space name**: `ai-product-coach` (or your choice)
   - **License**: MIT
   - **Select the SDK**: Gradio
   - **Hardware**: CPU (free tier is sufficient)
   - **Visibility**: Public

### 3. Upload Files

**Option A: Git Upload (Recommended)**
```bash
# Clone your new space
git clone https://huggingface.co/spaces/YOUR_USERNAME/ai-product-coach
cd ai-product-coach

# Copy all your files to this directory
cp -r /path/to/your/project/* .

# Add the HF metadata to README.md (first lines)
---
title: AI Product Coach
emoji: 🎯
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
---

# Push to HuggingFace
git add .
git commit -m "Initial deployment of AI Product Coach"
git push
```

**Option B: Web Upload**
- Use the HuggingFace web interface to drag and drop files
- Make sure to upload all directories and files

### 4. Configure Environment Variables

1. Go to your Space settings
2. Navigate to "Variables and secrets"
3. Add these environment variables:
   - **Name**: `GROQ_API_KEY`
   - **Value**: Your Groq API key from [console.groq.com](https://console.groq.com/keys)

### 5. Required File Structure

Your Hugging Face Space should have:
```
├── app.py
├── requirements.txt
├── README.md (with metadata header)
├── config.py
├── modules/
│   ├── __init__.py
│   ├── doc_processor.py
│   ├── llm_handler.py
│   ├── report_generator.py
│   ├── stt_handler.py
│   ├── tts_handler.py
│   └── web_search.py
└── voice_model/
    ├── en_US-lessac-medium.onnx
    └── en_US-lessac-medium.onnx.json
```

### 6. Key Changes for HuggingFace

✅ **Already Done:**
- Removed `pyaudio` from requirements (not compatible with HF)
- Updated app launch configuration to detect HF environment
- Added proper metadata header format
- Configured for 0.0.0.0 hosting when in HF environment

### 7. Testing & Debugging

1. **Check Build Logs**: Monitor the build process in your Space
2. **Test Features**:
   - Text input/output
   - File upload (resume)
   - PDF report generation
3. **Audio Features**: Note that audio recording may have limitations in web browsers

### 8. Post-Deployment

1. **Share Your Space**: Get the URL `https://huggingface.co/spaces/YOUR_USERNAME/SPACE_NAME`
2. **Add to Portfolio**: Include in your projects and CV
3. **Community**: Consider adding to HF collections or model cards

## 🔧 Troubleshooting

### Common Issues:

**Build Failures:**
- Check requirements.txt for incompatible packages
- Verify all file paths are relative
- Ensure no missing imports

**API Errors:**
- Verify GROQ_API_KEY is set correctly
- Check API quota limits
- Test API key in local environment first

**Audio Issues:**
- Audio recording requires HTTPS (HF provides this)
- Some browsers may block microphone access
- Fallback to text input is always available

**Memory Issues:**
- Consider using CPU-optimized models
- Implement proper cleanup in functions
- Monitor Space resource usage

## 🎯 Success Checklist

- [ ] Space builds without errors
- [ ] All coaching types work
- [ ] Resume upload functions
- [ ] PDF reports generate
- [ ] Text interaction works
- [ ] API key configured
- [ ] Error handling works
- [ ] Professional appearance

## 🌟 Optional Enhancements

After successful deployment, consider:
- Adding more coaching areas
- Implementing user analytics
- Adding feedback collection
- Creating demo videos
- Writing blog post about the project

Your AI Product Coach is now ready to help users prepare for PM interviews! 🎉
DOCKERFILE ADDED
@@ -0,0 +1,25 @@
# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt .

# Install any needed system dependencies (like for audio)
RUN apt-get update && apt-get install -y --no-install-recommends \
    ffmpeg \
    && rm -rf /var/lib/apt/lists/*

# Install the Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application's code into the container
COPY . .

# Expose the port that Gradio runs on
EXPOSE 7860

# Define the command to run your application
CMD ["python3", "app.py"]
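The Dockerfile above can be exercised locally before pushing the Space. A minimal sketch, assuming the image tag `ai-product-coach` (illustrative) and that `GROQ_API_KEY` is already exported in your shell:

```bash
# Build the image from the repository root (where the Dockerfile lives)
docker build -t ai-product-coach .

# Run it, mapping Gradio's port and passing the API key through
docker run --rm -p 7860:7860 -e GROQ_API_KEY="$GROQ_API_KEY" ai-product-coach
```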
README.md CHANGED
@@ -1,12 +1,106 @@
 ---
-title: Product
-emoji:
-colorFrom:
-colorTo:
+title: AI Product Coach
+emoji: 🎯
+colorFrom: blue
+colorTo: purple
 sdk: gradio
-sdk_version:
+sdk_version: 4.44.0
 app_file: app.py
 pinned: false
+license: mit
+short_description: AI-powered PM interview preparation with personalized coaching
+tags:
+  - product-management
+  - interview-prep
+  - ai-coaching
+  - career-development
+  - gradio
 ---
 
-
+# 🎯 AI Product Coach - PM Interview Preparation
+
+An AI-powered coaching application designed to help aspiring Product Managers prepare for interviews through realistic scenarios and personalized feedback.
+
+## 🌟 Features
+
+- **11 Coaching Areas**: Product Strategy, Market Research, UX Design, Roadmap Planning, Metrics & Analytics, and more
+- **Resume-Based Personalization**: Upload your resume for personalized interview scenarios
+- **Voice Interaction**: Record audio responses for more realistic interview practice
+- **Professional Reports**: Generate comprehensive PDF coaching reports with visual analysis
+- **Real-time Feedback**: Get detailed evaluation and improvement suggestions
+- **Realistic Scenarios**: Practice with 55+ authentic PM interview questions
+
+## 🚀 How to Use
+
+1. **Select Focus Area**: Choose from 11 product management coaching areas
+2. **Upload Resume** (Optional): Get personalized questions based on your background
+3. **Practice**: Answer interview questions via text or voice
+4. **Get Feedback**: Receive detailed coaching analysis and scores
+5. **Download Report**: Generate professional PDF reports for review
+
+## 🛠️ Setup & Configuration
+
+### Hugging Face Spaces Deployment
+
+This app is configured to run on Hugging Face Spaces. You'll need:
+
+1. A Groq API key (free at [console.groq.com](https://console.groq.com/keys))
+2. Set the `GROQ_API_KEY` environment variable in your Space settings
+
+### Local Development
+
+1. Clone the repository
+2. Install requirements: `pip install -r requirements.txt`
+3. Create `.env` file with your `GROQ_API_KEY`
+4. Run: `python app.py`
+
+## 📋 Coaching Areas
+
+- **Product Strategy & Vision** - Strategic thinking and vision articulation
+- **Market Research & Analysis** - Data analysis and market understanding
+- **User Experience & Design Thinking** - User empathy and design process
+- **Product Roadmap Planning** - Prioritization and planning skills
+- **Metrics & Analytics** - Data literacy and analytical thinking
+- **Stakeholder Management** - Communication and relationship building
+- **Product Launch Strategy** - Execution and cross-functional leadership
+- **Competitive Analysis** - Market analysis and opportunity recognition
+- **Feature Prioritization** - Decision making and framework application
+- **Customer Development** - Customer empathy and insight generation
+- **Resume & Application Strategy** - Personal branding and interview preparation
+
+## 🤖 AI Models Used
+
+- **Groq (llama3-8b-8192)** - Question generation and response evaluation
+- **Whisper** - Speech-to-text transcription
+- **System TTS** - Text-to-speech for question delivery
+
+## 📊 Report Features
+
+- Professional PDF formatting
+- Visual progress charts
+- Detailed scenario analysis
+- Personalized improvement recommendations
+- Scoring across key PM competencies
+
+## 🔧 Technical Stack
+
+- **Frontend**: Gradio 4.x
+- **AI/ML**: Groq API, OpenAI Whisper
+- **Document Processing**: PyMuPDF, python-docx
+- **Report Generation**: ReportLab, Matplotlib
+- **Audio Processing**: SpeechRecognition, PyAudio
+
+## 🎯 Target Users
+
+- Aspiring Product Managers preparing for interviews
+- Current PMs looking to improve interview skills
+- Career changers transitioning into Product Management
+- Students studying product management
+
+## 🤝 Contributing
+
+Feel free to open issues or submit pull requests to improve the coaching experience!
+
+## 📄 License
+
+This project is open source and available under the MIT License.
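Both the Spaces and local setup paths in the README hinge on `GROQ_API_KEY` being visible to the process. A minimal sketch of that lookup (the helper name is illustrative; the real logic lives in `config.py`):

```python
import os

def load_groq_api_key():
    """Return the Groq key from the environment, or None if unset.

    On Hugging Face Spaces, entries under "Variables and secrets" are
    injected as ordinary environment variables, so the same lookup
    works both locally and inside a Space.
    """
    return os.environ.get("GROQ_API_KEY")
```

For local development, a `.env` loader (e.g. python-dotenv) would populate the environment before this lookup runs.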
README_HF.md ADDED
@@ -0,0 +1,16 @@
title: AI Product Coach
emoji: 🎯
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: AI-powered PM interview preparation with personalized coaching
tags:
  - product-management
  - interview-prep
  - ai-coaching
  - career-development
  - gradio
app.py ADDED
@@ -0,0 +1,354 @@
# app.py
import gradio as gr
import os
import time
import datetime
import random
import config
from modules.tts_handler import text_to_speech_file
from modules.stt_handler import transcribe_audio
from modules.doc_processor import extract_text_from_document
from modules.llm_handler import generate_coaching_question, evaluate_response, generate_coaching_feedback, get_overall_score, parse_scores_from_evaluation, get_score_interpretation
from modules.report_generator import generate_pdf_report

def test_recording():
    """Simple test to enable recording"""
    return gr.update(interactive=True), "✅ Recording enabled! Try recording something now."

def start_coaching_session(coaching_type, doc_file, name, num_questions):
    if not coaching_type:
        return (
            {},  # state
            [{"role": "assistant", "content": "Please select a product management focus area to begin your coaching session."}],  # chatbot
            None,  # audio_out
            gr.update(interactive=False),  # audio_in
            gr.update(interactive=True)  # start_btn
        )

    doc_text = ""
    if doc_file:
        doc_text = extract_text_from_document(doc_file.name)
        if "Error" in doc_text or "Unsupported" in doc_text:
            return (
                {},  # state
                [{"role": "assistant", "content": f"Error processing document: {doc_text}. Continuing without document context."}],  # chatbot
                None,  # audio_out
                gr.update(interactive=False),  # audio_in
                gr.update(interactive=True)  # start_btn
            )

    initial_state = {
        "coaching_type": coaching_type,
        "doc_text": doc_text,
        "name": name if name else "Product Manager",
        "question_count": int(num_questions),
        "current_question_num": 1,
        "coaching_log": []
    }

    first_question = generate_coaching_question(coaching_type, doc_text, 1)
    initial_state["current_question_text"] = first_question
    greeting = f"Hello {initial_state['name']}! Welcome to your personal AI product coaching session on {coaching_type}. We'll explore {int(num_questions)} scenarios together. Let's start with the first one:"
    full_message = f"{greeting}\n\n{first_question}\n\n🎙️ Click the microphone button above to record your response!"

    tts_prompt = f"{greeting} {first_question}"
    ai_voice_path = text_to_speech_file(tts_prompt)

    return (
        initial_state,  # state
        [{"role": "assistant", "content": full_message}],  # chatbot
        ai_voice_path,  # audio_out
        gr.update(interactive=True),  # audio_in - ENABLE recording
        gr.update(interactive=False)  # start_btn - disable after starting
    )

def handle_coaching_turn(user_audio, chatbot_history, current_state):
    """Handle each coaching turn - one question at a time with immediate feedback"""

    print(f"🎙️ Audio recording received: {user_audio}")
    print(f"📝 Current state: {current_state is not None}")
    print(f"💬 Chat history length: {len(chatbot_history) if chatbot_history else 0}")

    # Check if we have valid inputs
    if not user_audio:
        print("❌ No audio received - returning early")
        return chatbot_history, current_state, gr.update(interactive=True), gr.update(visible=False), None

    if not current_state or "current_question_text" not in current_state:
        print("❌ Invalid session state - returning early")
        return chatbot_history, current_state, gr.update(interactive=True), gr.update(visible=False), None

    print(f"✅ Processing audio file: {user_audio}")

    # Transcribe user response
    try:
        user_response_text = transcribe_audio(user_audio)
        print(f"✅ Transcribed: {user_response_text[:100]}...")

        if user_response_text.startswith("Sorry, I") or "error" in user_response_text.lower():
            # Transcription failed, return error message
            if not chatbot_history:
                chatbot_history = []
            chatbot_history.append({"role": "assistant", "content": user_response_text})
            return chatbot_history, current_state, gr.update(interactive=True), gr.update(visible=False), None

    except Exception as e:
        print(f"❌ Transcription error: {e}")
        error_msg = "Sorry, I couldn't process your audio. Please try recording again."
        if not chatbot_history:
            chatbot_history = []
        chatbot_history.append({"role": "assistant", "content": error_msg})
        return chatbot_history, current_state, gr.update(interactive=True), gr.update(visible=False), None

    # Initialize chat history if empty
    if not chatbot_history:
        chatbot_history = []

    # Add user response to chat
    chatbot_history.append({"role": "user", "content": user_response_text})

    # Get feedback from AI
    try:
        feedback_text = evaluate_response(
            current_state["current_question_text"],
            user_response_text,
            current_state["coaching_type"]
        )

        # Parse scores
        scores = parse_scores_from_evaluation(feedback_text)
        overall_score = get_overall_score(scores)
        score_interpretation = get_score_interpretation(overall_score)

        # Add score to feedback
        if overall_score > 0:
            score_summary = f"📊 SCORE: {overall_score}/10 - {score_interpretation}\n\n"
            feedback_with_score = score_summary + feedback_text
        else:
            feedback_with_score = feedback_text

    except Exception as e:
        print(f"❌ Evaluation error: {e}")
        feedback_with_score = "Thank you for your response. Let me give you some feedback on your approach."
        overall_score = 7  # Default score
        scores = {}

    # Store in coaching log
    current_state["coaching_log"].append({
        "question": current_state["current_question_text"],
        "response": user_response_text,
        "feedback": feedback_with_score,
        "scores": scores,
        "overall_score": overall_score
    })

    # Add AI feedback to chat
    chatbot_history.append({"role": "assistant", "content": feedback_with_score})

    # Generate voice feedback (with error handling)
    try:
        ai_feedback_voice = text_to_speech_file(f"Your score is {overall_score} out of 10. {feedback_text[:100]}...")
    except Exception as e:
        print(f"❌ TTS error: {e}")
        ai_feedback_voice = None

    # Check if session is complete
    if current_state["current_question_num"] >= current_state["question_count"]:
        # Session complete - generate report
        session_scores = [item.get("overall_score", 0) for item in current_state["coaching_log"]]
        avg_score = sum(session_scores) / len(session_scores) if session_scores else 0
        session_interpretation = get_score_interpretation(avg_score)

        end_message = f"🎉 Session Complete!\n\n📈 FINAL AVERAGE: {avg_score:.1f}/10 - {session_interpretation}\n\nGenerating your report..."
        chatbot_history.append({"role": "assistant", "content": end_message})

        # Generate PDF report
        try:
            pdf_path = generate_pdf_file(current_state)
            print(f"✅ Report generated: {pdf_path}")
        except Exception as e:
            print(f"❌ Report generation error: {e}")
            pdf_path = None

        try:
            end_voice = text_to_speech_file("Congratulations! Your coaching session is complete.")
        except Exception as e:
            print(f"❌ TTS error: {e}")
            end_voice = ai_feedback_voice

        return (
            chatbot_history,
            current_state,
            gr.update(interactive=False),  # Disable recording
            gr.update(value=pdf_path, visible=True) if pdf_path else gr.update(visible=False),  # Show download button
            end_voice
        )
    else:
        # Move to next question
        current_state["current_question_num"] += 1
        next_question = generate_coaching_question(
            current_state["coaching_type"],
            current_state["doc_text"],
            current_state["current_question_num"]
        )
        current_state["current_question_text"] = next_question

        q_num = current_state["current_question_num"]
        next_message = f"🎯 Question {q_num}/{current_state['question_count']}:\n\n{next_question}"
        chatbot_history.append({"role": "assistant", "content": next_message})

        # Generate voice for next question
        try:
            next_voice = text_to_speech_file(f"Question {q_num}. {next_question}")
        except Exception as e:
            print(f"❌ TTS error: {e}")
            next_voice = ai_feedback_voice

        return (
            chatbot_history,
            current_state,
            gr.update(interactive=True),  # Keep recording enabled
            gr.update(visible=False),  # Keep download hidden
            next_voice
        )

def generate_pdf_file(state):
    final_data = {
        "name": state["name"],
        "type": state["coaching_type"],
        "q_and_a": state["coaching_log"]
    }
    file_name = f"Product_Coaching_Report_{state['name']}_{datetime.datetime.now().strftime('%Y-%m-%d')}.pdf"
    file_path = os.path.join(config.REPORT_FOLDER, file_name)
    generate_pdf_report(final_data, file_path)
    return file_path

with gr.Blocks(theme=gr.themes.Default()) as app:
    state = gr.State({})
    gr.Markdown("# 🚀 Personal AI Product Coach")
    gr.Markdown("### Elevate your product management skills with personalized coaching sessions")

    with gr.Row():
        with gr.Column(scale=1):
            gr.Markdown("### Setup Your Coaching Session")
            user_name = gr.Textbox(label="Your Name", placeholder="Enter your name")

            coaching_type_dd = gr.Dropdown(
                config.COACHING_TYPES,
                label="Product Management Focus Area"
            )
            gr.Markdown("*Choose the area you want to improve*")

            num_questions_slider = gr.Slider(
                minimum=1, maximum=10, value=5, step=1,
                label="Number of Scenarios"
            )
            gr.Markdown("*How many coaching scenarios do you want to practice?*")

            doc_uploader = gr.File(
                label="Upload Your Resume/Portfolio (Optional)",
                file_types=[".pdf", ".docx", ".txt"]
            )
            gr.Markdown("*Upload your resume or product portfolio for personalized coaching*")

            start_btn = gr.Button("🎯 Start Coaching Session", variant="primary")

        with gr.Column(scale=2):
            chatbot = gr.Chatbot(
                label="AI Product Coach",
                height=500,
                type="messages"
            )

            with gr.Row():
                # Use a simpler audio input approach
                audio_in = gr.Audio(
                    sources=["microphone"],
                    type="filepath",
                    label="🎙️ Record Your Response",
                    interactive=False,
                    show_download_button=False
                )
                # Add manual submit button for reliability
                submit_audio_btn = gr.Button("📤 Submit Audio", variant="secondary", visible=False)

            # Add a test button to check if recording works
            test_btn = gr.Button("🧪 Test Recording", variant="secondary", visible=True)
            status_text = gr.Textbox(label="Status", value="Click 'Start Coaching Session' to begin", interactive=False)

            download_pdf_btn = gr.File(label="📊 Download Coaching Report", visible=False)
            audio_out = gr.Audio(visible=False, autoplay=True)

    start_btn.click(
        fn=start_coaching_session,
        inputs=[coaching_type_dd, doc_uploader, user_name, num_questions_slider],
        outputs=[state, chatbot, audio_out, audio_in, start_btn]
    )

    test_btn.click(
        fn=test_recording,
        outputs=[audio_in, status_text]
    )

    # Multiple event handlers for better reliability
    audio_in.change(
        fn=handle_coaching_turn,
        inputs=[audio_in, chatbot, state],
        outputs=[chatbot, state, audio_in, download_pdf_btn, audio_out]
    )

    # Also try upload event
    audio_in.upload(
        fn=handle_coaching_turn,
        inputs=[audio_in, chatbot, state],
        outputs=[chatbot, state, audio_in, download_pdf_btn, audio_out]
    )

    # Manual submit button as backup
    submit_audio_btn.click(
        fn=handle_coaching_turn,
        inputs=[audio_in, chatbot, state],
        outputs=[chatbot, state, audio_in, download_pdf_btn, audio_out]
    )

    # Add footer with helpful information
    gr.Markdown("""
    ---
    ### 💡 Coaching Tips
    - **Speak naturally** - Think out loud and explain your reasoning
    - **Use frameworks** - Apply PM frameworks like RICE, Jobs-to-be-Done, etc.
    - **Be specific** - Provide concrete examples and metrics when possible
    - **Ask questions** - Great PMs ask clarifying questions about requirements

    ### 🎯 What Makes a Great Response?
    ✅ Clear problem identification
    ✅ Structured thinking approach
    ✅ Data-driven reasoning
    ✅ Stakeholder consideration
    ✅ Action-oriented next steps
    """)

if __name__ == "__main__":
    # Ensure required directories exist
    os.makedirs(config.UPLOAD_FOLDER, exist_ok=True)
    os.makedirs(config.REPORT_FOLDER, exist_ok=True)

    # Print startup message
    print("🚀 Personal AI Product Coach Starting...")
    print("📚 Focus Areas Available:", ", ".join(config.COACHING_TYPES))
    print("🎯 Ready to help you improve your product management skills!")
    print("=" * 60)

    # Check for API key
    if not config.GROQ_API_KEY:
        print("⚠️ WARNING: GROQ_API_KEY not found!")
        print("Please set your Groq API key as an environment variable.")
        print("Get your free key from: https://console.groq.com/keys")

    # Launch the app - Configure for both local and HuggingFace deployment
    app.launch(
        server_name="0.0.0.0" if os.getenv("SPACE_ID") else "127.0.0.1",
|
| 351 |
+
server_port=None if os.getenv("SPACE_ID") else 7861,
|
| 352 |
+
share=False,
|
| 353 |
+
show_error=True
|
| 354 |
+
)
|
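The launch call above branches on the `SPACE_ID` environment variable, which Hugging Face sets inside a running Space. A self-contained sketch of that branching (the `launch_kwargs` helper name is illustrative, not part of app.py):

```python
import os

def launch_kwargs():
    # On a Hugging Face Space (SPACE_ID is set), bind to all interfaces and
    # let the platform pick the port; locally, use localhost on a fixed port.
    on_space = os.getenv("SPACE_ID") is not None
    return {
        "server_name": "0.0.0.0" if on_space else "127.0.0.1",
        "server_port": None if on_space else 7861,
    }

print(launch_kwargs())
```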
app_readme.md
ADDED
|
@@ -0,0 +1,61 @@
---
title: AI Product Coach
emoji: 🎯
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: AI-powered PM interview preparation with personalized coaching
tags:
  - product-management
  - interview-prep
  - ai-coaching
  - career-development
  - gradio
---

# 🎯 AI Product Coach - PM Interview Preparation

An AI-powered coaching application designed to help aspiring Product Managers prepare for interviews through realistic scenarios and personalized feedback.

## 🌟 Features

- **11 Coaching Areas**: Product Strategy, Market Research, UX Design, Roadmap Planning, Metrics & Analytics, and more
- **Resume-Based Personalization**: Upload your resume for personalized interview scenarios
- **Voice Interaction**: Record audio responses for more realistic interview practice
- **Professional Reports**: Generate comprehensive PDF coaching reports with visual analysis
- **Real-Time Feedback**: Get detailed evaluation and improvement suggestions
- **Realistic Scenarios**: Practice with 55+ authentic PM interview questions

## 🚀 How to Use

1. **Select a Focus Area**: Choose from 11 product management coaching areas
2. **Upload a Resume** (optional): Get personalized questions based on your background
3. **Practice**: Answer interview questions via text or voice
4. **Get Feedback**: Receive detailed coaching analysis and scores
5. **Download a Report**: Generate a professional PDF report for review

## ⚙️ Configuration Required

**Important**: This app requires a Groq API key to function.

1. Get your free API key from [console.groq.com](https://console.groq.com/keys)
2. In your Hugging Face Space settings, add the environment variable:
   - **Variable Name**: `GROQ_API_KEY`
   - **Variable Value**: your Groq API key

## 🤖 AI Models Used

- **Groq (llama3-8b-8192)** - Question generation and response evaluation
- **Whisper** - Speech-to-text transcription
- **System TTS** - Text-to-speech for question delivery

## 🔧 Technical Stack

- **Frontend**: Gradio 4.x
- **AI/ML**: Groq API, OpenAI Whisper
- **Document Processing**: PyMuPDF, python-docx
- **Report Generation**: ReportLab, Matplotlib
config.py
ADDED
|
@@ -0,0 +1,54 @@
# config.py
import os
from dotenv import load_dotenv

# Load environment variables from a .env file
load_dotenv()

# -- API Configuration --
GROQ_API_KEY = os.getenv('GROQ_API_KEY')
GROQ_MODEL = os.getenv('GROQ_MODEL', 'llama3-8b-8192')  # Default model
DEBUG_MODE = os.getenv('DEBUG', 'false').lower() == 'true'

# Validate the API key
if not GROQ_API_KEY:
    print("⚠️ WARNING: GROQ_API_KEY not found!")
    print("Please set your Groq API key in one of these ways:")
    print("1. Create a .env file with: GROQ_API_KEY=your_api_key")
    print("2. Set an environment variable: export GROQ_API_KEY=your_api_key")
    print("3. Get your free key from: https://console.groq.com/keys")

# -- Ollama Configuration (legacy) --
OLLAMA_MODEL = 'llama3.1'

# -- Product Coaching Configuration --
COACHING_TYPES = [
    'Product Strategy & Vision',
    'Market Research & Analysis',
    'User Experience & Design Thinking',
    'Product Roadmap Planning',
    'Metrics & Analytics',
    'Stakeholder Management',
    'Product Launch Strategy',
    'Competitive Analysis',
    'Feature Prioritization',
    'Customer Development',
    'Resume & Application Strategy'
]

# -- Product Management Focus Areas --
FOCUS_AREAS = [
    'Product Discovery',
    'Product Development',
    'Go-to-Market Strategy',
    'Product Optimization',
    'Team Leadership',
    'Data-Driven Decisions'
]

# -- Piper TTS Configuration --
PIPER_VOICE_MODEL = './voice_model/en_US-lessac-medium.onnx'

# -- Directories --
UPLOAD_FOLDER = 'uploads'
REPORT_FOLDER = 'reports'
deploy.sh
ADDED
|
@@ -0,0 +1,71 @@
#!/bin/bash

# 🚀 AI Product Coach - Hugging Face deployment script
# Prepares the app for Hugging Face Spaces deployment.

echo "🎯 AI Product Coach - Hugging Face Deployment Preparation"
echo "=========================================================="

# Check that we're in the right directory
if [ ! -f "app.py" ]; then
    echo "❌ Error: app.py not found. Please run this script from the project root directory."
    exit 1
fi

echo "✅ Found app.py - you're in the right directory"

# Check for required files
required_files=("app.py" "config.py" "requirements.txt" "README.md")
missing_files=()

for file in "${required_files[@]}"; do
    if [ ! -f "$file" ]; then
        missing_files+=("$file")
    fi
done

if [ ${#missing_files[@]} -ne 0 ]; then
    echo "❌ Missing required files: ${missing_files[*]}"
    exit 1
fi

echo "✅ All required files present"

# Check for the modules directory
if [ ! -d "modules" ]; then
    echo "❌ Error: modules/ directory not found"
    exit 1
fi

echo "✅ Found modules directory"

# Check that README.md carries the HF metadata header
if ! grep -q "title: AI Product Coach" README.md; then
    echo "⚠️ Warning: README.md is missing the Hugging Face metadata header"
    echo "   Add the YAML front matter block to the top of README.md"
else
    echo "✅ README.md has proper HF metadata"
fi

# Check for API key configuration
if ! grep -q "GROQ_API_KEY" config.py; then
    echo "❌ Error: GROQ_API_KEY configuration not found in config.py"
    exit 1
fi

echo "✅ API key configuration found"

# Display next steps
echo ""
echo "🎉 Deployment Preparation Complete!"
echo ""
echo "Next Steps:"
echo "1. Get your Groq API key from: https://console.groq.com/keys"
echo "2. Create a new Hugging Face Space at: https://huggingface.co/spaces"
echo "3. Upload all files to your Space (or use git clone/push)"
echo "4. Set GROQ_API_KEY as an environment variable in the Space settings"
echo "5. Wait for the build to complete and test your app!"
echo ""
echo "📚 See DEPLOYMENT.md for detailed instructions"
echo ""
echo "Your AI Product Coach is ready for deployment! 🚀"
modules/__init__.py
ADDED
|
File without changes
|
modules/doc_processor.py
ADDED
|
@@ -0,0 +1,29 @@
# modules/doc_processor.py

import fitz  # PyMuPDF for PDFs
import docx  # python-docx for DOCX files
import os

def extract_text_from_document(file_path):
    """
    Extract text from a given document (PDF or DOCX).
    """
    text = ""
    try:
        _, file_extension = os.path.splitext(file_path)

        if file_extension.lower() == '.pdf':
            with fitz.open(file_path) as doc:
                for page in doc:
                    text += page.get_text()
        elif file_extension.lower() == '.docx':
            doc = docx.Document(file_path)
            for para in doc.paragraphs:
                text += para.text + "\n"
        else:
            return "Unsupported file format. Please upload a .pdf or .docx file."

    except Exception as e:
        return f"Error reading document: {e}"

    return text
modules/llm_handler.py
ADDED
|
@@ -0,0 +1,385 @@
# modules/llm_handler.py
import os
import config
import regex as re
from groq import Groq
from modules.web_search import search_for_example_answers
from modules.pm_frameworks import get_framework_suggestion, get_relevant_metrics

# Initialize the Groq client with proper error handling
def get_groq_client():
    api_key = config.GROQ_API_KEY or os.environ.get("GROQ_API_KEY")
    if not api_key:
        raise ValueError("GROQ_API_KEY is required. Please set it in the .env file or as an environment variable.")
    return Groq(api_key=api_key)

try:
    client = get_groq_client()
    MODEL = config.GROQ_MODEL
    print(f"✅ Groq API connected successfully with model: {MODEL}")
except Exception as e:
    print(f"❌ Failed to initialize Groq client: {e}")
    client = None
    MODEL = "llama3-8b-8192"


def generate_coaching_question(coaching_type, document_text, question_number):
    """Generate a personalized, interview-style product management coaching question based on the resume."""

    if not client:
        return "I need to ask you a product management scenario, but there's an API connection issue."

    # Base scenarios for each coaching type
    base_scenarios = {
        'Product Strategy & Vision': [
            "developing product vision for a struggling mobile application",
            "evaluating a strategic pivot from B2C to B2B",
            "presenting product strategy to secure funding",
            "responding to competitive threats strategically",
            "integrating acquired startup products"
        ],
        'Market Research & Analysis': [
            "conducting competitive analysis after a competitor launch",
            "expanding into new geographic markets",
            "resolving conflicting user research feedback",
            "researching with limited budget constraints",
            "validating demand for a new product category"
        ],
        'User Experience & Design Thinking': [
            "redesigning a high drop-off onboarding flow",
            "investigating a declining Net Promoter Score",
            "balancing UX goals with technical constraints",
            "prioritizing UX improvements across segments",
            "leading a design sprint for a new experience"
        ],
        'Product Roadmap Planning': [
            "prioritizing features with limited engineering capacity",
            "handling sales pressure vs. roadmap alignment",
            "communicating roadmap changes after setbacks",
            "re-prioritizing due to security vulnerabilities",
            "adjusting the roadmap for aggressive growth targets"
        ],
        'Metrics & Analytics': [
            "investigating a sudden drop in engagement metrics",
            "defining success metrics for a new premium tier",
            "designing an A/B test for conversion optimization",
            "selecting a North Star metric for the product team",
            "analyzing low-adoption, high-engagement features"
        ],
        'Stakeholder Management': [
            "navigating conflicting priorities across teams",
            "managing stakeholder disagreement and escalation",
            "building alignment across multiple teams",
            "resolving timeline conflicts with engineering",
            "handling customer churn threats and demands"
        ],
        'Product Launch Strategy': [
            "handling critical bugs close to launch",
            "designing go-to-market for enterprise expansion",
            "deciding on launch with mixed early metrics",
            "adjusting strategy due to competitor timing",
            "managing feature impact on core metrics"
        ],
        'Competitive Analysis': [
            "responding to a well-funded direct competitor",
            "analyzing an acquisition threat in an adjacent space",
            "capitalizing on a competitor's retention struggles",
            "competing against superior marketing with an inferior product",
            "defending against a tech giant's market entry"
        ],
        'Feature Prioritization': [
            "prioritizing with limited resources and high-impact opportunities",
            "balancing customer demands vs. product vision",
            "evaluating build vs. buy vs. partner decisions",
            "weighing revenue features vs. foundational improvements",
            "deciding on resource allocation for new capabilities"
        ],
        'Customer Development': [
            "investigating low adoption of launched features",
            "structuring customer interviews for validation",
            "managing custom feature requests from major customers",
            "addressing user confusion while maintaining power features",
            "analyzing high new-user churn vs. strong retention"
        ],
        'Resume & Application Strategy': [
            "positioning experience for a fintech PM role transition",
            "demonstrating quantified impact in interviews",
            "addressing PM framework gaps during applications",
            "showcasing PM skills from a non-PM background",
            "differentiating against FAANG-experienced candidates"
        ]
    }

    scenarios = base_scenarios.get(coaching_type, [
        "solving a complex product management challenge",
        "making a strategic product decision under pressure",
        "applying PM frameworks to a real-world scenario"
    ])

    # Select a base scenario, cycling through the list by question number
    scenario_index = (question_number - 1) % len(scenarios)
    base_scenario = scenarios[scenario_index]

    # Create a personalized prompt using the resume context
    if document_text and document_text.strip():
        prompt = f"""You are an expert product management interviewer. Generate ONE realistic, challenging PM interview question that:

1. Focuses on: {base_scenario}
2. Is tailored to this candidate's background: {document_text[:1000]}
3. Coaching area: {coaching_type}

Create a specific scenario with:
- Realistic context (company size, industry, metrics)
- Clear constraints and timeline
- Multiple stakeholders involved
- Quantifiable elements (users, revenue, etc.)

Make it feel like a real PM interview question that considers their background. Keep it focused and actionable.

Example format: "You're the PM at [specific company type]. [Specific situation with numbers]. [Key challenge]. How do you approach this?"

Generate just the question scenario, no additional text:"""
    else:
        prompt = f"""You are an expert product management interviewer. Generate ONE realistic, challenging PM interview question about {base_scenario} in the {coaching_type} area.

Create a specific scenario with:
- Realistic context (company size, industry, metrics)
- Clear constraints and timeline
- Multiple stakeholders involved
- Quantifiable elements (users, revenue, etc.)

Example format: "You're the PM at [specific company type]. [Specific situation with numbers]. [Key challenge]. How do you approach this?"

Generate just the question scenario, no additional text:"""

    try:
        chat_completion = client.chat.completions.create(
            messages=[{"role": "user", "content": prompt}],
            model=MODEL,
            temperature=0.7,
            max_tokens=200
        )
        generated_question = chat_completion.choices[0].message.content.strip()

        # Add a context note if a resume was provided
        if document_text and document_text.strip():
            context_note = "\n\nNote: Consider your background and experience when answering."
            return generated_question + context_note

        return generated_question

    except Exception as e:
        print(f"❌ Error generating personalized question: {e}")
        # Fall back to a basic scenario
        fallback_questions = {
            'Product Strategy & Vision': "You're the PM for a mobile app with declining user engagement. How would you develop a new product vision to turn this around?",
            'Market Research & Analysis': "A competitor just launched a feature similar to yours. How do you conduct competitive analysis and determine your response?",
            'Resume & Application Strategy': "You're applying for a Senior PM role but lack direct PM experience. How do you position your background effectively?"
        }
        return fallback_questions.get(coaching_type, f"Walk me through how you'd approach {base_scenario} as a product manager.")


def evaluate_response(question, response, coaching_type):
    """Advanced product management coaching evaluation with comprehensive feedback."""

    if not client:
        return "Unable to provide feedback - API connection issue. Please check your Groq API key."

    # Get relevant frameworks for this coaching type
    suggested_frameworks = get_framework_suggestion('general', coaching_type)
    framework_names = [fw['name'] for fw in suggested_frameworks[:2]]  # Top 2 frameworks

    # Get relevant metrics
    relevant_metrics = get_relevant_metrics(coaching_type)[:3]  # Top 3 metrics

    # Advanced evaluation prompt with scoring criteria, requesting clean formatting
    prompt = f"""
You are a world-class product management coach and former VP of Product at top tech companies.
Evaluate this product manager's response with the depth and insight of an experienced mentor.

SCENARIO: {question}

PM'S RESPONSE: {response}

COACHING AREA: {coaching_type}

Please provide a professional coaching evaluation using this EXACT structure (no markdown formatting, no ** or # symbols):

EVALUATION SCORES (Rate 1-10 for each):
• Strategic Thinking: [X]/10 - How well they approached the big picture
• Problem Analysis: [X]/10 - Depth of problem understanding
• Framework Application: [X]/10 - Use of PM methodologies
• Stakeholder Awareness: [X]/10 - Consideration of different perspectives
• Execution Focus: [X]/10 - Practicality and actionability
• Communication: [X]/10 - Clarity and structure of response

STRENGTHS:
[List 2-3 specific things they did well]

AREAS FOR GROWTH:
[List 2-3 specific areas for improvement]

FRAMEWORK RECOMMENDATIONS:
Consider applying: {', '.join(framework_names) if framework_names else 'RICE prioritization, Jobs-to-be-Done'}

KEY METRICS TO TRACK:
Focus on: {', '.join(relevant_metrics) if relevant_metrics else 'user engagement, business impact'}

ACTIONABLE NEXT STEPS:
[1-2 concrete actions they can take]

EXPERT INSIGHT:
[One advanced tip that a seasoned PM would know]

IMPORTANT: Use plain text formatting only - NO markdown symbols like ** or # or _. Keep it professional and readable.
"""

    try:
        chat_completion = client.chat.completions.create(
            messages=[
                {"role": "system", "content": "You are an expert product management coach with 15+ years of experience. Provide detailed, actionable feedback using clean, professional formatting without markdown symbols."},
                {"role": "user", "content": prompt}
            ],
            model=MODEL,
            temperature=0.7,  # Slightly creative but consistent
            max_tokens=1000   # Allow for comprehensive feedback
        )
        return chat_completion.choices[0].message.content

    except Exception as e:
        # Fallback response with framework suggestions
        framework_text = f" Consider applying the {framework_names[0]} framework" if framework_names else ""
        metric_text = f" Focus on metrics like {relevant_metrics[0]}" if relevant_metrics else ""

        return f"""
FEEDBACK: Great thinking on this scenario!{framework_text}.{metric_text}

STRENGTHS: You showed good product thinking and approached the problem systematically.

AREAS FOR GROWTH: Consider exploring more PM frameworks and being more specific about metrics and success criteria.

NEXT STEPS: Practice more scenarios in {coaching_type} and study relevant PM frameworks.

Note: Full AI evaluation unavailable - {str(e)}
"""


def generate_coaching_feedback(coaching_log, coaching_type, name):
    """Generate overall coaching feedback based on the entire session."""

    responses_summary = "\n".join([
        f"Scenario: {item['question'][:100]}...\nResponse: {item['response'][:200]}...\n"
        for item in coaching_log
    ])

    prompt = f"""
You are an expert product management coach providing a comprehensive development summary for {name}, who just completed a coaching session on {coaching_type}.

SESSION SUMMARY:
{responses_summary}

Please provide a comprehensive coaching summary that includes:

1. **Overall Performance**: Key strengths demonstrated
2. **Growth Areas**: 3 specific areas for continued development
3. **Recommended Learning**: Books, frameworks, or practices to explore
4. **Action Plan**: Concrete next steps for skill development
5. **Encouragement**: Motivational closing focused on their PM journey

Make it personal, actionable, and encouraging for a product manager's growth.
"""

    try:
        chat_completion = client.chat.completions.create(
            messages=[{"role": "user", "content": prompt}],
            model=MODEL,
        )
        return chat_completion.choices[0].message.content
    except Exception:
        return f"Excellent work in this {coaching_type} coaching session, {name}! Continue practicing these scenarios and exploring product management frameworks to enhance your skills."


# Legacy function for backward compatibility
def evaluate_answer(question, answer):
    """Legacy function - redirects to evaluate_response for the coaching context."""
    return evaluate_response(question, answer, "General Product Management")


def parse_scores_from_evaluation(evaluation_text: str) -> dict:
    """Extract numerical scores from the advanced evaluation text."""
    scores = {
        'Strategic Thinking': 0,
        'Problem Analysis': 0,
        'Framework Application': 0,
        'Stakeholder Awareness': 0,
        'Execution Focus': 0,
        'Communication': 0
    }

    # Regex patterns for the current scoring format
    patterns = {
        'Strategic Thinking': r'Strategic Thinking:\s*(\d+)/10',
        'Problem Analysis': r'Problem Analysis:\s*(\d+)/10',
        'Framework Application': r'Framework Application:\s*(\d+)/10',
        'Stakeholder Awareness': r'Stakeholder Awareness:\s*(\d+)/10',
        'Execution Focus': r'Execution Focus:\s*(\d+)/10',
        'Communication': r'Communication:\s*(\d+)/10'
    }

    for category, pattern in patterns.items():
        match = re.search(pattern, evaluation_text, re.IGNORECASE)
        if match:
            try:
                scores[category] = int(match.group(1))
            except (ValueError, IndexError):
                scores[category] = 7  # Default score if parsing fails

    return scores


def get_overall_score(scores_dict: dict) -> float:
    """Calculate the overall score from individual category scores."""
    if not scores_dict or not any(scores_dict.values()):
        return 7.0  # Default score

    total_score = sum(scores_dict.values())
    max_possible = len(scores_dict) * 10
    return round((total_score / max_possible) * 10, 1)


def get_score_interpretation(overall_score: float) -> str:
    """Provide an interpretation of the overall score."""
    if overall_score >= 9.0:
        return "🌟 Exceptional - You demonstrated expert-level product management thinking!"
    elif overall_score >= 8.0:
        return "🚀 Excellent - Strong PM skills with minor areas for refinement"
    elif overall_score >= 7.0:
        return "✅ Good - Solid foundation with clear growth opportunities"
    elif overall_score >= 6.0:
        return "📈 Developing - Good start, focus on applying more frameworks"
    elif overall_score >= 5.0:
        return "💪 Building - Keep practicing, you're on the right track"
    else:
        return "🎯 Learning - Focus on fundamentals and PM best practices"


def parse_legacy_scores(evaluation_text: str) -> dict:
    """Legacy parser for the old interview rubric. (This block previously sat
    at module level outside any function, where the reference to
    evaluation_text would raise a NameError at import time.)"""
    scores = {
        'Factual Accuracy': 0,
        'Relevance & Directness': 0,
        'Structure & Clarity': 0
    }
    pattern = r"(Factual Accuracy|Relevance & Directness|Structure & Clarity):\s*\[?(\d{1,2})\]?\/10"
    matches = re.findall(pattern, evaluation_text, re.IGNORECASE)

    for match in matches:
        category_name, score_value = match[0].strip(), int(match[1])
        if category_name in scores:
            scores[category_name] = score_value

    print(f"📊 Parsed scores: {scores}")
    return scores


def generate_holistic_feedback(full_interview_log):
    prompt = f"""
You are a senior interview coach. Based on the entire Q&A log, provide a high-level "Overall Performance Summary" and an "Actionable Improvement Plan".
**FULL INTERVIEW LOG:** --- {full_interview_log} ---
"""
    try:
        chat_completion = client.chat.completions.create(
            messages=[{"role": "user", "content": prompt}],
            model=MODEL,
        )
        return chat_completion.choices[0].message.content
    except Exception:
        return "Could not generate holistic feedback due to an error."
|
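The scoring pipeline above (parse the rubric scores, then normalise them onto a 0-10 scale) can be exercised in isolation. A minimal sketch, re-declaring simplified versions of the two pure helpers under hypothetical names (`parse_scores`, `overall`) so it runs without the module's LLM client:

```python
import re

def parse_scores(evaluation_text: str) -> dict:
    # Mirrors parse_scores_from_evaluation: one "<Category>: N/10" pattern per rubric area.
    categories = ['Strategic Thinking', 'Problem Analysis', 'Framework Application',
                  'Stakeholder Awareness', 'Execution Focus', 'Communication']
    scores = {}
    for category in categories:
        match = re.search(rf'{category}:\s*(\d+)/10', evaluation_text, re.IGNORECASE)
        scores[category] = int(match.group(1)) if match else 0
    return scores

def overall(scores: dict) -> float:
    # Mirrors get_overall_score: normalise the category total back onto a 0-10 scale.
    if not scores or not any(scores.values()):
        return 7.0
    return round(sum(scores.values()) / (len(scores) * 10) * 10, 1)

evaluation = "Strategic Thinking: 8/10\nProblem Analysis: 6/10\nCommunication: 7/10"
s = parse_scores(evaluation)
print(s['Strategic Thinking'], overall(s))  # 8 3.5 (unscored categories count as 0)
```

Note the design consequence visible here: categories missing from the LLM's evaluation text stay at 0 and drag the average down, which is why the real code falls back to a default of 7 on parse failures.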
modules/pm_frameworks.py
ADDED
@@ -0,0 +1,161 @@
# modules/pm_frameworks.py
"""
Product Management Frameworks and Best Practices
This module contains common PM frameworks that can be referenced in coaching sessions.
"""

PM_FRAMEWORKS = {
    'RICE': {
        'name': 'RICE Prioritization',
        'description': 'Reach, Impact, Confidence, Effort - A framework for feature prioritization',
        'use_case': 'Prioritizing features or initiatives',
        'formula': 'Score = (Reach × Impact × Confidence) / Effort'
    },

    'CIRCLES': {
        'name': 'CIRCLES Method',
        'description': 'Comprehend, Identify, Report, Cut, List, Evaluate, Summarize',
        'use_case': 'Product design interviews and product thinking',
        'steps': ['Comprehend the situation', 'Identify the customer', 'Report needs', 'Cut through prioritization', 'List solutions', 'Evaluate trade-offs', 'Summarize recommendations']
    },

    'JOBS_TO_BE_DONE': {
        'name': 'Jobs-to-be-Done',
        'description': 'Focus on what customers hire your product to do',
        'use_case': 'Understanding customer needs and product-market fit',
        'statement': 'When I [situation], I want to [motivation], so I can [expected outcome]'
    },

    'KANO_MODEL': {
        'name': 'Kano Model',
        'description': 'Categorizes features into Must-have, Performance, and Delight',
        'use_case': 'Feature planning and customer satisfaction',
        'categories': ['Must-have (Basic)', 'Performance (Linear)', 'Delight (Attractive)']
    },

    'LEAN_CANVAS': {
        'name': 'Lean Canvas',
        'description': '1-page business model focused on problems, solutions, and key metrics',
        'use_case': 'Product strategy and business model validation',
        'sections': ['Problem', 'Solution', 'Key Metrics', 'Unique Value Prop', 'Unfair Advantage', 'Channels', 'Customer Segments', 'Cost Structure', 'Revenue Streams']
    },

    'NORTH_STAR': {
        'name': 'North Star Framework',
        'description': 'A single metric that captures the core value your product delivers',
        'use_case': 'Product strategy and team alignment',
        'components': ['North Star Metric', 'Input Metrics', 'Work Streams']
    },

    'DESIGN_SPRINT': {
        'name': 'Design Sprint',
        'description': '5-day process for answering critical business questions through design',
        'use_case': 'Rapid prototyping and validation',
        'phases': ['Monday: Map', 'Tuesday: Sketch', 'Wednesday: Decide', 'Thursday: Prototype', 'Friday: Test']
    },

    'MOSCOW': {
        'name': 'MoSCoW Prioritization',
        'description': "Must have, Should have, Could have, Won't have",
        'use_case': 'Requirements prioritization',
        'categories': ['Must have', 'Should have', 'Could have', "Won't have (this time)"]
    },

    'STAR_RESUME': {
        'name': 'STAR Method for Resume',
        'description': 'Situation, Task, Action, Result - Structure for resume bullet points',
        'use_case': 'Writing impactful resume achievements',
        'structure': 'Action verb + Context + Impact with metrics'
    },

    'RESUME_OPTIMIZATION': {
        'name': 'PM Resume Optimization Framework',
        'description': 'Key elements for product management resume success',
        'use_case': 'Creating compelling PM applications',
        'components': ['Quantified Impact', 'PM Keywords', 'Framework Application', 'Leadership Examples', 'Technical Skills', 'Stakeholder Management']
    },

    'APPLICATION_FUNNEL': {
        'name': 'Application Success Funnel',
        'description': 'Systematic approach to PM job applications',
        'use_case': 'Increasing interview conversion rates',
        'stages': ['Research & Targeting', 'Resume Tailoring', 'Cover Letter', 'Portfolio/Case Studies', 'Network Activation', 'Follow-up']
    }
}

PRODUCT_METRICS = {
    'ENGAGEMENT': ['DAU/MAU', 'Session Duration', 'Feature Adoption', 'Stickiness'],
    'GROWTH': ['CAC', 'LTV', 'Viral Coefficient', 'Retention Rate'],
    'BUSINESS': ['Revenue', 'Conversion Rate', 'Churn Rate', 'NPS'],
    'PRODUCT': ['Time to Value', 'Feature Usage', 'User Satisfaction', 'Task Success Rate']
}

COACHING_SCENARIOS = {
    'PRIORITIZATION': [
        "You have 5 high-impact features but can only build 2 this quarter. How do you decide?",
        "Engineering is pushing back on your roadmap timeline. How do you respond?",
        "A key customer is threatening to churn unless you build their feature request."
    ],

    'STAKEHOLDER_MANAGEMENT': [
        "Sales wants feature A, marketing wants feature B, and engineering prefers feature C.",
        "The CEO wants to add a major feature that doesn't align with your product vision.",
        "You need to communicate a roadmap delay to frustrated stakeholders."
    ],

    'METRICS_ANALYSIS': [
        "Your key engagement metric dropped 20% last month. How do you investigate?",
        "You need to prove ROI for a new feature launch to executives.",
        "Two A/B tests show conflicting results. How do you make a decision?"
    ],

    'USER_RESEARCH': [
        "You're seeing low adoption of a recently launched feature.",
        "Users are requesting a feature that conflicts with your product vision.",
        "You need to validate a new product concept with limited research budget."
    ]
}

def get_framework_suggestion(scenario_type, coaching_type):
    """Suggest relevant frameworks based on the coaching scenario."""
    framework_mapping = {
        'Product Strategy & Vision': ['LEAN_CANVAS', 'NORTH_STAR', 'JOBS_TO_BE_DONE'],
        'Market Research & Analysis': ['JOBS_TO_BE_DONE', 'KANO_MODEL'],
        'User Experience & Design Thinking': ['DESIGN_SPRINT', 'JOBS_TO_BE_DONE', 'KANO_MODEL'],
        'Product Roadmap Planning': ['RICE', 'MOSCOW', 'KANO_MODEL'],
        'Metrics & Analytics': ['NORTH_STAR', 'KANO_MODEL'],
        'Stakeholder Management': ['MOSCOW', 'LEAN_CANVAS'],
        'Product Launch Strategy': ['DESIGN_SPRINT', 'LEAN_CANVAS'],
        'Competitive Analysis': ['LEAN_CANVAS', 'KANO_MODEL'],
        'Feature Prioritization': ['RICE', 'MOSCOW', 'KANO_MODEL'],
        'Customer Development': ['JOBS_TO_BE_DONE', 'DESIGN_SPRINT'],
        'Resume & Application Strategy': ['STAR_RESUME', 'RESUME_OPTIMIZATION', 'APPLICATION_FUNNEL']
    }

    suggested_frameworks = framework_mapping.get(coaching_type, ['RICE', 'JOBS_TO_BE_DONE'])
    return [PM_FRAMEWORKS[fw] for fw in suggested_frameworks if fw in PM_FRAMEWORKS]

def get_relevant_metrics(coaching_type):
    """Get relevant metrics for the coaching area."""
    metric_mapping = {
        'Product Strategy & Vision': PRODUCT_METRICS['BUSINESS'] + PRODUCT_METRICS['ENGAGEMENT'],
        'Market Research & Analysis': PRODUCT_METRICS['GROWTH'],
        'User Experience & Design Thinking': PRODUCT_METRICS['PRODUCT'] + PRODUCT_METRICS['ENGAGEMENT'],
        'Product Roadmap Planning': PRODUCT_METRICS['PRODUCT'],
        # Flattened so every entry is a metric name rather than a nested list
        'Metrics & Analytics': [m for group in PRODUCT_METRICS.values() for m in group],
        'Stakeholder Management': PRODUCT_METRICS['BUSINESS'],
        'Product Launch Strategy': PRODUCT_METRICS['GROWTH'] + PRODUCT_METRICS['BUSINESS'],
        'Competitive Analysis': PRODUCT_METRICS['GROWTH'],
        'Feature Prioritization': PRODUCT_METRICS['PRODUCT'],
        'Customer Development': PRODUCT_METRICS['ENGAGEMENT'] + PRODUCT_METRICS['PRODUCT'],
        'Resume & Application Strategy': ['Application Response Rate', 'Interview Conversion Rate', 'Resume ATS Score', 'Portfolio View Rate', 'Network Response Rate']
    }

    return metric_mapping.get(coaching_type, PRODUCT_METRICS['PRODUCT'])
modules/report_generator.py
ADDED
@@ -0,0 +1,443 @@
# modules/report_generator.py
import datetime
import os
import numpy as np
import matplotlib.pyplot as plt
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, PageBreak, Image, HRFlowable
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.enums import TA_JUSTIFY, TA_CENTER
from reportlab.lib.units import inch
from reportlab.lib import colors
from modules.llm_handler import generate_coaching_feedback
import config
import tempfile

def define_skill_areas(coaching_type):
    """Define key skill areas based on product management coaching type."""
    skill_mapping = {
        'Product Strategy & Vision': ['Strategic Thinking', 'Vision Articulation', 'Business Alignment'],
        'Market Research & Analysis': ['Research Skills', 'Data Analysis', 'Market Understanding'],
        'User Experience & Design Thinking': ['User Empathy', 'Design Process', 'Problem Solving'],
        'Product Roadmap Planning': ['Prioritization', 'Planning', 'Communication'],
        'Metrics & Analytics': ['Data Literacy', 'Analytical Thinking', 'Decision Making'],
        'Stakeholder Management': ['Communication', 'Negotiation', 'Relationship Building'],
        'Product Launch Strategy': ['Execution', 'Planning', 'Cross-functional Leadership'],
        'Competitive Analysis': ['Market Analysis', 'Strategic Thinking', 'Opportunity Recognition'],
        'Feature Prioritization': ['Decision Making', 'Framework Application', 'Trade-off Analysis'],
        'Customer Development': ['Customer Empathy', 'Research Skills', 'Insight Generation'],
        'Resume & Application Strategy': ['Application Skills', 'Personal Branding', 'Interview Preparation']
    }
    return skill_mapping.get(coaching_type, ['Strategic Thinking', 'Problem Solving', 'Communication'])

def create_coaching_progress_chart(labels, file_path):
    """Create a visual representation of coaching areas covered with error handling."""
    try:
        plt.clf()  # Clear any existing plots
        fig, ax = plt.subplots(figsize=(8, 6))
        y_pos = np.arange(len(labels))

        # Create a progress-style chart
        progress_values = [100] * len(labels)  # All areas were covered
        colors_list = plt.cm.Blues(np.linspace(0.4, 0.8, len(labels)))

        bars = ax.barh(y_pos, progress_values, color=colors_list, alpha=0.7)

        ax.set_yticks(y_pos)
        ax.set_yticklabels(labels)
        ax.set_xlabel('Coaching Coverage (%)')
        ax.set_title('Product Management Skills Coaching Session', fontsize=14, fontweight='bold')
        ax.set_xlim(0, 100)

        # Add checkmarks to indicate completion
        for i, bar in enumerate(bars):
            ax.text(bar.get_width() - 10, bar.get_y() + bar.get_height()/2,
                    '✓', ha='center', va='center', fontsize=16, color='white', fontweight='bold')

        plt.tight_layout()
        plt.savefig(file_path, dpi=300, bbox_inches='tight')
        plt.close(fig)  # Explicitly close the figure
        print(f"✅ Chart saved to: {file_path}")
        return True

    except Exception as e:
        print(f"❌ Error creating chart: {e}")
        return False
def clean_text_for_pdf(text):
    """Remove markdown formatting and clean text for professional PDF appearance."""
    import re

    # Remove markdown bold/italic formatting - handle nested patterns
    text = re.sub(r'\*\*\*(.*?)\*\*\*', r'\1', text)  # Remove ***bold italic***
    text = re.sub(r'\*\*(.*?)\*\*', r'\1', text)      # Remove **bold**
    text = re.sub(r'\*(.*?)\*', r'\1', text)          # Remove *italic*
    text = re.sub(r'__(.*?)__', r'\1', text)          # Remove __bold__
    text = re.sub(r'_(.*?)_', r'\1', text)            # Remove _italic_

    # Clean up section headers (remove markdown)
    text = re.sub(r'#{1,6}\s*', '', text)  # Remove # headers
    text = re.sub(r'\*\*([A-Z\s]+[A-Za-z]):\*\*', r'\1:', text)  # Clean section headers with colons
    text = re.sub(r'\*\*([A-Z][A-Za-z\s]*)\*\*', r'\1', text)    # Clean other bold headers

    # Remove markdown links
    text = re.sub(r'\[([^\]]+)\]\([^\)]+\)', r'\1', text)

    # Clean up bullet points and list markers
    text = re.sub(r'^\s*[-\*\+•]\s+', '• ', text, flags=re.MULTILINE)
    text = re.sub(r'^\s*\d+\.\s+', '• ', text, flags=re.MULTILINE)  # Convert numbered lists

    # Remove extra asterisks that might be left over
    text = re.sub(r'\*+', '', text)
    text = re.sub(r'_+', '', text)

    # Clean up multiple spaces and normalize line breaks
    text = re.sub(r'\n\s*\n\s*\n+', '\n\n', text)
    text = re.sub(r'\s+', ' ', text)
    text = text.strip()

    return text

def format_feedback_sections(feedback_text):
    """Break feedback into structured sections for better visual presentation."""
    # First, clean the text of markdown formatting
    cleaned_text = clean_text_for_pdf(feedback_text)

    sections = []
    current_section = ""
    current_title = ""

    lines = cleaned_text.split('\n')

    for line in lines:
        line = line.strip()
        if not line:
            continue

        # Check if line is a section header (contains keywords)
        if any(keyword in line.upper() for keyword in ['STRENGTHS:', 'AREAS FOR GROWTH:', 'FRAMEWORK', 'METRICS', 'NEXT STEPS:', 'INSIGHT:', 'SCORES:', 'KEY STRENGTHS', 'RECOMMENDATIONS']):
            # Save previous section
            if current_title and current_section:
                sections.append({
                    'title': current_title,
                    'content': current_section.strip()
                })

            # Start new section
            current_title = line.replace(':', '').strip()
            current_section = ""
        else:
            # Add to current section
            current_section += line + " "

    # Add final section
    if current_title and current_section:
        sections.append({
            'title': current_title,
            'content': current_section.strip()
        })

    return sections
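The stripping pass above is a chain of ordered `re.sub` calls. A condensed sketch of the same idea, assuming a subset of the patterns from `clean_text_for_pdf` (the hypothetical `strip_markdown` below handles bold/italic, headers, links, and bullets, then collapses all whitespace to single spaces):

```python
import re

def strip_markdown(text: str) -> str:
    # Order matters: remove the longest delimiters first so ** is not
    # half-consumed by the single-* pattern.
    text = re.sub(r'\*\*\*(.*?)\*\*\*', r'\1', text)
    text = re.sub(r'\*\*(.*?)\*\*', r'\1', text)
    text = re.sub(r'\*(.*?)\*', r'\1', text)
    text = re.sub(r'#{1,6}\s*', '', text)                       # headers
    text = re.sub(r'\[([^\]]+)\]\([^\)]+\)', r'\1', text)       # [label](url) -> label
    text = re.sub(r'^\s*[-\*\+•]\s+', '• ', text, flags=re.MULTILINE)
    text = re.sub(r'\s+', ' ', text)                            # collapse whitespace
    return text.strip()

print(strip_markdown("## **Strengths:** see [docs](https://x.y)"))  # Strengths: see docs
```

One caveat this sketch makes visible: the final whitespace collapse also flattens newlines, so any downstream step that splits the cleaned text on `'\n'` will see a single line.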
def generate_pdf_report(coaching_data, file_path):
    """Generate a professional product management coaching report without markdown formatting."""
    try:
        doc = SimpleDocTemplate(file_path, pagesize=(8.5 * inch, 11 * inch))
        styles = getSampleStyleSheet()

        # Professional styles for coaching report - only add if they don't exist
        def safe_add_style(styles, name, **kwargs):
            """Safely add a style only if it doesn't already exist"""
            if name not in styles:
                styles.add(ParagraphStyle(name=name, **kwargs))

        safe_add_style(styles, 'ReportTitle',
                       parent=styles['Heading1'],
                       fontSize=24,
                       alignment=TA_CENTER,
                       spaceAfter=20,
                       textColor=colors.darkblue,
                       fontName='Helvetica-Bold')

        safe_add_style(styles, 'SectionHeader',
                       parent=styles['Heading2'],
                       fontSize=16,
                       spaceBefore=15,
                       spaceAfter=10,
                       textColor=colors.darkblue,
                       fontName='Helvetica-Bold')

        safe_add_style(styles, 'SubHeader',
                       parent=styles['Heading3'],
                       fontSize=14,
                       spaceBefore=12,
                       spaceAfter=8,
                       textColor=colors.darkgreen,
                       fontName='Helvetica-Bold')

        safe_add_style(styles, 'BodyText',
                       parent=styles['Normal'],
                       fontSize=11,
                       spaceAfter=12,
                       spaceBefore=6,
                       alignment=TA_JUSTIFY,
                       fontName='Helvetica')

        safe_add_style(styles, 'BulletText',
                       parent=styles['Normal'],
                       fontSize=11,
                       spaceAfter=8,
                       spaceBefore=4,
                       leftIndent=20,
                       fontName='Helvetica')

        safe_add_style(styles, 'BulletPoint',
                       parent=styles['Normal'],
                       fontSize=11,
                       spaceAfter=6,
                       spaceBefore=3,
                       leftIndent=20,
                       fontName='Helvetica')

        safe_add_style(styles, 'ScoreText',
                       parent=styles['Normal'],
                       fontSize=12,
                       spaceAfter=8,
                       spaceBefore=4,
                       textColor=colors.blue,
                       fontName='Helvetica-Bold')

        safe_add_style(styles, 'HighlightBox',
                       parent=styles['Normal'],
                       fontSize=11,
                       spaceAfter=12,
                       spaceBefore=12,
                       leftIndent=15,
                       rightIndent=15,
                       borderWidth=1,
                       borderColor=colors.lightgrey,
                       backColor=colors.lightgrey,
                       fontName='Helvetica')

        print("✅ Stylesheet created successfully")
        story = []

        # Title Page
        story.append(Paragraph("Personal AI Product Coach", styles['ReportTitle']))
        story.append(Paragraph("Product Management Coaching Report", styles['SectionHeader']))
        story.append(Spacer(1, 0.5 * inch))

        # Executive Summary Table
        story.append(Paragraph("Executive Summary", styles['SectionHeader']))
        story.append(Paragraph(f"Participant: {coaching_data.get('name', 'Product Manager')}", styles['BodyText']))
        story.append(Paragraph(f"Coaching Focus: {coaching_data.get('type', 'General PM Coaching')}", styles['BodyText']))
        story.append(Paragraph(f"Session Date: {datetime.datetime.now().strftime('%B %d, %Y')}", styles['BodyText']))
        story.append(Paragraph(f"Scenarios Completed: {len(coaching_data.get('q_and_a', []))}", styles['BodyText']))

        # Calculate overall session performance
        session_scores = []
        for scenario in coaching_data.get('q_and_a', []):
            if 'overall_score' in scenario and scenario['overall_score'] > 0:
                session_scores.append(scenario['overall_score'])

        if session_scores:
            avg_score = sum(session_scores) / len(session_scores)
            story.append(Paragraph(f"Session Average Score: {avg_score:.1f}/10", styles['ScoreText']))

        story.append(PageBreak())

        # Overall Performance Analysis
        story.append(Paragraph("Overall Performance Analysis", styles['SectionHeader']))

        # Generate comprehensive feedback with error handling
        try:
            overall_feedback = generate_coaching_feedback(coaching_data.get('q_and_a', []),
                                                          coaching_data.get('type', 'General'),
                                                          coaching_data.get('name', 'Product Manager'))

            # Format the feedback into structured sections
            feedback_sections = format_feedback_sections(overall_feedback)

            if feedback_sections:
                for section in feedback_sections:
                    # Add section title with proper styling
                    story.append(Paragraph(section['title'], styles['SubHeader']))
                    story.append(Spacer(1, 0.1 * inch))

                    # Split content into bullet points if it contains bullet indicators
                    content = section['content']
                    if '•' in content:
                        # Handle bullet points
                        bullets = [line.strip() for line in content.split('•') if line.strip()]
                        for bullet in bullets:
                            if bullet:
                                story.append(Paragraph(f"• {bullet}", styles['BulletPoint']))
                    else:
                        # Handle regular paragraphs
                        paragraphs = [p.strip() for p in content.split('.') if p.strip()]
                        for para in paragraphs:
                            if para and len(para) > 10:  # Avoid very short fragments
                                story.append(Paragraph(f"{para}.", styles['BodyText']))

                    story.append(Spacer(1, 0.2 * inch))
            else:
                # Fallback if no sections found - display as clean text
                clean_feedback = clean_text_for_pdf(overall_feedback)
                story.append(Paragraph(clean_feedback, styles['BodyText']))

        except Exception as feedback_error:
            print(f"⚠️ Error generating overall feedback: {feedback_error}")
            story.append(Paragraph("Great work in this coaching session! You demonstrated solid product management thinking and approach to the scenarios presented.", styles['BodyText']))

        story.append(Spacer(1, 0.3 * inch))

        # Add progress chart with error handling
        try:
            skill_labels = define_skill_areas(coaching_data.get('type', 'General'))
            chart_path = os.path.join(config.REPORT_FOLDER, "coaching_progress.png")
            if os.path.exists(chart_path):
                os.remove(chart_path)

            if create_coaching_progress_chart(skill_labels, chart_path):
                story.append(Image(chart_path, width=6*inch, height=4*inch, hAlign='CENTER'))
            else:
                story.append(Paragraph("Skills covered in this coaching session:", styles['SubHeader']))
                for skill in skill_labels:
                    story.append(Paragraph(f"• {skill}", styles['BodyText']))
        except Exception as chart_error:
            print(f"⚠️ Error creating chart: {chart_error}")
            story.append(Paragraph("Skills Development Summary", styles['SubHeader']))
            story.append(Paragraph("This coaching session covered key product management competencies.", styles['BodyText']))

        story.append(PageBreak())
        # Detailed Scenario Analysis
        story.append(Paragraph("Detailed Scenario Analysis", styles['SectionHeader']))
        story.append(Spacer(1, 0.2 * inch))

        for i, scenario in enumerate(coaching_data.get('q_and_a', [])):
            try:
                # Scenario Header with visual separation
                story.append(Paragraph(f"Scenario {i+1}", styles['SubHeader']))
                story.append(Spacer(1, 0.1 * inch))

                # Challenge Description in highlighted box
                story.append(Paragraph("Challenge:", styles['SubHeader']))
                clean_question = clean_text_for_pdf(scenario.get('question', 'Product management scenario'))
                story.append(Paragraph(clean_question, styles['HighlightBox']))
                story.append(Spacer(1, 0.15 * inch))

                # Participant's Response
                story.append(Paragraph("Your Approach:", styles['SubHeader']))
                clean_response = clean_text_for_pdf(scenario.get('response', 'Response provided'))
                story.append(Paragraph(clean_response, styles['BodyText']))
                story.append(Spacer(1, 0.1 * inch))

                # Score Display with visual emphasis
                if scenario.get('overall_score', 0) > 0:
                    score_text = f"Overall Score: {scenario['overall_score']}/10"
                    story.append(Paragraph(score_text, styles['ScoreText']))
                    story.append(Spacer(1, 0.1 * inch))

                # Coaching Feedback - break into sections
                story.append(Paragraph("Coaching Analysis:", styles['SubHeader']))
                feedback_text = scenario.get('feedback', 'Great work on this scenario!')
                feedback_sections = format_feedback_sections(feedback_text)

                if feedback_sections:
                    for section in feedback_sections:
                        if section['title'] and section['content']:
                            # Section title
                            story.append(Paragraph(f"{section['title']}:", styles['SubHeader']))
                            # Section content - already cleaned by format_feedback_sections
                            story.append(Paragraph(section['content'], styles['BulletText']))
                            story.append(Spacer(1, 0.05 * inch))
                else:
                    # Fallback - use original feedback with cleaning
                    clean_feedback = clean_text_for_pdf(feedback_text)
                    story.append(Paragraph(clean_feedback, styles['BodyText']))

                # Add separator between scenarios
                if i < len(coaching_data.get('q_and_a', [])) - 1:
                    story.append(Spacer(1, 0.3 * inch))
                    # Add a subtle line separator
                    story.append(HRFlowable(width="100%", thickness=1, lineCap='round', color=colors.lightgrey))
                    story.append(Spacer(1, 0.2 * inch))

            except Exception as scenario_error:
                print(f"⚠️ Error processing scenario {i+1}: {scenario_error}")
                story.append(Paragraph(f"Scenario {i+1}: Completed successfully", styles['BodyText']))
                story.append(Spacer(1, 0.2 * inch))

        # Development Plan Section
        story.append(PageBreak())
        story.append(Paragraph("Your Product Management Development Plan", styles['SectionHeader']))
        story.append(Spacer(1, 0.2 * inch))

        # Focus Areas
        story.append(Paragraph("Recommended Focus Areas", styles['SubHeader']))
        focus_areas = [
            f"Continue practicing {coaching_data.get('type', 'product management').lower()} scenarios with real-world applications",
            "Explore and master relevant PM frameworks and methodologies",
            "Seek feedback from peers and mentors on your product management approach",
            "Apply these concepts in your current role or personal projects"
        ]
        for area in focus_areas:
            story.append(Paragraph(f"• {area}", styles['BulletText']))
        story.append(Spacer(1, 0.2 * inch))

        # Learning Resources
        story.append(Paragraph("Suggested Learning Resources", styles['SubHeader']))
        resources = [
            '"Inspired" by Marty Cagan - Product management fundamentals',
            '"The Lean Startup" by Eric Ries - Validation and iteration principles',
            '"Hooked" by Nir Eyal - User engagement and product psychology',
            'Product management communities and industry blogs for current best practices'
        ]
        for resource in resources:
            story.append(Paragraph(f"• {resource}", styles['BulletText']))
|
| 412 |
+
story.append(Spacer(1, 0.2 * inch))
|
| 413 |
+
|
| 414 |
+
# Next Steps
|
| 415 |
+
story.append(Paragraph("Next Steps for Growth", styles['SubHeader']))
|
| 416 |
+
next_steps_text = """Product management is a continuous learning journey. Use this coaching session as a foundation to build upon.
|
| 417 |
+
Regular practice with realistic scenarios, combined with framework application and stakeholder feedback,
|
| 418 |
+
will accelerate your development as a product manager.
|
| 419 |
+
|
| 420 |
+
Remember to stay curious about user needs, data-driven in your decisions, and collaborative in your approach.
|
| 421 |
+
The best product managers never stop learning and adapting."""
|
| 422 |
+
|
| 423 |
+
story.append(Paragraph(next_steps_text, styles['BodyText']))
|
| 424 |
+
|
| 425 |
+
# Footer
|
| 426 |
+
story.append(Spacer(1, 0.5 * inch))
|
| 427 |
+
story.append(Paragraph("Personal AI Product Coach - Developing Product Management Excellence",
|
| 428 |
+
styles['BodyText']))
|
| 429 |
+
|
| 430 |
+
# Build the PDF
|
| 431 |
+
doc.build(story)
|
| 432 |
+
print(f"✅ Professional coaching report generated: {file_path}")
|
| 433 |
+
return True
|
| 434 |
+
|
| 435 |
+
except Exception as e:
|
| 436 |
+
print(f"❌ Error generating PDF report: {e}")
|
| 437 |
+
print(f"❌ Error details: {str(e)}")
|
| 438 |
+
return False
|
| 439 |
+
|
| 440 |
+
# Legacy function for backward compatibility
|
| 441 |
+
def generate_holistic_feedback(interview_log):
|
| 442 |
+
"""Legacy function - redirects to generate_coaching_feedback."""
|
| 443 |
+
return "This coaching session focused on developing key product management skills through practical scenarios."
|
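The coaching-analysis section above relies on `format_feedback_sections` to split a free-form LLM feedback string into titled sections before laying them out in the PDF. A minimal, hypothetical sketch of such a splitter (the real implementation in `report_generator.py` may use a different heading format) could look like this:

```python
import re

def split_feedback_sections(feedback_text):
    """Split coaching feedback into {'title', 'content'} sections.

    Hypothetical sketch: assumes the LLM emits headings such as
    "Strengths:" or "**Areas to Improve:**" alone on a line; the real
    format_feedback_sections may recognize a different pattern.
    """
    sections = []
    current = None
    for line in feedback_text.splitlines():
        # A short capitalized line ending in ':' (optionally bolded) starts a section
        m = re.match(r'^\*{0,2}([A-Z][A-Za-z ]{2,40}):\*{0,2}\s*$', line.strip())
        if m:
            current = {'title': m.group(1), 'content': ''}
            sections.append(current)
        elif current is not None and line.strip():
            # Accumulate body lines under the most recent heading
            current['content'] += (' ' if current['content'] else '') + line.strip()
    return sections

feedback = """Strengths:
You framed the problem around user needs.

Areas to Improve:
Quantify the impact with a success metric."""
for s in split_feedback_sections(feedback):
    print(f"{s['title']} -> {s['content']}")
```

Each returned section maps directly onto the `Paragraph(f"{section['title']}:", ...)` / `Paragraph(section['content'], ...)` pair used in the PDF builder.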
modules/stt_handler.py
ADDED
@@ -0,0 +1,87 @@
# modules/stt_handler.py
import speech_recognition as sr
import os
import tempfile
from pydub import AudioSegment

def transcribe_audio(audio_filepath):
    """Transcribe audio with multiple fallback methods"""
    if not audio_filepath or not os.path.exists(audio_filepath):
        print("❌ STT Error: No audio file provided or file does not exist.")
        return "Sorry, I couldn't process your audio file. Please try recording again."

    print(f"🎙️ Transcribing audio file: {audio_filepath}")
    recognizer = sr.Recognizer()

    try:
        # Try to convert audio format if needed
        audio_data = None

        # First try direct speech recognition
        try:
            with sr.AudioFile(audio_filepath) as source:
                audio_data = recognizer.record(source)
                print("✅ Audio file loaded successfully")
        except Exception as audio_load_error:
            print(f"⚠️ Direct audio loading failed: {audio_load_error}")

            # Try converting with pydub first
            try:
                print("🔄 Converting audio format...")
                audio = AudioSegment.from_file(audio_filepath)

                # Export as WAV for better compatibility
                with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as temp_wav:
                    temp_wav_path = temp_wav.name
                audio.export(temp_wav_path, format="wav")

                with sr.AudioFile(temp_wav_path) as source:
                    audio_data = recognizer.record(source)
                    print("✅ Audio converted and loaded successfully")

                # Clean up temp file
                if os.path.exists(temp_wav_path):
                    os.remove(temp_wav_path)

            except Exception as convert_error:
                print(f"❌ Audio conversion failed: {convert_error}")
                return "Sorry, I couldn't process your audio format. Please try recording again."

        if not audio_data:
            return "Sorry, I couldn't load your audio. Please try recording again."

        # Try Whisper transcription
        try:
            print("🤖 Transcribing with Whisper...")
            text = recognizer.recognize_whisper(audio_data, language="english")
            print(f"✅ Transcription successful: {text[:100]}...")
            return text if text.strip() else "I didn't catch what you said. Could you please speak more clearly?"

        except sr.UnknownValueError:
            print("⚠️ Whisper could not understand the audio")
            return "I couldn't understand what you said. Please speak more clearly and try again."

        except sr.RequestError as e:
            print(f"⚠️ Whisper service error: {e}")
            # Fallback to Google Web Speech API
            try:
                print("🔄 Falling back to Google Speech Recognition...")
                text = recognizer.recognize_google(audio_data)
                print(f"✅ Google transcription successful: {text[:100]}...")
                return text if text.strip() else "I didn't catch what you said. Could you please try again?"
            except Exception as google_error:
                print(f"❌ Google fallback failed: {google_error}")
                return "I'm having trouble with speech recognition. Please try again or check your microphone."

    except Exception as e:
        print(f"❌ Unexpected transcription error: {e}")
        return "Sorry, I encountered an error processing your audio. Please try recording again."

    finally:
        # Clean up the original audio file
        if os.path.exists(audio_filepath):
            try:
                os.remove(audio_filepath)
                print(f"🗑️ Cleaned up audio file: {audio_filepath}")
            except OSError as e:
                print(f"⚠️ Error deleting temp audio file {audio_filepath}: {e}")
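`transcribe_audio` implements a fallback chain: Whisper first, then the Google Web Speech API, with a user-facing message if everything fails. The pattern generalizes; here is a small stand-alone sketch of it (handler names are illustrative, not the real recognizer API):

```python
def try_in_order(handlers, payload, fallback_message):
    """Run (name, handler) pairs in order, returning the first truthy result.

    Generic sketch of the fallback pattern in transcribe_audio: each
    handler may raise or return a falsy value, and the chain moves on.
    """
    for name, handler in handlers:
        try:
            result = handler(payload)
            if result:
                print(f"✅ {name} succeeded")
                return result
        except Exception as exc:
            print(f"⚠️ {name} failed: {exc}")
    return fallback_message

# Demo with stand-in handlers: the first raises, the second succeeds.
handlers = [
    ("whisper", lambda audio: (_ for _ in ()).throw(RuntimeError("service down"))),
    ("google", lambda audio: f"transcript of {audio}"),
]
print(try_in_order(handlers, "clip.wav", "Sorry, transcription failed."))
```

Keeping the chain data-driven makes it easy to add a third engine without another nested `try/except` level.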
modules/tts_handler.py
ADDED
@@ -0,0 +1,62 @@
# modules/tts_handler.py
import subprocess
import platform
import sys
import os
import config
import tempfile
import numpy as np
import wave

def text_to_speech_file(text_to_speak):
    """Generate TTS audio file - with fallback to silent audio if TTS fails"""
    print(f"AI generating audio for: {text_to_speak[:100]}...")

    try:
        # Try using system TTS (macOS say command)
        if platform.system() == "Darwin":  # macOS
            with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as wav_file:
                wav_filename = wav_file.name

            # Use macOS 'say' command to generate speech
            command = ['say', '-o', wav_filename, '--data-format=LEF32@22050', text_to_speak]
            process = subprocess.run(command, capture_output=True, text=True, timeout=30)

            if process.returncode == 0 and os.path.exists(wav_filename):
                print("✅ TTS generated successfully with macOS 'say'")
                return wav_filename
            else:
                print("❌ macOS 'say' command failed, falling back to silent audio")
                return create_silent_audio()
        else:
            print("❌ TTS not available on this system, using silent audio")
            return create_silent_audio()

    except Exception as e:
        print(f"❌ TTS generation failed: {e}")
        return create_silent_audio()

def create_silent_audio():
    """Create a short silent audio file as fallback"""
    try:
        with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as temp_file:
            wav_filename = temp_file.name

        # Create 1 second of silence
        sample_rate = 22050
        duration = 1.0  # seconds
        samples = int(sample_rate * duration)
        audio_data = np.zeros(samples, dtype=np.int16)

        # Write WAV file (distinct name so the temp-file handle above is not shadowed)
        with wave.open(wav_filename, 'w') as wav_out:
            wav_out.setnchannels(1)  # Mono
            wav_out.setsampwidth(2)  # 2 bytes per sample
            wav_out.setframerate(sample_rate)
            wav_out.writeframes(audio_data.tobytes())

        print("✅ Silent audio fallback created")
        return wav_filename
    except Exception as e:
        print(f"❌ Even silent audio creation failed: {e}")
        return None
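`create_silent_audio` writes a mono 16-bit WAV of zeros via numpy. A stdlib-only sketch of the same idea, useful for checking the WAV parameters without the numpy dependency, since silence in 16-bit PCM is simply zero bytes (two per sample):

```python
import tempfile
import wave

def write_silent_wav(duration_s=1.0, sample_rate=22050):
    """Write a mono 16-bit silent WAV and return its path.

    Stdlib-only variant of create_silent_audio: bytes(n) yields n zero
    bytes, which is exactly silent 16-bit PCM.
    """
    n_samples = int(sample_rate * duration_s)
    path = tempfile.NamedTemporaryFile(delete=False, suffix=".wav").name
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)       # mono
        wf.setsampwidth(2)       # 16-bit samples
        wf.setframerate(sample_rate)
        wf.writeframes(bytes(n_samples * 2))  # all-zero PCM = silence
    return path

path = write_silent_wav(0.5)
with wave.open(path, "rb") as wf:
    print(wf.getnframes(), wf.getframerate())  # 11025 22050
```

This keeps the Gradio audio component happy even when no real TTS backend is available.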
modules/web_search.py
ADDED
@@ -0,0 +1,33 @@
# modules/web_search.py

def search_for_example_answers(query: str, num_results: int = 2):
    """
    Performs a targeted web search for high-quality example answers to an interview question.
    Note: degrades gracefully when the ddgs package is not installed in this deployment.
    """
    try:
        from ddgs import DDGS
        # Refine the query to find expert answers
        search_query = f"expert sample answer for interview question: \"{query}\""
        print(f"🌐 Searching for expert answers with query: '{search_query}'")

        with DDGS(timeout=10) as ddgs:
            results = list(ddgs.text(search_query, max_results=num_results))

        if not results:
            print(" -> No example answers found.")
            return "No example answers found on the web."

        formatted_results = ""
        for i, res in enumerate(results):
            formatted_results += f"Example Answer Source {i+1}:\nTitle: {res.get('title', 'N/A')}\nSnippet: {res.get('body', 'N/A')}\n\n"

        print(f" -> Found {len(results)} example answers.")
        return formatted_results

    except ImportError:
        print("💡 Web search not available in this deployment (ddgs not installed)")
        return "Web search not available in this deployment environment."
    except Exception as e:
        print(f"💥 Web search failed: {e}")
        return "Web search for example answers failed."
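The result-formatting step is independent of the search backend, so it can be factored out and unit-tested without the `ddgs` dependency. A small sketch (the function name here is illustrative, not part of the module above):

```python
def format_results(results):
    """Format raw search hits into the snippet block the LLM prompt expects.

    Pure-Python sketch of the formatting loop in search_for_example_answers;
    accepts any list of dicts with optional 'title'/'body' keys.
    """
    lines = []
    for i, res in enumerate(results, start=1):
        lines.append(f"Example Answer Source {i}:")
        lines.append(f"Title: {res.get('title', 'N/A')}")
        lines.append(f"Snippet: {res.get('body', 'N/A')}")
        lines.append("")  # blank line between sources
    return "\n".join(lines) if lines else "No example answers found on the web."

print(format_results([{"title": "STAR method", "body": "Situation, Task, Action, Result."}]))
```

Separating formatting from fetching also makes the ImportError fallback path trivial to exercise in tests.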
packages.txt
ADDED
@@ -0,0 +1,2 @@
ffmpeg
portaudio19-dev
requirements.txt
ADDED
@@ -0,0 +1,12 @@
gradio>=4.0.0,<5.0.0
groq
openai-whisper
pydub
soundfile
PyMuPDF
python-docx
reportlab
speechrecognition
matplotlib
numpy
python-dotenv
test_deployment.py
ADDED
@@ -0,0 +1,138 @@
#!/usr/bin/env python3
"""
Test script to check if all dependencies can be imported for Hugging Face deployment
"""

import sys

def test_imports():
    """Test all required imports"""
    print("🧪 Testing deployment readiness...")

    failed_imports = []

    # Test core dependencies
    try:
        import gradio
        print("✅ gradio")
    except ImportError as e:
        failed_imports.append(f"gradio: {e}")
        print("❌ gradio")

    try:
        import groq
        print("✅ groq")
    except ImportError as e:
        failed_imports.append(f"groq: {e}")
        print("❌ groq")

    try:
        import whisper
        print("✅ openai-whisper")
    except ImportError as e:
        failed_imports.append(f"openai-whisper: {e}")
        print("❌ openai-whisper")

    try:
        import pydub
        print("✅ pydub")
    except ImportError as e:
        failed_imports.append(f"pydub: {e}")
        print("❌ pydub")

    try:
        import soundfile
        print("✅ soundfile")
    except ImportError as e:
        failed_imports.append(f"soundfile: {e}")
        print("❌ soundfile")

    try:
        import fitz  # PyMuPDF
        print("✅ PyMuPDF")
    except ImportError as e:
        failed_imports.append(f"PyMuPDF: {e}")
        print("❌ PyMuPDF")

    try:
        import docx
        print("✅ python-docx")
    except ImportError as e:
        failed_imports.append(f"python-docx: {e}")
        print("❌ python-docx")

    try:
        import reportlab
        print("✅ reportlab")
    except ImportError as e:
        failed_imports.append(f"reportlab: {e}")
        print("❌ reportlab")

    try:
        import speech_recognition
        print("✅ speechrecognition")
    except ImportError as e:
        failed_imports.append(f"speechrecognition: {e}")
        print("❌ speechrecognition")

    try:
        import matplotlib
        print("✅ matplotlib")
    except ImportError as e:
        failed_imports.append(f"matplotlib: {e}")
        print("❌ matplotlib")

    try:
        import numpy
        print("✅ numpy")
    except ImportError as e:
        failed_imports.append(f"numpy: {e}")
        print("❌ numpy")

    # Test module imports
    print("\n📦 Testing custom modules...")

    try:
        import config
        print("✅ config")
    except ImportError as e:
        failed_imports.append(f"config: {e}")
        print("❌ config")

    try:
        from modules.llm_handler import generate_coaching_question
        print("✅ modules.llm_handler")
    except ImportError as e:
        failed_imports.append(f"modules.llm_handler: {e}")
        print("❌ modules.llm_handler")

    try:
        from modules.doc_processor import extract_text_from_document
        print("✅ modules.doc_processor")
    except ImportError as e:
        failed_imports.append(f"modules.doc_processor: {e}")
        print("❌ modules.doc_processor")

    try:
        from modules.report_generator import generate_pdf_report
        print("✅ modules.report_generator")
    except ImportError as e:
        failed_imports.append(f"modules.report_generator: {e}")
        print("❌ modules.report_generator")

    # Summary
    print("\n" + "="*50)
    if failed_imports:
        print(f"❌ {len(failed_imports)} import failures found:")
        for failure in failed_imports:
            print(f"  - {failure}")
        print("\n🚨 App may not work properly on Hugging Face!")
        return False
    else:
        print("✅ All imports successful!")
        print("🚀 Ready for Hugging Face deployment!")
        return True

if __name__ == "__main__":
    success = test_imports()
    sys.exit(0 if success else 1)
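The script above repeats the same `try/except ImportError` block once per dependency. A data-driven sketch of the same check using `importlib` (illustrative refactor, not part of the commit) keeps the dependency list as plain data:

```python
import importlib

def check_imports(module_names):
    """Attempt each import and return a list of "name: error" failures.

    Sketch of a data-driven version of test_imports: adding a dependency
    becomes one list entry instead of a new try/except block.
    """
    failures = []
    for name in module_names:
        try:
            importlib.import_module(name)
            print(f"✅ {name}")
        except ImportError as exc:
            failures.append(f"{name}: {exc}")
            print(f"❌ {name}")
    return failures

# Stdlib modules stand in for the real dependency list.
print(check_imports(["json", "wave", "definitely_not_a_module"]))
```

The summary section then reduces to inspecting the returned list, and the exit code to `sys.exit(0 if not failures else 1)`.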
test_report.py
ADDED
@@ -0,0 +1,39 @@
#!/usr/bin/env python3
"""Test script to verify report generation works correctly"""

import os
import sys
sys.path.append('.')

from modules.report_generator import generate_pdf_report
import config

# Test data
test_data = {
    'name': 'Test User',
    'type': 'Product Strategy & Vision',
    'q_and_a': [
        {
            'question': 'How would you define the product vision for a new mobile app?',
            'response': 'I would start by understanding the target users, their pain points, and the market opportunity. Then I would create a compelling vision that aligns with business goals.',
            'feedback': 'Great approach! You showed strong strategic thinking and considered multiple stakeholders.',
            'overall_score': 8,
            'scores': {'Strategic Thinking': 8, 'Problem Analysis': 7}
        }
    ]
}

# Create test report
test_report_path = os.path.join(config.REPORT_FOLDER, 'test_report.pdf')
print("🧪 Testing report generation...")
print(f"📁 Report will be saved to: {test_report_path}")

try:
    result = generate_pdf_report(test_data, test_report_path)
    if result:
        print("✅ Test report generated successfully!")
        print(f"📊 File size: {os.path.getsize(test_report_path)} bytes")
    else:
        print("❌ Report generation failed")
except Exception as e:
    print(f"❌ Test failed with error: {e}")
voice_model/en_US-lessac-medium.onnx
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5efe09e69902187827af646e1a6e9d269dee769f9877d17b16b1b46eeaaf019f
size 63201294
voice_model/en_US-lessac-medium.onnx.json
ADDED
@@ -0,0 +1,493 @@
{
  "audio": {
    "sample_rate": 22050,
    "quality": "medium"
  },
  "espeak": {
    "voice": "en-us"
  },
  "inference": {
    "noise_scale": 0.667,
    "length_scale": 1,
    "noise_w": 0.8
  },
  "phoneme_type": "espeak",
  "phoneme_map": {},
  "phoneme_id_map": {
    "_": [0], "^": [1], "$": [2], " ": [3], "!": [4], "'": [5], "(": [6], ")": [7],
    ",": [8], "-": [9], ".": [10], ":": [11], ";": [12], "?": [13],
    "a": [14], "b": [15], "c": [16], "d": [17], "e": [18], "f": [19], "h": [20],
    "i": [21], "j": [22], "k": [23], "l": [24], "m": [25], "n": [26], "o": [27],
    "p": [28], "q": [29], "r": [30], "s": [31], "t": [32], "u": [33], "v": [34],
    "w": [35], "x": [36], "y": [37], "z": [38],
    "æ": [39], "ç": [40], "ð": [41], "ø": [42], "ħ": [43], "ŋ": [44], "œ": [45],
    "ǀ": [46], "ǁ": [47], "ǂ": [48], "ǃ": [49],
    "ɐ": [50], "ɑ": [51], "ɒ": [52], "ɓ": [53], "ɔ": [54], "ɕ": [55], "ɖ": [56],
    "ɗ": [57], "ɘ": [58], "ə": [59], "ɚ": [60], "ɛ": [61], "ɜ": [62], "ɞ": [63],
    "ɟ": [64], "ɠ": [65], "ɡ": [66], "ɢ": [67], "ɣ": [68], "ɤ": [69], "ɥ": [70],
    "ɦ": [71], "ɧ": [72], "ɨ": [73], "ɪ": [74], "ɫ": [75], "ɬ": [76], "ɭ": [77],
    "ɮ": [78], "ɯ": [79], "ɰ": [80], "ɱ": [81], "ɲ": [82], "ɳ": [83], "ɴ": [84],
    "ɵ": [85], "ɶ": [86], "ɸ": [87], "ɹ": [88], "ɺ": [89], "ɻ": [90], "ɽ": [91],
    "ɾ": [92], "ʀ": [93], "ʁ": [94], "ʂ": [95], "ʃ": [96], "ʄ": [97], "ʈ": [98],
    "ʉ": [99], "ʊ": [100], "ʋ": [101], "ʌ": [102], "ʍ": [103], "ʎ": [104], "ʏ": [105],
    "ʐ": [106], "ʑ": [107], "ʒ": [108], "ʔ": [109], "ʕ": [110], "ʘ": [111], "ʙ": [112],
    "ʛ": [113], "ʜ": [114], "ʝ": [115], "ʟ": [116], "ʡ": [117], "ʢ": [118], "ʲ": [119],
    "ˈ": [120], "ˌ": [121], "ː": [122], "ˑ": [123], "˞": [124],
    "β": [125], "θ": [126], "χ": [127], "ᵻ": [128], "ⱱ": [129],
    "0": [130], "1": [131], "2": [132], "3": [133], "4": [134],
    "5": [135], "6": [136], "7": [137], "8": [138], "9": [139],
    "̧": [140], "̃": [141], "̪": [142], "̯": [143], "̩": [144],
    "ʰ": [145], "ˤ": [146], "ε": [147], "↓": [148], "#": [149],
    "\"": [150], "↑": [151], "̺": [152], "̻": [153]
  },
  "num_symbols": 256,
  "num_speakers": 1,
  "speaker_id_map": {},
  "piper_version": "1.0.0",
  "language": {
    "code": "en_US",
    "family": "en",
    "region": "US",
    "name_native": "English",
    "name_english": "English",
    "country_english": "United States"
  },
  "dataset": "lessac"
}
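The `phoneme_id_map` in this Piper voice config maps each phoneme symbol to a list of input ids for the ONNX model. A minimal sketch of how a client might consume it (the JSON literal below is a small excerpt of the file above, and `phonemes_to_ids` is an illustrative helper, not Piper's API):

```python
import json

# Excerpt of the voice config above; "^" and "$" are start/end-of-utterance markers.
config_text = '''{
  "phoneme_id_map": {"_": [0], "^": [1], "$": [2], " ": [3], "a": [14], "t": [32]},
  "audio": {"sample_rate": 22050}
}'''
config = json.loads(config_text)
id_map = config["phoneme_id_map"]

def phonemes_to_ids(phonemes):
    """Flatten each phoneme's id list, skipping symbols missing from the map."""
    return [pid for p in phonemes for pid in id_map.get(p, [])]

print(phonemes_to_ids(["^", "a", "t", "$"]))  # [1, 14, 32, 2]
```

The resulting id sequence, together with `audio.sample_rate`, is what the ONNX synthesis model expects as input.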