# AI Career Assistant
An AI-powered career assistant that represents professionals on their websites, answering questions about their background while facilitating follow-up contact for qualified opportunities. Built with a template-based architecture using OpenAI's latest structured output features and a simple prompt management system.
## Features
- Intelligent Q&A: Answers questions about professional background using resume, LinkedIn, and summary documents
- GitHub Integration: Real-time repository analysis and project showcasing
- Job Matching: LLM-powered job fit analysis with detailed skill assessments
- Contact Facilitation: Routes follow-up contact based on query type and job match quality
- Response Evaluation: Built-in quality control system to prevent hallucinations
- Template-Based Prompts: Maintainable prompt management with composition and variable substitution
- Push Notifications: Pushover integration for real-time alerts
- Web Interface: Clean Gradio-based chat interface
## Architecture
This project follows a template-based prompt architecture with clear separation of concerns:
```
personal-ai/
├── models/                  # Data models & schemas
│   ├── config.py            # Configuration classes
│   ├── evaluation.py        # Response evaluation models
│   ├── job_match.py         # Job analysis models
│   └── responses.py         # Structured response models
├── prompts/                 # Template-based prompt management
│   ├── chat_init.md         # Main AI assistant system prompt
│   ├── chat_base.md         # Base system prompt (for rerun)
│   ├── chat_rerun.md        # Response regeneration template
│   ├── evaluator.md         # Response evaluation prompt
│   ├── evaluator_with_github_context.md  # GitHub-enhanced evaluator
│   └── job_match_analysis.md             # Job matching analysis prompt
├── docs/                    # Documentation
│   └── prompt-refactoring-plan.md        # Prompt management architecture
├── me/                      # Professional documents
│   ├── resume.pdf           # Professional resume
│   ├── linkedin.pdf         # LinkedIn profile export
│   └── summary.txt          # Professional summary
├── promptkit.py             # Template rendering engine
├── career_chatbot.py        # Main application with integrated services
└── README.md                # This documentation
```
## Prompt Management System
This application features a template-based prompt management system that separates AI prompts from Python code for better maintainability and flexibility.
### Key Components

- `promptkit.py`: Template rendering engine with variable substitution
- `prompts/` directory: All AI prompts stored as markdown templates
- Template composition: Complex prompts built by composing simpler templates
- Variable substitution: Dynamic content injection using `{variable}` syntax
### Template Features

Variable substitution:

```markdown
You are an AI assistant representing {config.name}.
Current date: {current_date}
```

Template composition:

```markdown
{base_evaluator_prompt}

## GitHub Tool Results:
{github_context}
```

Conditional logic (handled in Python before rendering):

```python
# In Python code
github_tools = "Use GitHub tools for repo questions" if web_search_service else ""
vars = {"github_tools": github_tools}
```
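The rendering engine itself can stay very small. A minimal sketch of what `promptkit.py` does, assuming plain `str.format`-style substitution (the real module may add caching, composition helpers, and stricter error handling):

```python
from pathlib import Path


def render(template_path: str, variables: dict) -> str:
    """Load a markdown template and substitute {variable} placeholders.

    Minimal sketch; `{config.name}`-style attribute access works because
    str.format supports dotted lookups on the passed objects.
    """
    template = Path(template_path).read_text(encoding="utf-8")
    return template.format(**variables)
```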
### Prompt Templates

- `chat_init.md`: Main conversational AI prompt with behavioral rules
- `evaluator.md`: Response quality control and hallucination detection
- `evaluator_with_github_context.md`: Enhanced evaluator for GitHub tool responses
- `job_match_analysis.md`: Job matching analysis prompt
- `chat_rerun.md`: Response regeneration with evaluator feedback
- `chat_base.md`: Base conversational prompt without evaluation context
### Benefits

- Maintainable: Edit prompts without touching Python code
- Version Control Friendly: Clear diffs for prompt changes
- Composable: Build complex prompts from reusable components
- Consistent: Unified variable substitution approach
- Testable: Prompts can be tested independently
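Because templates are plain text, they can be exercised in a unit test with stub variables and no model call. A hypothetical sketch (the template string and `assert_fully_rendered` helper are illustrative, not part of the project):

```python
import re


def assert_fully_rendered(text: str) -> None:
    """Fail if any {placeholder} survived substitution."""
    leftover = re.findall(r"\{[\w.]+\}", text)
    assert not leftover, f"Unsubstituted placeholders: {leftover}"


# Render with stub variables only; no LLM is involved.
template = "You are an AI assistant representing {name}.\nCurrent date: {current_date}"
rendered = template.format(name="Ada Lovelace", current_date="2025-09-06")
assert_fully_rendered(rendered)
```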
## Installation

### Option 1: Using uv (recommended)

1. Install uv (if not already installed):

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   # or with pip: pip install uv
   ```

2. Clone and navigate to the project:

   ```bash
   cd personal-ai
   ```

3. Create a virtual environment and install dependencies:

   ```bash
   uv venv
   source .venv/bin/activate  # On Windows: .venv\Scripts\activate
   uv pip install -r requirements.txt

   # Alternative: install using pyproject.toml
   # uv pip install -e .
   ```
### Option 2: Using pip (traditional)

1. Clone and navigate to the project:

   ```bash
   cd personal-ai
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables. Create a `.env` file in the parent directory with:

   ```bash
   OPENAI_API_KEY=your_openai_api_key
   GEMINI_API_KEY=your_gemini_api_key    # For evaluation
   GITHUB_USERNAME=your_github_username  # Optional
   GITHUB_TOKEN=your_github_token        # Optional, for higher rate limits
   PUSHOVER_USER=your_pushover_user      # Optional
   PUSHOVER_TOKEN=your_pushover_token    # Optional
   ```

5. Prepare your documents. Place your professional documents in the `me/` directory:

   - `resume.pdf` - Your resume
   - `linkedin.pdf` - LinkedIn profile export
   - `summary.txt` - Professional summary
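At startup the application reads these variables from the environment. The sketch below shows the idea with a stdlib-only loader; in practice a library such as python-dotenv is the usual choice, and may be what `career_chatbot.py` actually uses (check `requirements.txt`):

```python
import os
from pathlib import Path


def load_env(path: str = "../.env") -> None:
    """Minimal .env loader (stdlib-only sketch, for illustration).

    Skips comments and blank lines; values already present in the real
    environment take precedence over the file.
    """
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```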
## Usage

### Basic Usage

```bash
python career_chatbot.py
```
### Programmatic Usage

```python
from models import ChatbotConfig
from career_chatbot import CareerChatbot

config = ChatbotConfig(
    name="Your Name",
    github_username="your_username",
)

chatbot = CareerChatbot(config)
chatbot.launch_interface()
```
### Prompt Customization

```python
from promptkit import render

# Custom prompt rendering
vars = {
    "config": config,
    "context": context,
    "current_date": "September 6, 2025",
}
prompt = render("prompts/chat_init.md", vars)
```
## Configuration

The `ChatbotConfig` class supports extensive customization:

```python
config = ChatbotConfig(
    name="Professional Name",
    github_username="github_user",
    resume_path="me/resume.pdf",
    linkedin_path="me/linkedin.pdf",
    summary_path="me/summary.txt",
    model="gpt-4o-mini-2024-07-18",
    evaluator_model="gemini-2.5-flash",
    job_matching_model="gpt-4o-2024-08-06",
    job_match_threshold="Good",
)
```
## AI Agent Tools

The system includes several specialized tools:

- `record_user_details`: Captures contact information for follow-up
- `evaluate_job_match`: Analyzes job fit using advanced LLM reasoning
- `search_github_repos`: Retrieves and analyzes GitHub repositories
- `get_repo_details`: Provides detailed repository information
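Each tool is exposed to the model as an OpenAI function-calling schema. An illustrative definition for one tool is shown below; the actual schemas live in `career_chatbot.py` and may differ in field names and descriptions:

```python
# Illustrative OpenAI function-calling schema (assumed shape, not the
# project's actual definition) for the contact-capture tool.
record_user_details_tool = {
    "type": "function",
    "function": {
        "name": "record_user_details",
        "description": "Capture a visitor's contact details for follow-up.",
        "parameters": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "Visitor's email address"},
                "name": {"type": "string", "description": "Visitor's name, if given"},
                "notes": {"type": "string", "description": "Context worth recording about the inquiry"},
            },
            "required": ["email"],
        },
    },
}
```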
## Job Matching

The job matching system grades fit on a 6-level hierarchy:
- Very Strong (90%+ skills): Minimal gaps, excellent fit
- Strong (70-89% skills): Few gaps, strong candidate
- Good (50-69% skills): Manageable gaps, solid fit
- Moderate (30-49% skills): Significant gaps, some foundation
- Weak (10-29% skills): Major gaps, limited relevance
- Very Weak (<10% skills): Complete domain mismatch
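The thresholds above can be summarized in a few lines. This is purely illustrative: in the real system the tier is assigned by LLM reasoning over the full job description, not by a fixed formula, and the function name is hypothetical:

```python
def job_match_tier(skill_pct: float) -> str:
    """Map a skill-coverage percentage to the 6-level match hierarchy.

    Hypothetical helper showing the documented cutoffs only.
    """
    if skill_pct >= 90:
        return "Very Strong"
    if skill_pct >= 70:
        return "Strong"
    if skill_pct >= 50:
        return "Good"
    if skill_pct >= 30:
        return "Moderate"
    if skill_pct >= 10:
        return "Weak"
    return "Very Weak"
```

The `job_match_threshold="Good"` setting in `ChatbotConfig` is presumably compared against this tier when deciding whether to encourage follow-up contact.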
## Quality Control

The evaluation system uses template-based prompts to prevent hallucinations.
### Evaluation Features
- Factual Validation: All claims verified against source documents and GitHub tool results
- Tool Usage Verification: Ensures appropriate tool selection and detects missing tool calls
- Behavioral Rules: Enforces proper contact facilitation logic
- Date Context Awareness: Proper temporal validation using system date context
- GitHub Tool Integration: Special handling for repository data and metadata
- Retry Mechanism: Automatically regenerates poor responses with evaluator feedback
### Evaluation Templates
- Base Evaluator: Strict validation against resume/LinkedIn context
- GitHub-Enhanced: Accepts repository data as legitimate additional context
- Job Matching: Specialized evaluation for technical skill assessments
### Evaluation Process

1. Structured Response Generation: The AI produces a response with reasoning and evidence
2. Context-Aware Evaluation: Template-based evaluation with current date and tool context
3. Automatic Retry: Failed responses are regenerated with specific feedback
4. Quality Assurance: Only validated responses reach the user
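The process above amounts to a generate/evaluate/retry loop. A sketch of the control flow, with illustrative function names rather than the actual API of `career_chatbot.py`:

```python
def answer_with_quality_control(question, generate, evaluate, regenerate, max_retries=2):
    """Generate a reply, evaluate it, and retry with feedback on failure.

    `generate`, `evaluate`, and `regenerate` are hypothetical callables
    standing in for the chatbot's LLM calls.
    """
    reply = generate(question)
    for _ in range(max_retries):
        verdict = evaluate(question, reply)  # e.g. {"acceptable": bool, "feedback": str}
        if verdict["acceptable"]:
            return reply
        # Feed the evaluator's critique back into regeneration
        reply = regenerate(question, reply, verdict["feedback"])
    return reply  # best effort after exhausting retries
```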
## Development

### Local Development

With uv (recommended):

```bash
# Create and activate virtual environment
uv venv
source .venv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Run the application
python career_chatbot.py

# Optional: run with development tools
ruff check .  # Linting (if configured)
```
With pip:

```bash
# Install dependencies
pip install -r requirements.txt

# Run the application
python career_chatbot.py
```
### Prompt Development

Edit prompts directly in the `prompts/` directory:

```bash
# Edit main chat prompt
vim prompts/chat_init.md

# Edit evaluator prompt
vim prompts/evaluator.md
```

Changes take effect immediately; no restart is required because prompts are loaded fresh on each request.
## Example Interactions

Professional question:

> "What experience does this person have with robotics?"

Job matching:

> "Here's a Senior Robotics Engineer position at Boston Dynamics. How well would this person fit?"

GitHub projects:

> "Can you show me some of their open source work?"
## Testing

```bash
# Test the application
python career_chatbot.py

# Test prompt rendering
python -c "from promptkit import render; print('Template system works')"

# Test model imports
python -c "from models import ChatbotConfig; print('Models loaded successfully')"
```