Commit 2f9b277 · github-actions[bot] committed
Parent: f83ffe5

Deploy from GitHub Actions
Changed files:

- .github/workflows/deploy.yml +4 -10
- README.md +293 -9
- space_repo/.env.example +17 -0
- space_repo/.github/workflows/deploy.yml +49 -0
- space_repo/.gitignore +75 -0
- space_repo/README.md +291 -16
- space_repo/app.py +641 -0
- space_repo/hirewithia_test_results.json +18 -0
- space_repo/requirements.txt +41 -3
- space_repo/space_repo/.gitattributes +35 -0
- space_repo/space_repo/Dockerfile +20 -0
- space_repo/space_repo/README.md +19 -0
- space_repo/space_repo/requirements.txt +3 -0
- space_repo/space_repo/src/streamlit_app.py +40 -0
- space_repo/vercel.json +23 -0
.github/workflows/deploy.yml
CHANGED

```diff
@@ -3,43 +3,37 @@ name: Deploy to Hugging Face Spaces
 on:
   push:
     branches:
-      - main  # Change if you use master
+      - main
 
 jobs:
   deploy:
     runs-on: ubuntu-latest
 
     steps:
-      # 1. Checkout GitHub repo
-      - name: Checkout Repository
+      - name: Checkout Repo
         uses: actions/checkout@v3
 
-      # 2. Configure git user
       - name: Configure Git
         run: |
           git config --global user.email "github-actions[bot]@users.noreply.github.com"
           git config --global user.name "github-actions[bot]"
 
-      # 3. Clone Hugging Face Space
       - name: Clone Hugging Face Space
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
         run: |
           git clone https://Jayanthk2004:$HF_TOKEN@huggingface.co/spaces/Jayanthk2004/HireWithAi space_repo
 
-      # 4. Copy all repo files to Space (avoid nested space_repo)
       - name: Copy Files
         run: |
           rsync -av --exclude='.git' ./ space_repo/
 
-      # 5. Install dependencies (optional: test build)
-      - name: Install Dependencies
+      - name: Install Dependencies (optional)
         run: |
           pip install --upgrade pip
           pip install -r space_repo/requirements.txt
 
-      # 6. Commit and push changes to Hugging Face Space
-      - name: Push to Hugging Face Space
+      - name: Commit & Push to Hugging Face
         env:
           HF_TOKEN: ${{ secrets.HF_TOKEN }}
         run: |
```
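The Copy Files step mirrors the repository into the Space checkout while excluding `.git`. As a rough illustration only (the workflow itself uses rsync, not Python), the same exclude-and-copy behaviour can be sketched like this:

```python
import shutil
import tempfile
from pathlib import Path

def copy_repo(src, dst):
    """Copy src into dst, skipping any .git directory (like rsync --exclude='.git')."""
    shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".git"), dirs_exist_ok=True)

# Hypothetical demo layout: a repo with one tracked file and a .git directory.
with tempfile.TemporaryDirectory() as tmp:
    repo = Path(tmp) / "repo"
    (repo / ".git").mkdir(parents=True)
    (repo / "app.py").write_text("print('hi')\n")
    space = Path(tmp) / "space_repo"
    copy_repo(repo, space)
    print(sorted(p.name for p in space.iterdir()))  # .git is not copied
```

Excluding `.git` here is what prevents the GitHub repo's history from clobbering the Space's own git metadata.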
README.md
CHANGED
```diff
@@ -1,10 +1,294 @@
----
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.49.1
-app_file: app.py
-pinned: false
----
```
New content:

# HireWithAI - Smart Resume Screening System 🤖

An AI-powered recruitment platform built with **multi-agent architecture** using CrewAI and GROQ API for fast inference. This system automates resume screening and ranking, reducing **time-to-hire by ~70%**.

## 🚀 Features

- **Multi-Agent Architecture**: 3 specialized AI agents working together
- **Resume Parser Agent**: Extracts structured candidate data from PDF/DOCX/TXT files
- **Skill Matcher Agent**: Matches extracted skills to job descriptions using NLP
- **Ranking Agent**: Ranks candidates by relevance and generates a shortlist
- **Fast Inference**: Powered by GROQ API for rapid processing
- **User-Friendly Interface**: Simple Streamlit web interface
- **Multiple File Formats**: Supports PDF, DOCX, and TXT resume uploads
- **Real-Time Processing**: Live progress tracking and results display
- **Exportable Results**: Download analysis results in JSON format

## 🏗️ Architecture

```
┌─────────────────────────────────────────┐
│           Streamlit Frontend            │
├─────────────────────────────────────────┤
│              CrewAI Core                │
├─────────────────┬───────────────────────┤
│  Resume Parser  │  Skill Matcher Agent  │
│      Agent      │                       │
├─────────────────┴───────────────────────┤
│             Ranking Agent               │
├─────────────────────────────────────────┤
│                GROQ API                 │
└─────────────────────────────────────────┘
```
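The three-agent flow above is sequential: parse, match, rank. A minimal illustrative pipeline follows; the function bodies are crude stand-ins (the real app delegates each stage to a CrewAI agent backed by GROQ LLM calls), and all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    skills: list = field(default_factory=list)
    score: float = 0.0

def parse_resume(text):
    # Resume Parser Agent (stand-in): first line is the name,
    # a "Skills:" line carries a comma-separated skill list.
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    skills = [s.strip() for l in lines if l.lower().startswith("skills:")
              for s in l.split(":", 1)[1].split(",")]
    return Candidate(name=lines[0], skills=skills)

def match_skills(c, required):
    # Skill Matcher Agent (stand-in): fraction of required skills present.
    have = {s.lower() for s in c.skills}
    c.score = len(have & required) / len(required) if required else 0.0
    return c

def rank(candidates):
    # Ranking Agent (stand-in): highest score first.
    return sorted(candidates, key=lambda c: c.score, reverse=True)

resumes = ["Alice\nSkills: python, sql", "Bob\nSkills: excel"]
required = {"python", "sql"}
shortlist = rank([match_skills(parse_resume(r), required) for r in resumes])
print([(c.name, c.score) for c in shortlist])
```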
## 📋 Prerequisites

- **Python 3.10-3.13** (Python 3.11 recommended)
- **Windows 10/11** (for this guide)
- **VS Code** installed
- **GROQ API Key** (free from https://console.groq.com/)

## 🛠️ Local Setup Instructions (Windows)

### Step 1: Clone/Download the Project

1. Create a new folder for your project:
   ```cmd
   mkdir HireWithAI
   cd HireWithAI
   ```

2. Save the provided files (`app.py`, `requirements.txt`, `vercel.json`) in this folder.

### Step 2: Set Up a Python Virtual Environment in VS Code

1. **Open VS Code in the project folder**:
   ```cmd
   code .
   ```

2. **Create a virtual environment**:
   - Open the Command Palette: `Ctrl+Shift+P`
   - Type: `Python: Create Environment`
   - Select: `Venv`
   - Choose your Python interpreter (3.10+)
   - Select `requirements.txt` when prompted

   **Or use the terminal**:
   ```cmd
   python -m venv venv
   ```

3. **Activate the virtual environment**:

   **In the VS Code terminal**:
   ```cmd
   venv\Scripts\activate
   ```

   **If you get an execution policy error on Windows**:
   ```powershell
   Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
   venv\Scripts\activate
   ```

4. **Verify activation** - you should see `(venv)` in your terminal prompt.
### Step 3: Install Dependencies

1. **Upgrade pip**:
   ```cmd
   python -m pip install --upgrade pip
   ```

2. **Install requirements**:
   ```cmd
   pip install -r requirements.txt
   ```

3. **Install the spaCy English model**:
   ```cmd
   python -m spacy download en_core_web_sm
   ```

### Step 4: Set Up GROQ API Key

1. **Get a GROQ API key**:
   - Visit https://console.groq.com/
   - Create a free account
   - Go to the "API Keys" section
   - Click "Create API Key"
   - Copy the generated key

2. **Create an environment file** (optional):
   Create a `.env` file in the project root:
   ```env
   GROQ_API_KEY=your_api_key_here
   ```
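At runtime the key is read back from the environment rather than hard-coded (a library such as python-dotenv is commonly used to load `.env` into the environment first). A small sketch of that lookup; `get_groq_key` is a hypothetical helper, not a function from `app.py`:

```python
import os

def get_groq_key():
    """Return the GROQ API key from the environment, failing loudly if unset."""
    key = os.environ.get("GROQ_API_KEY", "")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; add it to .env or your shell")
    return key

os.environ["GROQ_API_KEY"] = "your_api_key_here"  # demo value only
print(get_groq_key())
```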
### Step 5: Run the Application

1. **Start the Streamlit app**:
   ```cmd
   streamlit run app.py
   ```

2. **Open in browser**:
   - The app will automatically open at `http://localhost:8501`
   - If not, click the link shown in the terminal

3. **Enter your GROQ API key** in the sidebar when the app loads.

## 🌐 Vercel Deployment Instructions

### Step 1: Prepare for Deployment

1. **Install the Vercel CLI**:
   ```cmd
   npm install -g vercel
   ```

2. **Ensure all files are ready**:
   ```
   HireWithAI/
   ├── app.py
   ├── requirements.txt
   ├── vercel.json
   └── README.md
   ```

### Step 2: Initialize a Git Repository

```cmd
git init
git add .
git commit -m "Initial commit: HireWithAI app"
```

### Step 3: Deploy to Vercel

1. **Log in to Vercel**:
   ```cmd
   vercel login
   ```

2. **Deploy**:
   ```cmd
   vercel
   ```

3. **Follow the prompts**:
   - Set up and deploy? `Y`
   - Which scope? (select your account)
   - Link to existing project? `N`
   - Project name? `hirewithia` (or your preferred name)
   - Directory? `./` (current directory)

4. **Set environment variables** (in the Vercel dashboard):
   - Go to your project dashboard on vercel.com
   - Navigate to "Settings" → "Environment Variables"
   - Add `GROQ_API_KEY` with your API key value

### Step 4: Production Deployment

```cmd
vercel --prod
```

## 📖 Usage Guide

### 1. Enter Job Description
- Navigate to the "Job Description" tab
- Paste the job requirements and description
- Click "Save Job Description"

### 2. Upload Resumes
- Go to the "Upload Resumes" tab
- Upload multiple PDF/DOCX/TXT resume files
- Click "Process Resumes" to start the AI analysis

### 3. View Results
- Check the "Results" tab for:
  - Candidate rankings with scores
  - Detailed skill analysis
  - Individual candidate breakdowns
  - Downloadable results
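Since results can be downloaded as JSON, the export payload can be pictured as a list of ranked candidates. The exact schema below is an illustrative assumption, not taken from `app.py`:

```python
import json

# Hypothetical result shape for the JSON download.
results = {
    "job_title": "Data Analyst",
    "candidates": [
        {"name": "Alice", "score": 0.92, "matched_skills": ["python", "sql"]},
        {"name": "Bob", "score": 0.41, "matched_skills": ["excel"]},
    ],
}

payload = json.dumps(results, indent=2)   # what the user downloads
restored = json.loads(payload)            # round-trips without loss
print(restored["candidates"][0]["name"])
```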
## 🔧 Configuration Options

### Available GROQ Models
- **llama-3.1-70b-versatile** (recommended) - best accuracy
- **llama-3.1-8b-instant** (fastest) - quick processing
- **mixtral-8x7b-32768** - alternative model

### Supported File Formats
- **PDF** - most common resume format
- **DOCX** - Microsoft Word documents
- **TXT** - plain text files
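Routing an uploaded file to the right extractor by extension can be sketched like this; the extractor functions are placeholders (the app's actual PDF/DOCX parsing libraries are not shown in this diff):

```python
from pathlib import Path

# Placeholder extractors; a real app would call a PDF/DOCX library here.
def extract_pdf(path):  return f"pdf text from {Path(path).name}"
def extract_docx(path): return f"docx text from {Path(path).name}"
def extract_txt(path):  return Path(path).read_text()

EXTRACTORS = {".pdf": extract_pdf, ".docx": extract_docx, ".txt": extract_txt}

def extract_text(path):
    ext = Path(path).suffix.lower()
    if ext not in EXTRACTORS:
        raise ValueError(f"Unsupported resume format: {ext}")
    return EXTRACTORS[ext](path)

print(extract_text("resume.pdf"))  # routed to the PDF extractor
```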
## 🐛 Troubleshooting

### Common Issues

1. **spaCy model not found**:
   ```cmd
   python -m spacy download en_core_web_sm
   ```

2. **GROQ API errors**:
   - Verify your API key is correct
   - Check your GROQ account quota
   - Ensure a stable internet connection

3. **PDF parsing issues**:
   - Ensure PDFs are not image-only
   - Try converting to DOCX if needed
   - Check that the file is not corrupted

4. **Streamlit not starting**:
   - Ensure the virtual environment is activated
   - Check that all dependencies are installed
   - Try: `pip install streamlit --upgrade`

5. **Vercel deployment issues**:
   - Ensure `vercel.json` is configured correctly
   - Check Python version compatibility
   - Verify all environment variables are set

### Performance Tips

- Use **llama-3.1-8b-instant** for faster processing
- Process smaller batches of resumes (5-10 at a time)
- Ensure a stable internet connection for GROQ API calls
- Use SSD storage for faster file processing

## 📊 Expected Performance

- **Processing Time**: 30-60 seconds per resume
- **Accuracy**: 90%+ for structured resume data extraction
- **Supported Languages**: English (primary)
- **Concurrent Processing**: 3-5 resumes simultaneously
- **File Size Limit**: 10 MB per resume (recommended)
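The figures above (3-5 resumes at once, a 10 MB size cap) map naturally onto a small worker pool with a pre-flight size check. A minimal sketch, assuming the per-resume work is a self-contained function; `process_resume` is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 5                    # matches the 3-5 simultaneous resumes above
MAX_FILE_SIZE = 10 * 1024 * 1024   # 10 MB recommended limit

def process_resume(name_and_size):
    # Stand-in for the real parse/match/rank work on one resume.
    name, size = name_and_size
    if size > MAX_FILE_SIZE:
        return (name, "skipped: over size limit")
    return (name, "processed")

batch = [("alice.pdf", 120_000), ("bob.docx", 20_000_000), ("cara.txt", 4_000)]
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    results = list(pool.map(process_resume, batch))  # preserves input order
print(results)
```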
## 🔐 Security Notes

- **API Keys**: Never commit API keys to version control
- **Data Privacy**: Resume data is processed in-memory only
- **GROQ API**: Data is sent to GROQ servers for processing
- **Local Storage**: No permanent data storage by default

## 🚀 Production Recommendations

1. **Environment Variables**: Use `.env` files or Vercel environment variables
2. **Error Handling**: Monitor logs for processing errors
3. **Rate Limiting**: Implement request throttling for high volume
4. **Data Validation**: Validate file uploads before processing
5. **Performance Monitoring**: Track processing times and success rates

## 📞 Support

For issues and questions:
1. Check the troubleshooting section above
2. Verify all prerequisites are met
3. Ensure API keys are valid and have quota
4. Check the CrewAI documentation: https://docs.crewai.com/
5. Check the GROQ API documentation: https://console.groq.com/docs

## 📄 License

MIT License - feel free to modify and use for your projects.

---

**🤖 HireWithAI - Powered by CrewAI & GROQ API**
*Intelligent Multi-Agent Resume Screening System*
space_repo/.env.example
ADDED

```env
# HireWithAI Environment Variables
# Copy this file to .env and fill in your actual values

# GROQ API Configuration
GROQ_API_KEY=ygsk_ppvi5ZsoU6NcekD3gueDWGdyb3FY9ROnF7a1O9yx0A32180D37oV

# Optional: Default model to use
DEFAULT_MODEL=groq/llama-3.1-8b-instant

# Optional: Application settings
DEBUG=False
MAX_FILE_SIZE=10485760
MAX_CONCURRENT_REQUESTS=5

# Optional: Streamlit configuration
STREAMLIT_SERVER_PORT=8501
STREAMLIT_SERVER_HEADLESS=true
```
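Values loaded from a `.env` file arrive as plain strings, so numeric and boolean settings need explicit coercion. A small sketch of that parsing (the helper names are made up for illustration; note that `MAX_FILE_SIZE=10485760` is exactly 10 MB):

```python
import os

def env_int(name, default):
    """Read an integer setting from the environment."""
    return int(os.environ.get(name, default))

def env_bool(name, default=False):
    """Read a boolean setting; accepts 1/true/yes in any case."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

# Demo values mirroring .env.example
os.environ.update({"MAX_FILE_SIZE": "10485760", "DEBUG": "False",
                   "STREAMLIT_SERVER_HEADLESS": "true"})

print(env_int("MAX_FILE_SIZE", 0) == 10 * 1024 * 1024)
print(env_bool("DEBUG"), env_bool("STREAMLIT_SERVER_HEADLESS"))
```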
space_repo/.github/workflows/deploy.yml
ADDED

```yaml
name: Deploy to Hugging Face Spaces

on:
  push:
    branches:
      - main  # Change if you use master

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      # 1. Checkout GitHub repo
      - name: Checkout Repository
        uses: actions/checkout@v3

      # 2. Configure git user
      - name: Configure Git
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"

      # 3. Clone Hugging Face Space
      - name: Clone Hugging Face Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: |
          git clone https://Jayanthk2004:$HF_TOKEN@huggingface.co/spaces/Jayanthk2004/HireWithAi space_repo

      # 4. Copy all repo files to Space (avoid nested space_repo)
      - name: Copy Files
        run: |
          rsync -av --exclude='.git' ./ space_repo/

      # 5. Install dependencies (optional: test build)
      - name: Install Dependencies
        run: |
          pip install --upgrade pip
          pip install -r space_repo/requirements.txt

      # 6. Commit and push changes to Hugging Face Space
      - name: Push to Hugging Face Space
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: |
          cd space_repo
          git add .
          git commit -m "Deploy from GitHub Actions" || echo "No changes to commit"
          git push https://Jayanthk2004:$HF_TOKEN@huggingface.co/spaces/Jayanthk2004/HireWithAi main
```
space_repo/.gitignore
ADDED

```gitignore
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Virtual Environment
venv/
env/
ENV/
.venv/
.env/

# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Streamlit
.streamlit/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Logs
*.log

# Temporary files
temp/
tmp/
*.tmp

# Uploaded files (if storing locally)
uploads/
resumes/

# Model files (if downloaded locally)
*.model
*.pkl

# Vercel
.vercel
```
space_repo/README.md
CHANGED
|
@@ -1,19 +1,294 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 4 |
-
|
| 5 |
-
|
| 6 |
-
|
| 7 |
-
|
| 8 |
-
|
| 9 |
-
-
|
| 10 |
-
|
| 11 |
-
|
| 12 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 13 |
|
| 14 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
| 15 |
|
| 16 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 17 |
|
| 18 |
-
|
| 19 |
-
|
|
|
|
| 1 |
+
# HireWithAI - Smart Resume Screening System 🤖
|
| 2 |
+
|
| 3 |
+
An AI-powered recruitment platform built with **multi-agent architecture** using CrewAI and GROQ API for fast inference. This system automates resume screening and ranking, reducing **~70% of time-to-hire**.
|
| 4 |
+
|
| 5 |
+
## 🚀 Features
|
| 6 |
+
|
| 7 |
+
- **Multi-Agent Architecture**: 3 specialized AI agents working together
|
| 8 |
+
- **Resume Parser Agent**: Extracts structured candidate data from PDF/DOCX/TXT files
|
| 9 |
+
- **Skill Matcher Agent**: Matches extracted skills to job descriptions using NLP
|
| 10 |
+
- **Ranking Agent**: Ranks candidates based on relevance and generates shortlist
|
| 11 |
+
- **Fast Inference**: Powered by GROQ API for rapid processing
|
| 12 |
+
- **User-Friendly Interface**: Simple Streamlit web interface
|
| 13 |
+
- **Multiple File Formats**: Supports PDF, DOCX, and TXT resume uploads
|
| 14 |
+
- **Real-time Processing**: Live progress tracking and results display
|
| 15 |
+
- **Exportable Results**: Download analysis results in JSON format
|
| 16 |
+
|
| 17 |
+
## 🏗️ Architecture
|
| 18 |
+
|
| 19 |
+
```
|
| 20 |
+
┌─────────────────────────────────────────┐
|
| 21 |
+
│ Streamlit Frontend │
|
| 22 |
+
├─────────────────────────────────────────┤
|
| 23 |
+
│ CrewAI Core │
|
| 24 |
+
├─────────────────┬───────────────────────┤
|
| 25 |
+
│ Resume Parser │ Skill Matcher Agent │
|
| 26 |
+
│ Agent │ │
|
| 27 |
+
├─────────────────┼───────────────────────┤
|
| 28 |
+
│ Ranking Agent │
|
| 29 |
+
├─────────────────────────────────────────┤
|
| 30 |
+
│ GROQ API │
|
| 31 |
+
└─────────────────────────────────────────┘
|
| 32 |
+
```
|
| 33 |
+
|
| 34 |
+
## 📋 Prerequisites
|
| 35 |
+
|
| 36 |
+
- **Python 3.10-3.13** (Python 3.11 recommended)
|
| 37 |
+
- **Windows 10/11** (for this guide)
|
| 38 |
+
- **VS Code** installed
|
| 39 |
+
- **GROQ API Key** (free from https://console.groq.com/)
|
| 40 |
+
|
| 41 |
+
## 🛠️ Local Setup Instructions (Windows)
|
| 42 |
+
|
| 43 |
+
### Step 1: Clone/Download the Project
|
| 44 |
+
|
| 45 |
+
1. Create a new folder for your project:
|
| 46 |
+
```cmd
|
| 47 |
+
mkdir HireWithAI
|
| 48 |
+
cd HireWithAI
|
| 49 |
+
```
|
| 50 |
+
|
| 51 |
+
2. Save the provided files (`app.py`, `requirements.txt`, `vercel.json`) in this folder
|
| 52 |
+
|
| 53 |
+
### Step 2: Set Up Python Virtual Environment in VS Code
|
| 54 |
+
|
| 55 |
+
1. **Open VS Code in project folder**:
|
| 56 |
+
```cmd
|
| 57 |
+
code .
|
| 58 |
+
```
|
| 59 |
+
|
| 60 |
+
2. **Create Virtual Environment**:
|
| 61 |
+
- Open Command Palette: `Ctrl+Shift+P`
|
| 62 |
+
- Type: `Python: Create Environment`
|
| 63 |
+
- Select: `Venv`
|
| 64 |
+
- Choose your Python interpreter (3.10+)
|
| 65 |
+
- Select `requirements.txt` when prompted
|
| 66 |
+
|
| 67 |
+
**OR using terminal**:
|
| 68 |
+
```cmd
|
| 69 |
+
python -m venv venv
|
| 70 |
+
```
|
| 71 |
+
|
| 72 |
+
3. **Activate Virtual Environment**:
|
| 73 |
+
|
| 74 |
+
**In VS Code Terminal**:
|
| 75 |
+
```cmd
|
| 76 |
+
venv\Scripts\activate
|
| 77 |
+
```
|
| 78 |
+
|
| 79 |
+
**If you get execution policy error on Windows**:
|
| 80 |
+
```powershell
|
| 81 |
+
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
|
| 82 |
+
venv\Scripts\activate
|
| 83 |
+
```
|
| 84 |
+
|
| 85 |
+
4. **Verify activation** - you should see `(venv)` in your terminal prompt
|
| 86 |
+
|
| 87 |
+
### Step 3: Install Dependencies
|
| 88 |
+
|
| 89 |
+
1. **Upgrade pip**:
|
| 90 |
+
```cmd
|
| 91 |
+
python -m pip install --upgrade pip
|
| 92 |
+
```
|
| 93 |
+
|
| 94 |
+
2. **Install requirements**:
|
| 95 |
+
```cmd
|
| 96 |
+
pip install -r requirements.txt
|
| 97 |
+
```
|
| 98 |
+
|
| 99 |
+
3. **Install spaCy English model**:
|
| 100 |
+
```cmd
|
| 101 |
+
python -m spacy download en_core_web_sm
|
| 102 |
+
```
|
| 103 |
+
|
| 104 |
+
### Step 4: Set Up GROQ API Key
|
| 105 |
+
|
| 106 |
+
1. **Get GROQ API Key**:
|
| 107 |
+
- Visit https://console.groq.com/
|
| 108 |
+
- Create a free account
|
| 109 |
+
- Go to "API Keys" section
|
| 110 |
+
- Click "Create API Key"
|
| 111 |
+
- Copy the generated key
|
| 112 |
+
|
| 113 |
+
2. **Create environment file** (optional):
|
| 114 |
+
Create `.env` file in project root:
|
| 115 |
+
```env
|
| 116 |
+
GROQ_API_KEY=your_api_key_here
|
| 117 |
+
```
|
| 118 |
+
|
| 119 |
+
### Step 5: Run the Application
|
| 120 |
+
|
| 121 |
+
1. **Start Streamlit app**:
|
| 122 |
+
```cmd
|
| 123 |
+
streamlit run app.py
|
| 124 |
+
```
|
| 125 |
+
|
| 126 |
+
2. **Open in browser**:
|
| 127 |
+
- The app will automatically open at `http://localhost:8501`
|
| 128 |
+
- If not, click the link shown in terminal
|
| 129 |
+
|
| 130 |
+
3. **Enter your GROQ API key** in the sidebar when the app loads
|
| 131 |
+
|
| 132 |
+
## 🌐 Vercel Deployment Instructions
|
| 133 |
+
|
| 134 |
+
### Step 1: Prepare for Deployment
|
| 135 |
+
|
| 136 |
+
1. **Install Vercel CLI**:
|
| 137 |
+
```cmd
|
| 138 |
+
npm install -g vercel
|
| 139 |
+
```
|
| 140 |
+
|
| 141 |
+
2. **Ensure all files are ready**:
|
| 142 |
+
```
|
| 143 |
+
HireWithAI/
|
| 144 |
+
├── app.py
|
| 145 |
+
├── requirements.txt
|
| 146 |
+
├── vercel.json
|
| 147 |
+
└── README.md
|
| 148 |
+
```
|
| 149 |
+
|
| 150 |
+
### Step 2: Initialize Git Repository
|
| 151 |
|
| 152 |
+
```cmd
|
| 153 |
+
git init
|
| 154 |
+
git add .
|
| 155 |
+
git commit -m "Initial commit: HireWithAI app"
|
| 156 |
+
```
|
| 157 |
|
| 158 |
+
### Step 3: Deploy to Vercel
|
| 159 |
+
|
| 160 |
+
1. **Login to Vercel**:
|
| 161 |
+
```cmd
|
| 162 |
+
vercel login
|
| 163 |
+
```
|
| 164 |
+
|
| 165 |
+
2. **Deploy**:
|
| 166 |
+
```cmd
|
| 167 |
+
vercel
|
| 168 |
+
```
|
| 169 |
+
|
| 170 |
+
3. **Follow the prompts**:
|
| 171 |
+
- Set up and deploy? `Y`
|
| 172 |
+
- Which scope? (select your account)
|
| 173 |
+
- Link to existing project? `N`
|
| 174 |
+
- Project name? `hirewithia` (or your preferred name)
|
| 175 |
+
- Directory? `./` (current directory)
|
| 176 |
+
|
| 177 |
+
4. **Set environment variables** (in Vercel dashboard):
|
| 178 |
+
- Go to your project dashboard on vercel.com
|
| 179 |
+
- Navigate to "Settings" → "Environment Variables"
|
| 180 |
+
- Add: `GROQ_API_KEY` with your API key value
|
| 181 |
+
|
| 182 |
+
### Step 4: Production Deployment
|
| 183 |
+
|
| 184 |
+
```cmd
|
| 185 |
+
vercel --prod
|
| 186 |
+
```
|
| 187 |
+
|
| 188 |
+
## 📖 Usage Guide
|
| 189 |
+
|
| 190 |
+
### 1. Enter Job Description
|
| 191 |
+
- Navigate to the "Job Description" tab
|
| 192 |
+
- Paste the job requirements and description
|
| 193 |
+
- Click "Save Job Description"
|
| 194 |
+
|
| 195 |
+
### 2. Upload Resumes
|
| 196 |
+
- Go to "Upload Resumes" tab
|
| 197 |
+
- Upload multiple PDF/DOCX/TXT resume files
|
| 198 |
+
- Click "Process Resumes" to start AI analysis
|
| 199 |
+
|
| 200 |
+
### 3. View Results
|
| 201 |
+
- Check the "Results" tab for:
|
| 202 |
+
- Candidate rankings with scores
|
| 203 |
+
- Detailed skill analysis
|
| 204 |
+
- Individual candidate breakdowns
|
| 205 |
+
- Downloadable results
|
| 206 |
+
|
| 207 |
+
## 🔧 Configuration Options
|
| 208 |
+
|
| 209 |
+
### GROQ Models Available:
|
| 210 |
+
- **llama-3.1-70b-versatile** (Recommended) - Best accuracy
|
| 211 |
+
- **llama-3.1-8b-instant** (Fastest) - Quick processing
|
| 212 |
+
- **mixtral-8x7b-32768** - Alternative model
|
| 213 |
+
|
| 214 |
+
### Supported File Formats:
|
| 215 |
+
- **PDF** - Most common resume format
|
| 216 |
+
- **DOCX** - Microsoft Word documents
|
| 217 |
+
- **TXT** - Plain text files
|
| 218 |
+
|
| 219 |
+
## 🐛 Troubleshooting
|
| 220 |
+
|
| 221 |
+
### Common Issues:
|
| 222 |
+
|
| 223 |
+
1. **spaCy model not found**:
|
| 224 |
+
```cmd
|
| 225 |
+
python -m spacy download en_core_web_sm
|
| 226 |
+
```
|
| 227 |
+
|
| 228 |
+
2. **GROQ API errors**:
|
| 229 |
+
- Verify your API key is correct
|
| 230 |
+
- Check your GROQ account quota
|
| 231 |
+
- Ensure stable internet connection
|
| 232 |
+
|
| 233 |
+
3. **PDF parsing issues**:
|
| 234 |
+
- Ensure PDFs are not image-only
|
| 235 |
+
- Try converting to DOCX if needed
|
| 236 |
+
- Check file is not corrupted
|
| 237 |
+
|
| 238 |
+
4. **Streamlit not starting**:
|
| 239 |
+
- Ensure virtual environment is activated
|
| 240 |
+
- Check all dependencies are installed
|
| 241 |
+
- Try: `pip install streamlit --upgrade`
|
| 242 |
+
|
| 243 |
+
5. **Vercel deployment issues**:
|
| 244 |
+
- Ensure `vercel.json` is configured correctly
|
| 245 |
+
- Check Python version compatibility
|
| 246 |
+
- Verify all environment variables are set
|
| 247 |
+
### Performance Tips:

- Use **llama-3.1-8b-instant** for faster processing
- Process smaller batches of resumes (5-10 at a time)
- Ensure a stable internet connection for GROQ API calls
- Use SSD storage for faster file processing

## 📊 Expected Performance

- **Processing Time**: 30-60 seconds per resume
- **Accuracy**: 90%+ for structured resume data extraction
- **Supported Languages**: English (primary)
- **Concurrent Processing**: 3-5 resumes simultaneously
- **File Size Limit**: 10 MB per resume (recommended)

## 🔐 Security Notes

- **API Keys**: Never commit API keys to version control
- **Data Privacy**: Resume data is processed in-memory only
- **GROQ API**: Resume data is sent to GROQ servers for processing
- **Local Storage**: No permanent data storage by default

## 🚀 Production Recommendations

1. **Environment Variables**: Use `.env` files or Vercel environment variables
2. **Error Handling**: Monitor logs for processing errors
3. **Rate Limiting**: Implement request throttling for high volume
4. **Data Validation**: Validate file uploads before processing
5. **Performance Monitoring**: Track processing times and success rates
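Recommendation 1 can be as simple as reading the key from the process environment at startup instead of hard-coding it; a minimal sketch using only the standard library (the helper name is illustrative, and loading a `.env` file via `python-dotenv` is one possible way to populate the variable):

```python
import os

def load_groq_key() -> str:
    """Fetch the GROQ API key from the environment rather than hard-coding it."""
    key = os.environ.get("GROQ_API_KEY", "")
    if not key:
        raise RuntimeError(
            "GROQ_API_KEY is not set - export it, or load a .env file at "
            "startup (e.g. with python-dotenv) before the app initializes."
        )
    return key
```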
## 📞 Support

For issues and questions:

1. Check the troubleshooting section above
2. Verify all prerequisites are met
3. Ensure API keys are valid and have quota
4. Check the CrewAI documentation: https://docs.crewai.com/
5. Check the GROQ API documentation: https://console.groq.com/docs

## 📄 License

MIT License - Feel free to modify and use for your projects.

---

**🤖 HireWithAI - Powered by CrewAI & GROQ API**

*Intelligent Multi-Agent Resume Screening System*
space_repo/app.py
ADDED
@@ -0,0 +1,641 @@
"""
HireWithAI - Smart Resume Screening System
A CrewAI-powered multi-agent recruitment platform using GROQ API for fast inference.

This single-file application includes three AI agents:
1. Resume Parser Agent - Extracts structured data from resumes
2. Skill Matcher Agent - Matches skills to job descriptions
3. Ranking Agent - Ranks candidates based on relevance

Author: AI Developer
License: MIT
"""

import streamlit as st
import tempfile
import os
import json
import pandas as pd
from pathlib import Path
from typing import Dict, List, Any
import warnings
import time
import asyncio
from datetime import datetime, timedelta

# Suppress warnings
warnings.filterwarnings("ignore")

# Core imports for AI agents
try:
    from crewai import Agent, Task, Crew, LLM
    from crewai.project import CrewBase, agent, crew, task
    from groq import Groq
    import spacy
    from spacy.matcher import PhraseMatcher
    import PyPDF2
    import docx2txt
    import re
    import hashlib
except ImportError as e:
    st.error(f"Missing required dependency: {e}")
    st.stop()

# Configuration - Single model only
GROQ_MODEL = "llama-3.1-8b-instant"
MODEL_DISPLAY_NAME = "Llama 3.1 8B (Fastest)"

# Rate limiting configuration
RATE_LIMIT_DELAY = 3  # seconds between requests
MAX_RETRIES = 3
BATCH_SIZE = 2  # Process resumes in smaller batches

# Initialize session state
if 'processed_resumes' not in st.session_state:
    st.session_state.processed_resumes = []
if 'ranked_candidates' not in st.session_state:
    st.session_state.ranked_candidates = []
if 'job_description' not in st.session_state:
    st.session_state.job_description = ""

class RateLimitHandler:
    """Handle rate limiting for API calls"""

    def __init__(self, delay=RATE_LIMIT_DELAY):
        self.delay = delay
        self.last_request_time = 0

    def wait_if_needed(self):
        """Wait if necessary to respect rate limits"""
        current_time = time.time()
        time_since_last_request = current_time - self.last_request_time

        if time_since_last_request < self.delay:
            sleep_time = self.delay - time_since_last_request
            st.info(f"⏳ Rate limit protection: waiting {sleep_time:.1f}s...")
            time.sleep(sleep_time)

        self.last_request_time = time.time()

class ResumeProcessor:
    """Utility class for processing resume files"""

    @staticmethod
    def extract_text_from_pdf(file_buffer) -> str:
        """Extract text from PDF file"""
        try:
            pdf_reader = PyPDF2.PdfReader(file_buffer)
            text = ""
            for page in pdf_reader.pages:
                text += page.extract_text() + "\n"
            return text.strip()
        except Exception as e:
            st.error(f"Error reading PDF: {e}")
            return ""

    @staticmethod
    def extract_text_from_docx(file_buffer) -> str:
        """Extract text from DOCX file"""
        try:
            with tempfile.NamedTemporaryFile(delete=False, suffix='.docx') as tmp_file:
                tmp_file.write(file_buffer.read())
                tmp_file.flush()
            text = docx2txt.process(tmp_file.name)
            os.unlink(tmp_file.name)
            return text.strip()
        except Exception as e:
            st.error(f"Error reading DOCX: {e}")
            return ""

    @staticmethod
    def extract_text_from_txt(file_buffer) -> str:
        """Extract text from TXT file"""
        try:
            return file_buffer.read().decode('utf-8').strip()
        except Exception as e:
            st.error(f"Error reading TXT: {e}")
            return ""

class HireWithAICrew:
    """Main CrewAI multi-agent system for resume screening with rate limiting"""

    def __init__(self, groq_api_key: str):
        """Initialize the crew with GROQ API and rate limiting"""
        self.llm = LLM(
            model=f"groq/{GROQ_MODEL}",
            api_key=groq_api_key,
            temperature=0.1
        )

        self.rate_limiter = RateLimitHandler()

        # Initialize spaCy for NLP operations
        try:
            self.nlp = spacy.load("en_core_web_sm")
        except OSError:
            st.error("spaCy English model not found. Please install it with: python -m spacy download en_core_web_sm")
            st.stop()

    def create_resume_parser_agent(self) -> Agent:
        """Create the Resume Parser Agent"""
        return Agent(
            role='Resume Parser Specialist',
            goal='Extract key candidate information from resume text efficiently',
            backstory="""You are an expert resume parser focused on extracting essential
            candidate information quickly and accurately. You prioritize the most important
            details: name, contact info, skills, experience, and education.""",
            llm=self.llm,
            verbose=True,
            allow_delegation=False
        )

    def create_skill_matcher_agent(self) -> Agent:
        """Create the Skill Matcher Agent"""
        return Agent(
            role='Skill Matching Expert',
            goal='Efficiently match candidate skills with job requirements',
            backstory="""You are a skilled matcher who quickly identifies relevant skills
            and calculates match percentages. You focus on the most critical skills
            and provide concise, actionable insights.""",
            llm=self.llm,
            verbose=True,
            allow_delegation=False
        )

    def create_ranking_agent(self) -> Agent:
        """Create the Ranking Agent"""
        return Agent(
            role='Candidate Ranking Analyst',
            goal='Rank candidates efficiently based on key criteria',
            backstory="""You are a recruitment analyst who creates fast, accurate candidate
            rankings. You focus on the most important factors: skills match, experience
            relevance, and overall job fit.""",
            llm=self.llm,
            verbose=True,
            allow_delegation=False
        )

    def create_concise_parsing_task(self, resume_text: str, filename: str) -> Task:
        """Create a more concise parsing task to reduce token usage"""
        # Truncate resume text if too long to save tokens
        max_chars = 2000
        truncated_text = resume_text[:max_chars] + "..." if len(resume_text) > max_chars else resume_text

        return Task(
            description=f"""
            Extract key information from this resume in JSON format:

            Resume: {filename}
            Text: {truncated_text}

            Extract:
            1. Name and contact (email, phone)
            2. Key skills (top 5-8 most relevant)
            3. Experience summary (years, key roles)
            4. Education (degree, field)
            5. Notable achievements

            Keep response concise and structured.
            """,
            expected_output="Concise JSON with essential candidate information",
            agent=self.create_resume_parser_agent()
        )

    def create_concise_skill_matching_task(self, resume_data: str, job_description: str) -> Task:
        """Create a more concise skill matching task"""
        # Truncate job description if too long
        max_jd_chars = 1000
        truncated_jd = job_description[:max_jd_chars] + "..." if len(job_description) > max_jd_chars else job_description

        return Task(
            description=f"""
            Analyze skill match between candidate and job:

            Job Requirements: {truncated_jd}
            Candidate Data: {resume_data}

            Provide:
            1. Match percentage (0-100%)
            2. Top 5 matching skills
            3. Top 3 missing critical skills
            4. Experience level fit (1-10)

            Keep analysis concise and focused.
            """,
            expected_output="Concise skill matching analysis in JSON format",
            agent=self.create_skill_matcher_agent()
        )

    def safe_crew_execution(self, crew, max_retries=MAX_RETRIES):
        """Execute crew with retry logic for rate limits"""
        for attempt in range(max_retries):
            try:
                self.rate_limiter.wait_if_needed()
                result = crew.kickoff()
                return result
            except Exception as e:
                error_str = str(e).lower()
                if "rate limit" in error_str or "ratelimit" in error_str:
                    if attempt < max_retries - 1:
                        wait_time = (attempt + 1) * 5  # Progressive backoff
                        st.warning(f"Rate limit hit. Retrying in {wait_time}s... (Attempt {attempt + 1}/{max_retries})")
                        time.sleep(wait_time)
                        continue
                    else:
                        st.error("Maximum retries reached. Please try again later or upgrade your Groq plan.")
                        return None
                else:
                    st.error(f"Error during processing: {e}")
                    return None
        return None

    def process_resumes_with_batching(self, resumes_data: List[Dict], job_description: str) -> Dict:
        """Process resumes in smaller batches to avoid rate limits"""
        try:
            all_parsed_resumes = []
            all_skill_analysis = []

            # Process resumes in batches
            total_resumes = len(resumes_data)
            batches = [resumes_data[i:i + BATCH_SIZE] for i in range(0, total_resumes, BATCH_SIZE)]

            progress_bar = st.progress(0)
            progress_text = st.empty()

            for batch_idx, batch in enumerate(batches):
                progress_text.text(f"Processing batch {batch_idx + 1} of {len(batches)}...")

                # Step 1: Parse resumes in current batch
                for resume_idx, resume_data in enumerate(batch):
                    overall_progress = (batch_idx * BATCH_SIZE + resume_idx) / total_resumes
                    progress_bar.progress(overall_progress)

                    parsing_task = self.create_concise_parsing_task(
                        resume_data['text'],
                        resume_data['filename']
                    )

                    parsing_crew = Crew(
                        agents=[self.create_resume_parser_agent()],
                        tasks=[parsing_task],
                        verbose=False  # Reduce verbosity to save tokens
                    )

                    result = self.safe_crew_execution(parsing_crew)
                    if result:
                        all_parsed_resumes.append({
                            'filename': resume_data['filename'],
                            'parsed_data': result.raw,
                            'original_text': resume_data['text'][:500]  # Store only first 500 chars
                        })

                # Step 2: Skill matching for current batch
                for resume in all_parsed_resumes[-len(batch):]:  # Only process newly added resumes
                    skill_task = self.create_concise_skill_matching_task(
                        resume['parsed_data'],
                        job_description
                    )

                    skill_crew = Crew(
                        agents=[self.create_skill_matcher_agent()],
                        tasks=[skill_task],
                        verbose=False
                    )

                    result = self.safe_crew_execution(skill_crew)
                    if result:
                        all_skill_analysis.append({
                            'filename': resume['filename'],
                            'skill_analysis': result.raw,
                            'parsed_data': resume['parsed_data']
                        })

            progress_bar.progress(1.0)
            progress_text.text("Finalizing rankings...")

            # Step 3: Final ranking (only if we have successful analyses)
            if all_skill_analysis:
                # Create a more concise ranking task
                ranking_task = Task(
                    description=f"""
                    Rank these candidates for the job. Provide top 5 ranked candidates with scores.

                    Job: {job_description[:500]}...

                    Candidates: {json.dumps([sa['skill_analysis'] for sa in all_skill_analysis[:5]], indent=1)}

                    Provide concise ranking with:
                    1. Candidate name and rank
                    2. Overall score (0-100)
                    3. Key strengths (2-3 points)
                    4. Brief recommendation
                    """,
                    expected_output="Concise candidate ranking with top recommendations",
                    agent=self.create_ranking_agent()
                )

                ranking_crew = Crew(
                    agents=[self.create_ranking_agent()],
                    tasks=[ranking_task],
                    verbose=False
                )

                ranking_result = self.safe_crew_execution(ranking_crew)
                final_ranking = ranking_result.raw if ranking_result else "Ranking failed due to rate limits"
            else:
                final_ranking = "No candidates could be analyzed due to rate limits"

            return {
                'parsed_resumes': all_parsed_resumes,
                'skill_analysis': all_skill_analysis,
                'final_ranking': final_ranking
            }

        except Exception as e:
            st.error(f"Error processing resumes: {e}")
            return {}

def main():
    """Main Streamlit application"""

    # Page config
    st.set_page_config(
        page_title="HireWithAI - Smart Resume Screening",
        page_icon="🤖",
        layout="wide",
        initial_sidebar_state="expanded"
    )

    # Custom CSS
    st.markdown("""
    <style>
    .main-header {
        text-align: center;
        padding: 2rem 0;
        background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
        color: white;
        border-radius: 10px;
        margin-bottom: 2rem;
    }
    .rate-limit-info {
        background: #fff3cd;
        border: 1px solid #ffeaa7;
        border-radius: 8px;
        padding: 1rem;
        margin: 1rem 0;
    }
    </style>
    """, unsafe_allow_html=True)

    # Header
    st.markdown("""
    <div class="main-header">
        <h1>🤖 HireWithAI - Smart Resume Screening System</h1>
        <p>AI-Powered Multi-Agent Recruitment Platform</p>
        <p><i>Reduce 70% of time-to-hire with automated resume screening and ranking</i></p>
    </div>
    """, unsafe_allow_html=True)

    # Sidebar configuration
    with st.sidebar:
        st.header("⚙️ Configuration")

        # GROQ API Key
        groq_api_key = st.text_input(
            "GROQ API Key",
            type="password",
            help="Get your free API key from https://console.groq.com/"
        )

        if not groq_api_key:
            st.warning("Please enter your GROQ API key to continue")
            st.info("💡 **Get Free GROQ API Key:**\n1. Visit https://console.groq.com/\n2. Create an account\n3. Generate API key\n4. Paste it above")
            return

        # Model info (fixed model)
        st.markdown("### 🤖 AI Model")
        st.info(f"**Using:** {MODEL_DISPLAY_NAME}")
        st.caption("Optimized for speed and efficiency")

        # Rate limiting info
        st.markdown("### ⚡ Rate Limiting")
        st.markdown("""
        <div class="rate-limit-info">
        <strong>🛡️ Built-in Protection:</strong><br>
        • Smart batch processing<br>
        • Automatic retry logic<br>
        • Progressive delays<br>
        • Token usage optimization
        </div>
        """, unsafe_allow_html=True)

        # Processing statistics
        st.header("📊 Statistics")
        col1, col2 = st.columns(2)
        with col1:
            st.metric("Resumes Processed", len(st.session_state.processed_resumes))
        with col2:
            st.metric("Candidates Ranked", len(st.session_state.ranked_candidates) if st.session_state.ranked_candidates else 0)

    # Main content tabs
    tab1, tab2, tab3 = st.tabs(["📝 Job Description", "📄 Upload Resumes", "🏆 Results"])

    # Tab 1: Job Description
    with tab1:
        st.header("📝 Job Description")
        st.write("Paste the job description that candidates will be evaluated against:")

        job_description = st.text_area(
            "Job Description",
            value=st.session_state.job_description,
            height=300,
            placeholder="""Example:
We are looking for a Senior Python Developer with experience in:
- 5+ years of Python development
- Experience with Django/Flask frameworks
- Knowledge of databases (PostgreSQL, MongoDB)
- Understanding of REST APIs and microservices
- Experience with cloud platforms (AWS, GCP, Azure)
- Strong problem-solving skills
- Bachelor's degree in Computer Science or related field
"""
        )

        if st.button("💾 Save Job Description", type="primary"):
            st.session_state.job_description = job_description
            st.success("✅ Job description saved successfully!")

    # Tab 2: Resume Upload
    with tab2:
        st.header("📄 Upload Candidate Resumes")

        if not st.session_state.job_description:
            st.warning("⚠️ Please add a job description first in the 'Job Description' tab")
            return

        # Rate limiting advice
        st.markdown("""
        <div class="rate-limit-info">
        <strong>💡 Tips for Best Results:</strong><br>
        • Upload 2-5 resumes at a time for optimal processing<br>
        • Larger batches will be automatically split and processed with delays<br>
        • The system includes built-in rate limit protection<br>
        </div>
        """, unsafe_allow_html=True)

        # File uploader
        uploaded_files = st.file_uploader(
            "Choose resume files",
            type=['pdf', 'docx', 'txt'],
            accept_multiple_files=True,
            help="Supported formats: PDF, DOCX, TXT. Recommended: 2-5 files per batch"
        )

        if uploaded_files:
            file_count = len(uploaded_files)
            st.write(f"📁 **{file_count} files uploaded**")

            if file_count > 5:
                st.info(f"ℹ️ You've uploaded {file_count} files. They will be processed in batches of {BATCH_SIZE} with automatic delays to respect rate limits.")

            # Display uploaded files
            for file in uploaded_files:
                st.write(f"• {file.name} ({file.size} bytes)")

        # Process button
        if st.button("🚀 Process Resumes", type="primary", disabled=not uploaded_files):
            if not groq_api_key:
                st.error("Please provide GROQ API key")
                return

            with st.spinner("🔄 Processing resumes with rate limit protection... This may take a few minutes..."):
                try:
                    # Initialize the crew
                    crew = HireWithAICrew(groq_api_key)

                    # Extract text from uploaded files
                    resumes_data = []
                    processor = ResumeProcessor()

                    for uploaded_file in uploaded_files:
                        file_extension = uploaded_file.name.split('.')[-1].lower()

                        # Reset file pointer
                        uploaded_file.seek(0)

                        if file_extension == 'pdf':
                            text = processor.extract_text_from_pdf(uploaded_file)
                        elif file_extension == 'docx':
                            text = processor.extract_text_from_docx(uploaded_file)
                        elif file_extension == 'txt':
                            text = processor.extract_text_from_txt(uploaded_file)
                        else:
                            st.warning(f"Unsupported file format: {uploaded_file.name}")
                            continue

                        if text:
                            resumes_data.append({
                                'filename': uploaded_file.name,
                                'text': text
                            })

                    if not resumes_data:
                        st.error("No valid resumes could be processed")
                        return

                    # Process through AI agents with batching
                    st.info("🤖 Running AI agents with intelligent batching and rate limiting...")
                    results = crew.process_resumes_with_batching(resumes_data, st.session_state.job_description)

                    if results:
                        st.session_state.processed_resumes = results.get('parsed_resumes', [])
                        st.session_state.ranked_candidates = results.get('final_ranking', '')

                        st.success("✅ Resume processing completed successfully!")
                        st.info("📋 Check the 'Results' tab to view the analysis")
                    else:
                        st.error("Failed to process resumes due to rate limits or API issues")

                except Exception as e:
                    st.error(f"Error: {str(e)}")

    # Tab 3: Results
    with tab3:
        st.header("🏆 Results & Rankings")

        if not st.session_state.processed_resumes:
            st.info("📋 No results available. Please process resumes first.")
            return

        # Display results
        col1, col2 = st.columns([2, 1])

        with col1:
            st.subheader("📊 Candidate Rankings")

            if st.session_state.ranked_candidates:
                st.markdown("### 🥇 Final Rankings")
                st.text_area(
                    "Ranking Results",
                    value=st.session_state.ranked_candidates,
                    height=400
                )

                # Download results
                if st.button("💾 Download Results"):
                    results_data = {
                        'job_description': st.session_state.job_description,
                        'processed_resumes': st.session_state.processed_resumes,
                        'final_ranking': st.session_state.ranked_candidates,
                        'timestamp': datetime.now().isoformat(),
                        'model_used': MODEL_DISPLAY_NAME
                    }

                    st.download_button(
                        label="📥 Download Complete Results (JSON)",
                        data=json.dumps(results_data, indent=2),
                        file_name=f"hirewithia_results_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
                        mime="application/json"
                    )

        with col2:
            st.subheader("📈 Summary")

            if st.session_state.processed_resumes:
                st.metric("Total Candidates", len(st.session_state.processed_resumes))

                summary_data = []
                for resume in st.session_state.processed_resumes:
                    summary_data.append({
                        'Filename': resume['filename'][:20] + "..." if len(resume['filename']) > 20 else resume['filename'],
                        'Status': '✅ Processed'
                    })

                df = pd.DataFrame(summary_data)
                st.dataframe(df, use_container_width=True)

        # Individual candidate details
        if st.session_state.processed_resumes:
            st.subheader("📋 Individual Candidate Analysis")

            for i, resume in enumerate(st.session_state.processed_resumes):
                with st.expander(f"👤 {resume['filename']}"):
                    st.markdown("**Parsed Data:**")
                    st.text_area(
                        f"Analysis for {resume['filename']}",
                        value=resume['parsed_data'],
                        height=200,
                        key=f"resume_{i}"
                    )

    # Footer
    st.markdown("---")
    st.markdown(f"""
    <div style='text-align: center; color: #666; margin-top: 2rem;'>
        <p>🤖 <b>HireWithAI</b> - Powered by CrewAI & GROQ API</p>
        <p><i>Using {MODEL_DISPLAY_NAME} with Rate Limit Protection</i></p>
    </div>
    """, unsafe_allow_html=True)

if __name__ == "__main__":
    main()
space_repo/hirewithia_test_results.json
ADDED
@@ -0,0 +1,18 @@
{
  "job_description": "We are looking for a highly motivated AI Intern to join our team.\n\n\nKey Responsibilities\n\nCollaborate with sales and product teams to gather customer requirements and design AI-powered solution architectures.\nDeliver compelling technical demos, workshops, and proofs-of-concept (POCs) using LLMs, RAG pipelines, and prompt engineering.\nTranslate technical concepts into clear, business value\u2013focused narratives that resonate with both technical and executive stakeholders.\nSupport enterprise clients in overcoming AI adoption challenges such as security, scalability, ROI, and compliance.\nBuild and maintain presales assets including reference architectures, demo scripts, pitch decks, and technical documentation.\nAct as a bridge between engineering teams and business leaders, ensuring alignment and clarity.\n\n\nRequired Skills\n\nHands-on experience with Generative AI & LLMs (fine-tuning, Retrieval-Augmented Generation, prompt engineering).\nProficiency in Python programming (APIs, frameworks, and problem-solving).\nFamiliarity with AI frameworks like LangChain, Hugging Face, OpenAI APIs.\nExcellent communication, presentation, and storytelling abilities.\nProven ability to explain complex AI concepts to both technical and non-technical stakeholders.\nStrong demonstration skills, with the ability to showcase ROI through real-world use cases.\nHigh IQ & EQ: strong curiosity, active listening, and the ability to ask the right questions at the right time.\n\n\nPreferred/Teachable Skills\n\nStrategic & Business Acumen: understanding of enterprise AI adoption challenges (security, compliance, scalability).\nSolution Architecture & Integration: exposure to enterprise systems, APIs, and cloud environments.\nAwareness of go-to-market strategies and customer enablement practices.\n\n\nWhat You\u2019ll Gain\n\nHands-on experience designing and delivering enterprise-grade AI solutions.\nDirect exposure to client interactions, workshops, and presales strategy.\nMentorship from senior AI, product, and presales professionals.\nOpportunity for conversion to a full-time AI Presales Engineer role based on performance.\n\n\n\n\nIf you believe you are qualified and are looking forward to setting your career on a fast-track, apply by submitting a few paragraphs explaining why you believe you are the right person for this role.\n\n\n\nTo know more about xyz, visit our website: www.xyz.com\n\n\n\nIf you believe you are qualified and are looking forward to setting your career on a fast-track, apply by submitting a few paragraphs explaining why you believe you are the right person for this role.To know more about xyz, visit our website: www.xyz.com\n\n\nAbout xyz:\n\nxyz is a next gen AI consulting firm on track to become one of the most admired brands in the world for \"AI done right\". Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve.\n\n\nAt xyz, we build custom AI solutions that produce revolutionary outcomes for enterprises worldwide. Specializing in \"AI Done Right,\" we leverage our expertise and proprietary IP to transform operations and help achieve business goals efficiently.\n\n\nWe are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. We were also awarded - AI Solution Provider of the Year by The AI Summit 2023, Platinum sponsor at Advantage DoD 2024 Symposium and a lot more exciting stuff! While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful ROI-generating innovation at a guaranteed price for each client that we serve.\n\n\nOur thought leader, Luv Tulsidas, wrote and published a book in collaboration with Forbes, \u201cFailing Fast? Secrets to succeed fast with AI\u201d. Refer here for more details on the content - https://www.luvtulsidas.com/\n\nLet's explore further!\n\nUncover our unique AI accelerators with us:\n\n1. Enterprise LLM Studio: Our no-code DIY AI studio for enterprises. Choose an LLM, connect it to your data, and create an expert-level agent in 20 minutes.\n2. AppMod. AI: Modernizes ancient tech stacks quickly, achieving over 80% autonomy for major brands!\n3. ComputerVision. AI: Our ComputerVision. AI Offers customizable Computer Vision and Audio AI models, plus DIY tools and a Real-Time Co-Pilot for human-AI collaboration!\n4. Robotics and Edge Device Fabrication: Provides comprehensive robotics, hardware fabrication, and AI-integrated edge design services.\n5. RLEF AI Platform: Our proven Reinforcement Learning with Expert Feedback (RLEF) approach bridges Lab-Grade AI to Real-World AI.",
"processed_resumes": [
{
"filename": "pranamya_Fullstack_development .pdf",
"parsed_data": "```json\n{\n \"name\": \"candidate 1\",\n \"contact\": {\n \"email\": \"candidate 1@gmail.com\",\n \"phone\": \"+91 9381324867\"\n },\n \"key_skills\": [\n \"Python\",\n \"Full Stack\",\n \"Data Analysis\",\n \"UI/UX design\",\n \"Django\",\n \"JavaScript\",\n \"HTML\",\n \"CSS\"\n ],\n \"experience\": {\n \"summary\": \"Group Project(B.Tech) Jan 2024 - April 2024\",\n \"years\": \"1 year\",\n \"key_roles\": \"Python Full Stack Developer\"\n },\n \"education\": [\n {\n \"degree\": \"B.Tech\",\n \"field\": \"Computer Science with Artificial Intelligence\",\n \"institution\": \"Vignan\u2019s Institute of Information Technology(VIIT)\",\n \"location\": \"Visakhapatnam\",\n \"expected_year\": \"2026\"\n },\n {\n \"degree\": \"Intermediate\",\n \"field\": \"MPC\",\n \"institution\": \"Narayana Junior College\",\n \"location\": \"\",\n \"year\": \"2020 - 2022\"\n },\n {\n \"degree\": \"School\",\n \"field\": \"CBSE\",\n \"institution\": \"SR Digi School\",\n \"location\": \"\",\n \"year\": \"2010 - 2020\"\n }\n ],\n \"notable_achievements\": [\n {\n \"project\": \"Hospital Management System\",\n \"description\": \"Developed a hospital management system that improved patient record management, using HTM,CSS,and JavaScript as frontend,Python as backend,and Django as web server framework . Integrated a secure login system, ensuring role-based access control for doctors and admins.\"\n },\n {\n \"project\": \"Personalized Career Development Platform\",\n \"description\": \"Developed a Django-based career platform that provides person- alized job recommendations and skill-building courses using machine learning. Created a mentor-mentee feature for users to connect with industry professionals for guidance and career growth.\"\n },\n {\n \"project\": \"Personalized Mental Health Journal with AI Insights\",\n \"description\": \"Developed a journaling platform with Django and AI- powered sentiment analysis using NLTK. 
Created visual mood trends using Chart.js and Matplotlib for actionable insights. Secured user data with robust authentication and encryption practices.\"\n }\n ]\n}\n```",
"original_text": "candidate 1\nPython Full Stack\u2014Data Analysis\u2014UI/UX design\ncandidate 1@gmail.com \u22c4linkedin.com/in/dmspranamya \u22c4+91 9381324867\nOBJECTIVE\nPython full-stack developer with hands-on experience in developing end-to-end web applications using Django and\nJavaScript frameworks. Adept at designing RESTful APIs, managing relational and non-relational databases, and\noptimizing application performance. Seeking to leverage my skills to contribute to impactful projects and grow as a\ndeveloper.\nEDUCA"
},
{
"filename": "Pranamya_AI_intern_resume.pdf",
"parsed_data": "```json\n{\n \"name\": \"candidate 2\",\n \"contact\": {\n \"email\": \"candidate 1@gmail.com\",\n \"linkedin\": \"linkedin.com/in/dmspranamya\",\n \"github\": \"github.com/candidate 2\"\n },\n \"key_skills\": [\n \"Python\",\n \"Agentic AI\",\n \"LLMs\",\n \"RAG Pipelines\",\n \"CrewAI\",\n \"langChain\",\n \"n8n\",\n \"Problem Solving\",\n \"Team Work\",\n \"Time Management\",\n \"Adaptability\"\n ],\n \"experience_summary\": {\n \"years\": \"Intern\",\n \"key_roles\": \"Full Stack developer\u2013UI/UX designer\u2013AI Agent developer\"\n },\n \"education\": [\n {\n \"degree\": \"B.Tech\",\n \"field\": \"Computer Science with Artificial Intelligence\",\n \"institution\": \"Vignan\u2019s Institute of Information Technology(VIIT)\",\n \"location\": \"Visakhapatnam\",\n \"expected_year\": \"2026\"\n },\n {\n \"degree\": \"Intermediate\",\n \"field\": \"MPC\",\n \"institution\": \"Narayana Junior College\",\n \"location\": \"\",\n \"year\": \"2020 - 2022\"\n },\n {\n \"degree\": \"School\",\n \"field\": \"CBSE\",\n \"institution\": \"SR Digi School\",\n \"location\": \"\",\n \"year\": \"2010 - 2020\"\n }\n ],\n \"notable_achievements\": [\n {\n \"project\": \"Pitch & Invest Platform\",\n \"description\": \"Developed Pitch and Invest Platform, a full-stack web application using the MERN stack that enables entrepreneurs to submit business pitches and investors to browse and fund startups. Implemented role-based authentication,and deployed the app for scalability and performance. [Live Demo]\"\n },\n {\n \"project\": \"TalentSift AI \u2013 Smart Resume Screening System\",\n \"description\": \"AI-powered recruitment platform built with multi-agent architecture (Resume Parser, Skill Matcher, Ranking agents) using CrewAI and NLP. 
Reduces 70 percent in time-to-hire, streamlined hiring with automated resume analysis and ranking.\"\n },\n {\n \"event\": \"AI Agents Bootcamp: Think, Act, Learn\",\n \"description\": \"Organized and delivered a 3-day AI Agents Bootcamp: Think, Act, Learn through the club initiative, attended by 50+ participants. Conducted sessions on AI foundations, LLMs, AI Agents, no code workflows (n8n), RAG pipelines, and the CrewAI framework.\"\n }\n ]\n}\n```",
"original_text": "candidate 2\nFull Stack developer\u2013UI/UX designer\u2013AI Agent developer\ncandidate 2@gmail.com \u22c4linkedin.com/in/dmspranamya \u22c4github.com/candidate 2\nOBJECTIVE\nI am an aspiring AI Agent Developer with strong foundations in LLMs, RAG pipelines, and multi-agent frameworks\nlike CrewAI. Passionate about building intelligent, privacy-conscious systems. I am seeking an opportunity to apply\nmy skills and creativity in developing impactful AI agents that enhance user experiences and drive innovatio"
}
],
"final_ranking": "**Candidate Ranking for AI Presales Intern Position**\n\n| Rank | Candidate | Overall Score (0-100) | Key Strengths | Brief Recommendation |\n| --- | --- | --- | --- | --- |\n| 1 | Candidate 2 | 85 | Strong match in LLMs, RAG Pipelines, and relevant experience level (6) | Highly recommended for the role, with some minor gaps in Generative AI solutions and prompt engineering. |\n| 2 | Candidate 1 | 62 | Good match in Python, Django, and relevant experience level (4) | Recommended for the role, but requires development in Generative AI, LLMs, and RAG pipelines. |\n\n**Ranking Explanation:**\n\nCandidate 2 has a higher overall score due to a stronger match in critical skills (LLMs, RAG Pipelines) and a higher experience level fit. Although they have some gaps in Generative AI solutions and prompt engineering, their strengths outweigh their weaknesses.\n\nCandidate 1 has a lower overall score due to a lower match percentage and a lower experience level fit. However, they have a good match in Python and Django, making them a viable candidate with some development needs.\n\n**Recommendations:**\n\nFor Candidate 2, provide training or mentorship in Generative AI solutions and prompt engineering to bridge the gaps.\n\nFor Candidate 1, provide training or mentorship in Generative AI, LLMs, and RAG pipelines to develop their skills and increase their overall score.",
"timestamp": "2025-09-04T18:36:41.077822",
"model_used": "Llama 3.1 8B (Fastest)"
}
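Note that each `parsed_data` field above stores JSON wrapped in markdown code fences (` ```json … ``` `), so a consumer has to unfence the string before parsing it a second time. A minimal stdlib sketch of that double decode; the `unfence` helper and the inline `record` are illustrative, not part of the repo:

```python
import json

# Hypothetical excerpt mirroring the structure of hirewithia_test_results.json:
# the value of "parsed_data" is itself JSON, wrapped in markdown fences.
record = '{"parsed_data": "```json\\n{\\"name\\": \\"candidate 1\\"}\\n```"}'

def unfence(text: str) -> str:
    """Strip a leading ```json / trailing ``` fence pair if present."""
    text = text.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1]      # drop the opening fence line
        text = text.rsplit("```", 1)[0]    # drop the closing fence
    return text.strip()

outer = json.loads(record)                  # first decode: the results file
parsed = json.loads(unfence(outer["parsed_data"]))  # second decode: the LLM output
```

Persisting the fenced string verbatim keeps the raw model output auditable, at the cost of this extra unfencing step on every read.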
space_repo/requirements.txt
CHANGED
@@ -1,3 +1,41 @@
# HireWithAI - Smart Resume Screening System
# Requirements file for all necessary dependencies

# Core Streamlit framework
streamlit>=1.28.0

# CrewAI and GROQ API
crewai>=0.40.0
groq>=0.4.0

# NLP and text processing
spacy>=3.7.0
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1.tar.gz

# Resume parsing libraries
PyPDF2>=3.0.1
docx2txt>=0.8
python-docx>=0.8.11
pdfminer.six>=20231228

# Data processing and utilities
pandas>=2.0.0
numpy>=1.24.0
python-dateutil>=2.8.2

# Environment and configuration
python-dotenv>=1.0.0

# Additional utilities
pathlib2>=2.3.7
typing-extensions>=4.5.0

# Optional: For enhanced NLP features
nltk>=3.8.1
scikit-learn>=1.3.0

# Optional: For better error handling
requests>=2.31.0
urllib3>=2.0.0
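The pins above are plain PEP 440 specifiers, plus one PEP 508 direct-URL reference: the `en-core-web-sm @ https://…` line makes pip install the spaCy model like any other package, so no separate `spacy download` step is needed at build time. A rough stdlib sketch of how such lines split into name/operator/version (the parser is illustrative, not how pip actually resolves requirements):

```python
import re

# Requirement lines copied from the list above.
reqs = [
    "streamlit>=1.28.0",
    "crewai>=0.40.0",
    "pdfminer.six>=20231228",
    "en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1.tar.gz",
]

pin = re.compile(r"^([A-Za-z0-9_.\-]+)\s*(>=|==|<=)\s*([\w.]+)$")
parsed = {}
for r in reqs:
    if " @ " in r:                       # PEP 508 direct reference: name @ url
        name, url = r.split(" @ ", 1)
        parsed[name.strip()] = ("@", url.strip())
    elif (m := pin.match(r)):            # simple version specifier
        parsed[m.group(1)] = (m.group(2), m.group(3))
```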
space_repo/space_repo/.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
space_repo/space_repo/Dockerfile
ADDED
@@ -0,0 +1,20 @@
FROM python:3.13.5-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt ./
COPY src/ ./src/

RUN pip3 install -r requirements.txt

EXPOSE 8501

HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health

ENTRYPOINT ["streamlit", "run", "src/streamlit_app.py", "--server.port=8501", "--server.address=0.0.0.0"]
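For local testing, the image above can be built and run with standard Docker commands; the `hirewithai` tag is an arbitrary example, and port 8501 matches the `EXPOSE`/`ENTRYPOINT` lines:

```shell
# Build the image from the Space root (where the Dockerfile lives)
docker build -t hirewithai .

# Run it and map Streamlit's port to the host
docker run --rm -p 8501:8501 hirewithai
```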
space_repo/space_repo/README.md
ADDED
@@ -0,0 +1,19 @@
---
title: HireWithAi
emoji: 🚀
colorFrom: red
colorTo: red
sdk: docker
app_port: 8501
tags:
- streamlit
pinned: false
short_description: Streamlit template space
---

# Welcome to Streamlit!

Edit `/src/streamlit_app.py` to customize this app to your heart's desire. :heart:

If you have any questions, checkout our [documentation](https://docs.streamlit.io) and [community
forums](https://discuss.streamlit.io).
space_repo/space_repo/requirements.txt
ADDED
@@ -0,0 +1,3 @@
altair
pandas
streamlit
space_repo/space_repo/src/streamlit_app.py
ADDED
@@ -0,0 +1,40 @@
import altair as alt
import numpy as np
import pandas as pd
import streamlit as st

"""
# Welcome to Streamlit!

Edit `/streamlit_app.py` to customize this app to your heart's desire :heart:.
If you have any questions, checkout our [documentation](https://docs.streamlit.io) and [community
forums](https://discuss.streamlit.io).

In the meantime, below is an example of what you can do with just a few lines of code:
"""

num_points = st.slider("Number of points in spiral", 1, 10000, 1100)
num_turns = st.slider("Number of turns in spiral", 1, 300, 31)

indices = np.linspace(0, 1, num_points)
theta = 2 * np.pi * num_turns * indices
radius = indices

x = radius * np.cos(theta)
y = radius * np.sin(theta)

df = pd.DataFrame({
    "x": x,
    "y": y,
    "idx": indices,
    "rand": np.random.randn(num_points),
})

st.altair_chart(alt.Chart(df, height=700, width=700)
    .mark_point(filled=True)
    .encode(
        x=alt.X("x", axis=None),
        y=alt.Y("y", axis=None),
        color=alt.Color("idx", legend=None, scale=alt.Scale()),
        size=alt.Size("rand", legend=None, scale=alt.Scale(range=[1, 150])),
    ))
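The spiral in the template app is plain polar-to-Cartesian math: the radius grows linearly with the index while the angle sweeps `num_turns` full circles. A numpy-free sketch of the same computation, assuming the default slider values (1100 points, 31 turns):

```python
import math

# Defaults taken from the two sliders in the template app.
num_points, num_turns = 1100, 31

points = []
for i in range(num_points):
    t = i / (num_points - 1)              # same as np.linspace(0, 1, num_points)
    theta = 2 * math.pi * num_turns * t   # angle sweeps num_turns full circles
    points.append((t * math.cos(theta), t * math.sin(theta)))

# The spiral starts at the origin and ends on the unit circle.
```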
space_repo/vercel.json
ADDED
@@ -0,0 +1,23 @@
{
  "version": 2,
  "builds": [
    {
      "src": "app.py",
      "use": "@vercel/python"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "/app.py"
    }
  ],
  "functions": {
    "app.py": {
      "maxDuration": 60
    }
  },
  "env": {
    "PYTHONPATH": "./"
  }
}
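The Vercel config is plain JSON, so its routing intent is easy to sanity-check in a script: the catch-all `"/(.*)"` route should send every path to the single Python entry point. The inline string below mirrors only the routing-relevant fields of the file and is an illustrative check, not part of the repo:

```python
import json

# Routing-relevant subset of vercel.json, embedded for a self-contained check.
cfg = json.loads("""
{
  "version": 2,
  "routes": [{"src": "/(.*)", "dest": "/app.py"}],
  "functions": {"app.py": {"maxDuration": 60}}
}
""")

# Every route should terminate at the single Python entry point.
dests = {r["dest"] for r in cfg["routes"]}
```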