---
title: Resfit
emoji: π
colorFrom: blue
colorTo: yellow
sdk: docker
pinned: false
license: apache-2.0
short_description: AI resume tailor adapts CV to any JD while preserving links.
app_port: 8501
---
# ResFit: Resume Tailor AI
ResFit is a powerful Streamlit application that leverages advanced Large Language Models (LLMs) to intelligently tailor your resume for specific job descriptions. By analyzing your existing resume and the target job requirements, it rewrites content to highlight relevant skills and experiences, generating a professionally formatted PDF using LaTeX.
## Why ResFit?

The main motivation behind this project was to solve a common problem with existing resume tailoring tools: they often strip out or break hyperlinks. ResFit is designed specifically to preserve all the links (portfolio, LinkedIn, GitHub, etc.) that you've carefully added to your original resume.
## Features
- Link Preservation: Unlike many other tools, ResFit ensures all your hyperlinks remain intact in the final PDF.
- Multi-Provider Support: Choose your preferred AI model from Google Gemini, Anthropic Claude, or OpenAI.
- Intelligent Tailoring: Uses structured prompting to rewrite resume sections (Summary, Experience, Skills, Projects) specifically for the target role.
- High Performance: Built with `asyncio` and parallel processing to tailor multiple sections concurrently for fast results.
- Professional Output: Generates high-quality, ATS-friendly PDFs using LaTeX templates.
- Live Feedback: Real-time logging interface shows you exactly what the AI is working on.
- Dual Export: Download both the final PDF and the raw LaTeX (.tex) source code for further manual editing.
- Dockerized: Ready-to-deploy container with all dependencies, including a full LaTeX environment.
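The concurrency pattern behind the High Performance feature can be sketched as follows. This is a hypothetical stand-in for the real pipeline: `rewrite_section` here simulates an LLM call, and a semaphore caps how many requests run at once.

```python
import asyncio

# The four resume sections the app rewrites in parallel.
SECTIONS = ["summary", "experience", "skills", "projects"]

async def rewrite_section(name: str, sem: asyncio.Semaphore) -> str:
    """Stand-in for one LLM call; the semaphore limits concurrent requests."""
    async with sem:
        await asyncio.sleep(0.01)  # placeholder for the LLM round-trip
        return f"{name}: tailored"

async def tailor_all(max_concurrent: int = 2) -> list[str]:
    sem = asyncio.Semaphore(max_concurrent)
    tasks = [rewrite_section(s, sem) for s in SECTIONS]
    return await asyncio.gather(*tasks)  # preserves input order

results = asyncio.run(tailor_all())
print(results)
```

With a semaphore of 2, at most two sections are in flight at any moment, which keeps the app within typical provider rate limits while still overlapping the slow network calls.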
## Tech Stack
- Frontend: Streamlit
- LLM Orchestration: Instructor
- PDF Processing: PyMuPDF4LLM
- Document Generation: LaTeX (via `pdflatex`) & Jinja2 templating
- Concurrency: Python `asyncio` & semaphores
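Jinja2's default `{{ }}` delimiters clash with LaTeX's braces, so LaTeX templating with Jinja2 typically swaps in custom delimiters. A minimal sketch of the pattern (the `\VAR{...}` / `\BLOCK{...}` delimiter choice is an assumption for illustration, not necessarily what ResFit's templates use):

```python
from jinja2 import Environment

# Custom delimiters that don't collide with LaTeX syntax.
env = Environment(
    variable_start_string=r"\VAR{",
    variable_end_string="}",
    block_start_string=r"\BLOCK{",
    block_end_string="}",
)

# A tiny LaTeX fragment with one templated variable.
template = env.from_string(r"\section*{Summary} \VAR{summary}")
rendered = template.render(summary="Experienced ML engineer.")
print(rendered)
```

Because `\section*{Summary}` contains no `\VAR{` or `\BLOCK{` markers, Jinja2 passes it through untouched and only substitutes the templated slot.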
## Architecture

```mermaid
%%{init: {
'theme': 'base',
'themeVariables': {
'primaryColor': '#E1F5FE',
'primaryTextColor': '#01579B',
'lineColor': '#546E7A',
'clusterBkg': '#FAFAFA',
'clusterBorder': '#CFD8DC'
},
'flowchart': {
'curve': 'linear',
'nodeSpacing': 50,
'rankSpacing': 60
}
}}%%
graph TD
    %% === STYLING DEFINITIONS ===
    classDef user fill:#fff9c4,stroke:#fbc02d,stroke-width:2px,rx:10;
    classDef ui fill:#e1f5fe,stroke:#0288d1,stroke-width:2px,rx:5;
    classDef ai fill:#ffe0b2,stroke:#f57c00,stroke-width:2px,rx:10;
    classDef process fill:#ffffff,stroke:#78909c,stroke-width:2px,rx:5;
    classDef data fill:#e1bee7,stroke:#8e24aa,stroke-width:2px,shape:cylinder;
    classDef output fill:#c8e6c9,stroke:#2e7d32,stroke-width:2px,rx:5;

    %% 1. USER INTERFACE LAYER
    subgraph UI_Layer ["Frontend / Interface"]
        User([User]):::user
        Streamlit[/"Streamlit UI"/]:::ui
        LLM["LLM Provider<br/>(OpenAI / Gemini / Claude)"]:::ui
        User -->|Uploads Files| Streamlit
        Streamlit -.->|Configures| LLM
    end

    %% 2. THE PIPELINE (BACKEND)
    subgraph Backend ["ResumeTailor Pipeline"]
        subgraph P1 ["Phase 1: Input Processing"]
            Parser["Resume Parser<br/>(PyMuPDF4LLM)"]:::process
            Scraper["Job Scraper<br/>(Web Engine)"]:::process
        end
        subgraph P2 ["Phase 2: AI Orchestration"]
            Extractor{{"Data Extractor"}}:::ai
            Planner["Section Planner"]:::process
            Parser --> Extractor
            Scraper --> Extractor
            Extractor --> Planner
        end
        subgraph P3 ["Phase 3: Parallel Writing"]
            Workers{{"Async Workers"}}:::ai
            S1["Summary"]:::process
            S2["Experience"]:::process
            S3["Skills"]:::process
            S4["Projects"]:::process
            Planner --> Workers
            Workers --> S1
            Workers --> S2
            Workers --> S3
            Workers --> S4
        end
        subgraph P4 ["Phase 4: Generation"]
            Merger["Jinja2 Merger"]:::process
            Compiler["PDF Compiler<br/>(LaTeX)"]:::process
            S1 --> Merger
            S2 --> Merger
            S3 --> Merger
            S4 --> Merger
            Merger --> Compiler
        end
    end

    %% 3. OUTPUT
    Result([Final PDF]):::output

    %% === CROSS CONNECTIONS ===
    Streamlit --> Parser
    Streamlit --> Scraper
    Compiler --> Result
```
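Phase 2's Data Extractor turns free-form resume and job text into structured objects that the rest of the pipeline can consume. A hypothetical sketch of what such a Pydantic model might look like (the real models live in `resumer/structures.py`; the class and field names here are illustrative, not the project's actual schema):

```python
from pydantic import BaseModel

class JobPosting(BaseModel):
    """Illustrative shape of extracted job-posting data."""
    title: str
    company: str
    required_skills: list[str]

# In the real app this dict would come from an LLM call validated by
# Instructor; here it is hard-coded for demonstration.
raw = {
    "title": "ML Engineer",
    "company": "Acme",
    "required_skills": ["Python", "PyTorch"],
}
job = JobPosting(**raw)
print(job.title)
```

Validating LLM output against a schema like this is what lets the Section Planner rely on well-typed fields instead of re-parsing free text.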
## Prerequisites
- API Keys: You will need an API key from at least one of the supported providers:
  - Google AI Studio (Gemini)
  - Anthropic Console (Claude)
  - OpenAI Platform (GPT)
## Quick Start with Docker (Recommended)
The easiest way to run the application is using Docker, as it handles the complex LaTeX dependencies automatically.
1. Clone the repository

   ```bash
   git clone https://github.com/AwaleSajil/resfit
   cd resfit
   ```

2. Build and run

   ```bash
   docker-compose up --build
   ```

3. Access the app

   Open your browser and navigate to `http://localhost:8501`.
## Local Installation
If you prefer to run it locally, you'll need Python 3.12+ and a LaTeX distribution installed on your system.
1. Install system dependencies (LaTeX)

   - macOS:

     ```bash
     brew install --cask mactex-no-gui
     ```

   - Ubuntu/Debian:

     ```bash
     sudo apt-get update
     sudo apt-get install -y texlive-latex-base texlive-fonts-recommended texlive-fonts-extra texlive-latex-extra
     ```

2. Set up the Python environment

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. Run the application

   ```bash
   streamlit run app.py
   ```
## Usage Guide
- Select Provider: Choose your AI provider (Gemini, Claude, or OpenAI) from the sidebar and select a specific model (e.g., `gemini-2.5-pro`, `claude-3-5-sonnet`).
- Enter Credentials: Paste your API key.
- Upload Resume: Upload your current resume in PDF format.
- Job Details:
- Paste a URL to a job posting (the app will scrape it).
- OR paste the raw job description text directly.
- Generate: Click "Tailor Resume".
- Download: Once complete, download your new tailored PDF or the LaTeX source file.
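When you paste a URL, the app scrapes the posting and reduces it to plain text before extraction. The README doesn't specify the scraper's implementation, so here is a hypothetical stdlib-only sketch of the tag-stripping step, using a hard-coded HTML string in place of a fetched page:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.chunks: list[str] = []
        self._skip = 0  # depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# In the real app this HTML would come from fetching the job-posting URL.
html = "<html><body><h1>ML Engineer</h1><script>x=1</script><p>Remote role</p></body></html>"
parser = TextExtractor()
parser.feed(html)
job_text = " ".join(parser.chunks)
print(job_text)
```

The extracted text then feeds the same extraction path as a pasted job description.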
## Project Structure
```
resumer/
├── app.py                 # Main Streamlit application entry point
├── Dockerfile             # Docker configuration
├── docker-compose.yml     # Docker Compose services
├── requirements.txt       # Python dependencies
├── resumer/               # Core package
│   ├── __init__.py        # Main pipeline logic (ResumeTailorPipeline)
│   ├── structures.py      # Pydantic models for structured data
│   ├── prompts/           # LLM system prompts
│   ├── schemas/           # JSON schemas for extraction
│   ├── templates/         # Jinja2 LaTeX templates
│   └── utils/             # Helper functions (PDF parsing, LaTeX ops)
└── notebooks/             # Jupyter notebooks for testing components
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT License
## Acknowledgements
This project is inspired by ResumeFlow by Ztrimus.