---
title: Axion
emoji: πŸ€–
colorFrom: blue
colorTo: purple
sdk: docker
app_file: app.py
pinned: false
---

# Axion - AI-Powered Hiring Platform

Axion is a comprehensive hiring automation system that streamlines the entire recruitment process from resume parsing to interview scheduling. Built with Flask and powered by Google's Gemini AI models, it helps organizations efficiently match candidates to job requirements and conduct automated evaluations.

## What it does

The platform handles the complete hiring workflow:

- **Resume Processing**: Automatically parses PDF resumes and converts them into structured data
- **Smart Matching**: Uses semantic search to find the best candidates for job descriptions
- **Interview Management**: Generates technical and behavioral interview questions
- **Automated Evaluation**: Scores candidate responses with detailed feedback
- **Scheduling System**: Manages interview slots and sends automated email invitations

## Getting Started

### Prerequisites

- Python 3.8+
- PostgreSQL database with the pgvector extension (for vector embeddings)
- MySQL database (for user management)
- Google AI API key

### Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd Axion
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up your environment variables in `.env`:

   ```env
   GOOGLE_API_KEY=your_google_api_key
   API_KEY=your_embedding_api_key
   Connection_String=your_postgresql_connection_string
   Collection_Name=Resume
   Host=your_mysql_host
   Port=your_mysql_port
   User=your_mysql_user
   Password=your_mysql_password
   DB_Name=your_database_name
   app_password=your_email_app_password
   ```

4. Create the resume directory:

   ```bash
   mkdir resume
   ```

5. Run the application:

   ```bash
   python app.py
   ```
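Once `.env` is populated, the application needs every one of those keys at startup. A minimal sketch of loading and validating them with the standard library (the `load_settings` helper is illustrative, not part of Axion's codebase):

```python
import os

# Hypothetical startup check: the key names mirror the .env entries in
# step 3 above. Failing fast on missing settings beats a cryptic
# database or API error later.
REQUIRED_KEYS = [
    "GOOGLE_API_KEY", "API_KEY", "Connection_String", "Collection_Name",
    "Host", "Port", "User", "Password", "DB_Name", "app_password",
]

def load_settings(env=os.environ):
    """Collect required settings, raising if any are missing or empty."""
    missing = [k for k in REQUIRED_KEYS if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing .env settings: {', '.join(missing)}")
    return {k: env[k] for k in REQUIRED_KEYS}
```

If the project uses `python-dotenv`, calling `load_dotenv()` before this check would pull the `.env` file into `os.environ` first.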

## API Endpoints

### Resume Processing

- `POST /parse` - Processes all PDF resumes in the `/resume` folder and creates embeddings

### Candidate Matching

- `POST /match` - Finds the best-matching candidates for a job description:

  ```json
  {
    "job_description": "Software Engineer position requiring Python...",
    "candidates": 5
  }
  ```

### Interview Management

- `POST /interview1` - Returns interview questions (0 for behavioral, 1 for technical)
- `POST /interview` - Updates question sets
- `POST /evaluate` - Scores candidate responses
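The exact request body for `/interview1` is not documented here; the helper below only encodes the 0/1 convention named above and is purely illustrative (the `{"type": ...}` field name is an assumption, not Axion's actual schema):

```python
# Hypothetical mapping of the documented flag values to question styles.
QUESTION_TYPES = {0: "behavioral", 1: "technical"}

def interview_request_body(question_type):
    """Validate the 0/1 flag and wrap it in an assumed JSON body."""
    if question_type not in QUESTION_TYPES:
        raise ValueError("question type must be 0 (behavioral) or 1 (technical)")
    return {"type": question_type}
```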

### Scheduling

- `POST /schedule` - Schedules interview slots and sends emails:

  ```json
  {
    "date": "2024-03-15",
    "time": "09:00",
    "length": 30
  }
  ```

### Authentication

- `POST /login-user` - Candidate login
- `POST /login-org` - Organization login

## How it works

1. **Resume Parsing**: Upload PDF resumes to the `/resume` folder. The system extracts the text and structures it into JSON with fields such as experience, skills, and education.

2. **Embedding Creation**: Structured resumes are converted to vector embeddings using Google's text-embedding model and stored in PostgreSQL with pgvector.

3. **Job Matching**: When you provide a job description, it is structured and embedded the same way. The system then finds the most similar candidate profiles using semantic search.

4. **Interview Process**: The platform generates questions relevant to the role type and evaluates responses using AI scoring with detailed feedback.

5. **Scheduling**: Automatically creates interview time slots, manages candidate data in MySQL, and sends professional email invitations.
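The matching step boils down to ranking candidate vectors by similarity to the job vector. A toy illustration using cosine similarity (in Axion the real vectors come from Google's text-embedding model and the search runs inside pgvector; the 3-dimensional vectors and names here are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_matches(job_vec, candidates, k=2):
    """Return the ids of the k candidates most similar to the job vector."""
    ranked = sorted(candidates, key=lambda c: cosine(job_vec, c[1]), reverse=True)
    return [cid for cid, _ in ranked[:k]]
```

pgvector performs the same ranking server-side (e.g. via its cosine-distance operator), so the full candidate set never has to be pulled into Python.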

## Project Structure

```
Axion/
├── app.py              # Main Flask application
├── Models.py           # Google Gemini AI integration
├── Parser.py           # PDF resume parsing
├── Embedder.py         # Vector embedding management
├── Evaluator.py        # Interview question generation and scoring
├── Scheduler.py        # Email scheduling system
├── DB.py               # Database connections (PostgreSQL + MySQL)
├── requirements.txt    # Python dependencies
├── Dockerfile          # Container configuration
└── resume/             # Directory for PDF resumes
```

## Database Setup

The system uses two databases:

- **PostgreSQL**: Stores vector embeddings for semantic search
- **MySQL**: Manages user accounts, scores, and scheduling data

Make sure both databases are properly configured and accessible with the credentials in your `.env` file.

## Contributing

Feel free to submit issues and pull requests. The codebase is designed to be modular, so you can easily extend functionality or integrate with other systems.

## License

This project is licensed under the MIT License.