19arjun89 posted an update 9 days ago
🧠 Introducing AI Recruiting Agent — A Responsible Hiring Assistant in Hugging Face Spaces

Hi everyone! 👋

I’m excited to share a new Hugging Face Space I’ve built: AI Recruiting Agent — a Gradio-powered assistant designed to help automate candidate evaluation and cold email drafting while embedding responsible safeguards into the workflow.

👉 Check it out here:
19arjun89/AI_Recruiting_Agent

🚀 What It Does

This Space helps recruiters and talent teams do two main things:
1. Assess Candidate Fit
• Upload a batch of resumes and company culture documents
• Paste in a job description
• The assistant evaluates each candidate on skills match and culture fit, then produces an actionable hiring recommendation
2. Generate Candidate-Specific Emails
• Upload a single resume + job description
• It produces a professional cold outreach email tailored to that candidate

👥 Who This Is For

This Space is designed for:
• Recruiters and talent teams experimenting with AI-assisted screening
• ML engineers building responsible RAG systems
• Researchers exploring bias mitigation and LLM verification
• Product teams prototyping decision-support tools

🛡️ Built-In Bias Mitigation & Verification

Because AI recruitment comes with real risks, I’ve built several safeguards into the design:

🔹 Input Anonymization
Resumes are stripped of email addresses, phone numbers, URLs, physical addresses, and other personal identifiers before analysis.
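To make the idea concrete, here is a minimal sketch of regex-based PII scrubbing. The patterns and function name are illustrative, not the Space's actual implementation:

```python
import re

# Illustrative patterns only -- a production scrubber would cover more
# identifier formats (addresses, names, social handles, etc.).
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"https?://\S+|www\.\S+"), "[URL]"),       # URLs
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone numbers
]

def anonymize_resume(text: str) -> str:
    """Replace common personal identifiers with neutral placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Replacing identifiers with labeled placeholders (rather than deleting them) keeps the document structure readable for the downstream chains.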

🔹 Fact Verification
Every analysis (skills & culture) is verified against source inputs using a custom fact-checking prompt. Claims that aren’t supported are flagged and can trigger a self-correction routine.

🔹 Bias Audit Chain
A dedicated bias audit prompt reviews each evaluation for:
• Over-reliance on pedigree or subjective language
• Penalization of nontraditional career paths
• Unsupported reasoning outside the job description or culture docs
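Since the actual audit prompt isn't published, here is a minimal sketch of a template that mirrors the three checks listed above (all names are illustrative):

```python
# Hypothetical template -- mirrors the three audit checks, not the
# Space's actual prompt.
BIAS_AUDIT_TEMPLATE = """You are auditing a hiring evaluation for bias.

Skills analysis:
{skills}

Culture analysis:
{culture}

Final recommendation:
{recommendation}

Flag any of the following:
1. Over-reliance on pedigree (school or company names) or subjective language.
2. Penalization of nontraditional career paths (gaps, switches, self-teaching).
3. Reasoning unsupported by the job description or culture documents.

Return each flagged passage with the rule it violates."""

def build_bias_audit_prompt(skills: str, culture: str, recommendation: str) -> str:
    """Assemble the audit prompt from the three upstream outputs."""
    return BIAS_AUDIT_TEMPLATE.format(
        skills=skills, culture=culture, recommendation=recommendation
    )
```

Feeding all three outputs into one audit call lets the auditor catch inconsistencies between the analyses and the final recommendation, not just problems within each one.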

These checks aren’t meant to replace human judgment, but to make AI support more transparent and fair.

🙏 Feedback Welcome

I’d love input on:
• Fairness pitfalls or bias signals I may have missed
• Edge cases in resume anonymization
• Verification or audit patterns you’ve seen work well
• Features that would make this more useful in real hiring workflows

Happy to answer technical questions or iterate based on feedback.

Some technical details for those interested:

• Resume and culture docs are chunked into Chroma vector stores
• Candidate scoring runs in two independent chains (skills + culture)
• Each analysis is fact-checked against source inputs
• Unverified claims can trigger a self-correction routine
• A separate bias audit prompt triangulates across skills, culture, and the final recommendation
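For the first step, a chunker along these lines would do the job; the parameters are illustrative, and in the real pipeline the chunks go into a Chroma vector store rather than a Python list:

```python
# Hedged sketch of the chunking step; chunk_size and overlap are
# illustrative defaults, not the Space's actual settings.
def chunk_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for retrieval."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, which matters for short resume bullet points.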

Everything is designed as decision support — not an automated hiring gate.
