# Leave Policy Assistant
An AI-powered HR Leave Policy Assistant that answers leave policy questions and checks leave eligibility. Built with Google ADK (Agent Development Kit), LiteLLM (Groq), FastAPI, and Snowflake.
## Features

- **Leave policy Q&A** – answers questions about PTO, Sick Leave, Casual Leave, allowances, carryover, notice periods, and eligibility.
- **Leave eligibility checks** – uses employee and policy data (Snowflake or in-memory) to determine whether a leave request is valid.
- **Conversational agent** – maintains session context and asks for missing info (e.g. employee ID, leave type).
- **REST API** – `/chat` and `/health` endpoints for integration with frontends or other services.
- **ADK callbacks** – lifecycle hooks (before/after agent, model, and tool calls) for logging and optional auditing.
- **Optional audit** – persists callback events to Snowflake via `audit_db.py` when `ENABLE_AUDIT_SINK=1`.
## Project Structure

```text
policy/
├── main.py            # FastAPI app: /chat, /health; optional audit sink at startup
├── agent.py           # ADK agent (LiteLLM/Groq), tools + callbacks
├── callback.py        # ADK callbacks (before/after agent, model, tool); audit sink wiring
├── prompt.py          # Agent name, description, system instruction
├── sf_tools.py        # Snowflake tools: get_employee, get_leave_policy, check_leave_eligibility (default)
├── policy_tools.py    # In-memory tools using data.py (optional; switch in agent.py)
├── data.py            # Static LEAVE_POLICIES and EMPLOYEE_DATA (used by policy_tools)
├── audit_db.py        # Optional Snowflake audit sink; table agent_audit_events
├── .env               # Secrets (not committed)
├── requirements.txt
└── README.md
```
## Prerequisites

- Python 3.10+
- **Groq API key** – for the LLM (e.g. `llama-3.1-8b-instant`)
- **Snowflake** – account with:
  - Tables: `employees`, `leave_balances`, `leave_policies`
  - Optional: `agent_audit_events` if using the audit sink
## Setup

1. Clone / open the project and go to the `policy` folder:

   ```bash
   cd policy
   ```

2. Create a virtual environment (recommended):

   ```bash
   python -m venv venv
   venv\Scripts\activate       # Windows
   # source venv/bin/activate  # Linux/macOS
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Configure the environment – create a `.env` file in `policy/`:

   ```env
   MODEL=llama-3.1-8b-instant
   GROQ_API_KEY=your_groq_api_key
   SNOWFLAKE_USER=your_user
   SNOWFLAKE_PASSWORD=your_password
   SNOWFLAKE_ACCOUNT=your_account
   SNOWFLAKE_ROLE=your_role
   SNOWFLAKE_WAREHOUSE=your_warehouse
   SNOWFLAKE_DATABASE=your_database
   SNOWFLAKE_SCHEMA=your_schema
   # Optional: persist ADK callback events to Snowflake (table agent_audit_events)
   ENABLE_AUDIT_SINK=1
   ```

   Replace the placeholders with your Groq and Snowflake credentials. Do not commit `.env`.
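A quick pre-flight check can catch missing credentials before the server starts. The snippet below is not part of the project, just a small standalone sketch; the `REQUIRED` list is an assumption based on the `.env` template above.

```python
import os

# Settings the app cannot run without (an assumption based on the .env
# template; extend with the SNOWFLAKE_* variables your setup actually uses).
REQUIRED = ["GROQ_API_KEY", "SNOWFLAKE_USER", "SNOWFLAKE_PASSWORD", "SNOWFLAKE_ACCOUNT"]

def missing_env(env=os.environ) -> list:
    """Return the required settings that are absent or blank."""
    return [k for k in REQUIRED if not env.get(k)]

if __name__ == "__main__":
    missing = missing_env()
    if missing:
        print("Missing settings:", ", ".join(missing))
```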
## Running the App

Start the FastAPI server:

```bash
uvicorn main:app --reload --host 0.0.0.0 --port 8000
```
- API docs: http://localhost:8000/docs
- Health: http://localhost:8000/health
## API Usage

### Chat

`POST /chat`

Request body:

```json
{
  "user_id": "user_123",
  "message": "What is my PTO balance? My employee ID is E001."
}
```

Response:

```json
{
  "response": "Your PTO balance is 15 days..."
}
```
Sessions are keyed by `user_id` (one session per user). The agent uses its tools to fetch employee data and leave policies and to check eligibility as needed.
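The request/response shapes above can be exercised from Python using only the standard library. This is a minimal client sketch, assuming the server from "Running the App" is listening on `localhost:8000`; `build_chat_request` and `ask` are hypothetical helper names, not part of the project.

```python
import json
import urllib.request

def build_chat_request(user_id: str, message: str) -> dict:
    """Build the JSON body expected by POST /chat."""
    return {"user_id": user_id, "message": message}

def ask(message: str, user_id: str = "user_123",
        base_url: str = "http://localhost:8000") -> str:
    """POST a chat message and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(user_id, message)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Example: `ask("What is my PTO balance? My employee ID is E001.")` returns the `response` string from the JSON body.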
### Health

`GET /health`

Returns `{"status": "ok"}`.
## Agent Tools

The agent is wired to Snowflake by default (`sf_tools.py`). You can switch to in-memory data by using `policy_tools` and `data.py` in `agent.py`.
| Tool | Description |
|---|---|
| `get_employee` | Fetches employee details and leave balances (Snowflake or `data.py`). |
| `get_leave_policy` | Fetches the leave policy for a country and leave type. |
| `check_leave_eligibility` | Checks whether an employee can take a given number of days (balance, policy rules). |
The agent is instructed to always use these tools for employee/policy data and not to guess or fabricate information.
- `sf_tools.py` – requires Snowflake and `.env` credentials; reads from `employees`, `leave_balances`, and `leave_policies`.
- `policy_tools.py` – uses the in-memory `LEAVE_POLICIES` and `EMPLOYEE_DATA` from `data.py`; no Snowflake is needed for the tools (Snowflake is still needed for the optional audit sink).
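To make the eligibility logic concrete, here is a hypothetical sketch in the spirit of the in-memory `check_leave_eligibility` tool. The data shapes below are assumptions for illustration, not the actual contents of `data.py` or the tool's real implementation.

```python
# Assumed data shapes, for illustration only.
LEAVE_POLICIES = {("US", "PTO"): {"max_days_per_request": 10}}
EMPLOYEE_DATA = {"E001": {"country": "US", "balances": {"PTO": 15}}}

def check_leave_eligibility(employee_id: str, leave_type: str, days: int) -> dict:
    """Decide whether a leave request is valid against balance and policy rules."""
    emp = EMPLOYEE_DATA.get(employee_id)
    if emp is None:
        return {"eligible": False, "reason": f"Unknown employee {employee_id}"}
    policy = LEAVE_POLICIES.get((emp["country"], leave_type))
    if policy is None:
        return {"eligible": False, "reason": f"No {leave_type} policy for {emp['country']}"}
    balance = emp["balances"].get(leave_type, 0)
    if days > balance:
        return {"eligible": False, "reason": f"Insufficient balance ({balance} days left)"}
    if days > policy["max_days_per_request"]:
        return {"eligible": False, "reason": "Exceeds the per-request limit"}
    return {"eligible": True, "reason": f"{balance - days} days would remain"}
```

Returning a structured `{"eligible": ..., "reason": ...}` dict (rather than a bare boolean) gives the agent text it can relay to the user directly.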
## ADK Callbacks

The agent uses ADK callbacks (see `callback.py`) to observe and optionally persist events:
- `before_agent` / `after_agent` – before and after the agent runs for a request
- `before_model` / `after_model` – before and after each LLM call
- `before_tool` / `after_tool` – before and after each tool execution
- `on_model_error` / `on_tool_error` – when the model or a tool fails
Callbacks log at the `DEBUG` level and, if an audit sink is set, call `sink.store(event)` for each event. This allows logging, monitoring, and optional persistence without changing the agent logic.
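The sink pattern can be sketched as follows. This is illustrative only: `record_event`, `set_audit_sink`, and `InMemorySink` are hypothetical names showing the wiring style, not the actual ADK callback signatures or the contents of `callback.py`.

```python
import logging

logger = logging.getLogger("leave_policy.callbacks")

_audit_sink = None  # set once at startup, mirroring the wiring described above

def set_audit_sink(sink) -> None:
    """Install an object with a store(event) method; None disables auditing."""
    global _audit_sink
    _audit_sink = sink

def record_event(event_type: str, payload: dict) -> None:
    """Shared helper a before_*/after_* callback could call:
    log at DEBUG and, if a sink is configured, persist the event."""
    logger.debug("%s: %s", event_type, payload)
    if _audit_sink is not None:
        _audit_sink.store({"type": event_type, **payload})

class InMemorySink:
    """Trivial sink for tests; a Snowflake-backed sink has the same interface."""
    def __init__(self):
        self.events = []
    def store(self, event: dict) -> None:
        self.events.append(event)
```

Because the sink is just "anything with `store(event)`", swapping the in-memory sink for a Snowflake-backed one requires no changes to the callbacks themselves.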
## Optional: Audit to Snowflake

To persist callback events to Snowflake:

1. Add `ENABLE_AUDIT_SINK=1` to your `.env`.
2. On startup, `main.py` calls `callback.set_audit_sink(SnowflakeAuditSink())` and `ensure_table()`, creating the `agent_audit_events` table if needed.

Events (`before_agent`, `after_tool`, etc.) are then stored in Snowflake.
## Troubleshooting

- **503** – the agent did not return a response; check that the LLM (Groq) and, if used, Snowflake are reachable and that the credentials in `.env` are correct.
- **500** – unexpected server error; check the uvicorn terminal logs for the full traceback (`logger.exception`).
- **400** – invalid request (e.g. missing session or invalid runner params); the `detail` field in the response contains the message.
## License
Use as needed for your organization.