---
title: Northwestern CS Kiosk API
emoji: 🏛️
colorFrom: blue
colorTo: indigo
sdk: docker
sdk_version: latest
app_file: Dockerfile
pinned: false
---
# Northwestern CS Kiosk API

REST API backend for the Northwestern CS Department Kiosk. This is a stripped-down version optimized for integration with external systems (e.g., speech-to-text/text-to-speech).
## Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Configure Environment

```bash
cp .env.example .env
# Edit .env and add your API key
```
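A minimal `.env` might look like the following sketch (the key value is a placeholder; the full variable list is documented under Environment Variables below):

```
# Required: at least one LLM API key
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here

# Optional overrides (defaults shown)
KIOSK_LLM_PROVIDER=anthropic
KIOSK_HOST=0.0.0.0
KIOSK_PORT=8000
```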
### 3. Run the Server

```bash
python -m backend.main
```

The API will be available at `http://0.0.0.0:8000`.
## Deploy to Hugging Face Spaces

Deploy this API as a public endpoint so your manager (or STT/TTS systems) can send requests from anywhere.

### 1. Create a new Space

- Go to [huggingface.co/spaces](https://huggingface.co/spaces)
- Click **Create new Space**
- Choose the **Docker** SDK, **Blank** template
- Name it (e.g. `monish563/NU-Kiosk-API`)
- Create, then push this `kiosk-api` folder to the Space repo

### 2. Add secrets (Settings → Variables and secrets → Secrets, private)
| Secret | Required | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | Yes* | Anthropic API key (starts with `sk-ant-api03-...`) |
| `KIOSK_LLM_PROVIDER` | No | Default: `anthropic` |
| `KIOSK_LLM_MODEL` | No | Default: `claude-haiku-4-5` |
| `KIOSK_LLM_SYSTEM_PROMPT` | No | Custom system prompt for the receptionist |
| `KIOSK_LLM_STYLE` | No | Style guidelines for TTS-friendly responses |
| `OPENAI_API_KEY` | No | If using `provider: "openai"` |
| `GEMINI_API_KEY` | No | If using `provider: "gemini"` |
| `KIOSK_HF_DATASET_REPO` | No | HF dataset for persistence (e.g. `monish563/kiosk-api-metrics`) |
| `KIOSK_HF_TOKEN` | No* | HF token with write access (required if dataset repo is set) |

*At least one LLM API key is required. `KIOSK_HF_TOKEN` is required if `KIOSK_HF_DATASET_REPO` is set.
### 3. Endpoint URL for your manager

Once the Space is built and running, the base URL will be:

```
https://<your-username>-<space-name>.hf.space
```

Main endpoint (for the STT → TTS flow):

```http
POST https://<your-username>-<space-name>.hf.space/api/query
Content-Type: application/json

{"question": "Where is Professor Hammond's office?"}
```

Response: `{"answer": "...", ...}` → send `answer` to your TTS system.
## API Reference

### Health Check

```
GET /
```

Response:

```json
{
  "status": "ok",
  "service": "Northwestern CS Kiosk API"
}
```
### Query (Main Endpoint)

```
POST /api/query
```

This is the primary endpoint for speech integration.

Request body:

```json
{
  "question": "Where is Professor Hammond's office?",
  "session_id": "optional-session-id",
  "provider": "anthropic"
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| `question` | string | Yes | The user's question (from speech-to-text) |
| `session_id` | string | No | Session ID for conversation continuity (default: `"default"`) |
| `provider` | string | No | LLM provider: `anthropic`, `openai`, `gemini` |
Response:

```json
{
  "session_id": "default",
  "session_title": "Chat – Jan 23, 10:30 AM",
  "question": "Where is Professor Hammond's office?",
  "answer": "Professor Kristian Hammond's office is located in Mudd 3225.",
  "blueprint": "location",
  "facts": [...],
  "notes": [],
  "usage": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5",
    "tokens": 512
  },
  "action": {
    "type": "lookup_location",
    "arguments": { "name": "Kristian Hammond" }
  }
}
```
Key Fields:

- `answer` – The response text (send to text-to-speech)
- `question` – Echo of the input question
- `blueprint` – Which tool was used internally
- `facts` – Structured data retrieved
- `usage` – Token/model metadata
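For a TTS client, the main job is to pull `answer` out of this payload safely. A minimal sketch (the sample `response` dict mirrors the example above; the fallback text is an assumption, not API behavior):

```python
def extract_answer(payload: dict) -> str:
    """Return the text to hand to text-to-speech, with a fallback."""
    answer = str(payload.get("answer", "")).strip()
    return answer or "Sorry, I could not find an answer."

# Sample payload shaped like the response documented above
response = {
    "session_id": "default",
    "answer": "Professor Kristian Hammond's office is located in Mudd 3225.",
    "blueprint": "location",
}
text = extract_answer(response)  # -> "Professor Kristian Hammond's office is located in Mudd 3225."
```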
### List Providers

```
GET /api/providers
```

Returns available LLM providers and their configuration status.

Response:

```json
{
  "providers": {
    "claude": {
      "name": "Claude",
      "configured": true,
      "default_model": "claude-haiku-4-5"
    },
    "gpt": {
      "name": "GPT",
      "configured": false,
      "note": "Set OPENAI_API_KEY before using this provider."
    }
  },
  "default_provider": "claude"
}
```
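A client can use this endpoint to avoid sending an unconfigured `provider` value with its queries. A sketch of that selection logic (the sample `info` dict mirrors the response above):

```python
def pick_provider(info: dict) -> str:
    """Prefer the server's default provider if configured, else any configured one."""
    providers = info.get("providers", {})
    default = info.get("default_provider")
    if default and providers.get(default, {}).get("configured"):
        return default
    for name, meta in providers.items():
        if meta.get("configured"):
            return name
    raise RuntimeError("No LLM provider is configured on the server")

# Shaped like the /api/providers response documented above
info = {
    "providers": {
        "claude": {"name": "Claude", "configured": True},
        "gpt": {"name": "GPT", "configured": False},
    },
    "default_provider": "claude",
}
```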
### Get History

```
GET /api/history?session_id=default
```

Returns conversation history for a session.

Response:

```json
{
  "session_id": "default",
  "title": "Chat – Jan 23, 10:30 AM",
  "history": [
    {
      "timestamp": 1706012345.123,
      "question": "Who is Kristian Hammond?",
      "answer": "Professor Kristian Hammond is...",
      "blueprint": "person_lookup"
    }
  ]
}
```
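The `timestamp` field is a Unix epoch float, so a display layer has to format it itself. One way to render a history entry (a sketch; the `%b %d` format and UTC choice are mine, not the API's):

```python
from datetime import datetime, timezone

def format_entry(entry: dict) -> str:
    """Render one history entry with a human-readable UTC timestamp."""
    ts = datetime.fromtimestamp(entry["timestamp"], tz=timezone.utc)
    return f"[{ts.strftime('%b %d, %I:%M %p')}] Q: {entry['question']} A: {entry['answer']}"

line = format_entry({
    "timestamp": 1706012345.123,
    "question": "Who is Kristian Hammond?",
    "answer": "Professor Kristian Hammond is...",
})
```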
### List Sessions

```
GET /api/sessions
```

Returns all conversation sessions.

Response:

```json
{
  "sessions": [
    {
      "session_id": "default",
      "title": "Chat – Jan 23, 10:30 AM",
      "created_at": 1706012345.123,
      "updated_at": 1706012400.456
    }
  ]
}
```
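If a client only needs to resume the most recent conversation, `updated_at` (epoch seconds) is the field to sort on. A sketch:

```python
def latest_session(payload: dict):
    """Return the most recently updated session, or None if there are none."""
    sessions = payload.get("sessions", [])
    return max(sessions, key=lambda s: s["updated_at"], default=None)

# Shaped like the /api/sessions response documented above
payload = {
    "sessions": [
        {"session_id": "default", "updated_at": 1706012400.456},
        {"session_id": "lobby", "updated_at": 1706099000.0},
    ]
}
```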
## Integration Example

### cURL

```bash
curl -X POST "http://localhost:8000/api/query" \
  -H "Content-Type: application/json" \
  -d '{"question": "Where is Professor Hammond?"}'
```

### Python

```python
import requests

response = requests.post(
    "http://localhost:8000/api/query",
    json={"question": "Where is Professor Hammond?"}
)
data = response.json()
answer = data["answer"]  # Send this to text-to-speech
```

### JavaScript

```javascript
const response = await fetch("http://localhost:8000/api/query", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ question: "Where is Professor Hammond?" })
});
const data = await response.json();
const answer = data.answer; // Send this to text-to-speech
```
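Free Spaces sleep when idle, so the first request after a quiet period can be slow or time out. A retry wrapper with exponential backoff may help; this is a stdlib-only sketch (`ask_kiosk` and its defaults are assumptions, not part of the API):

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delays(retries: int) -> list:
    """Exponential backoff schedule in seconds: 1, 2, 4, ..."""
    return [2 ** i for i in range(retries)]

def ask_kiosk(question: str, base_url: str = "http://localhost:8000",
              retries: int = 3) -> str:
    """POST to /api/query, retrying transient failures; returns the answer text."""
    body = json.dumps({"question": question}).encode("utf-8")
    for delay in backoff_delays(retries):
        req = urllib.request.Request(
            f"{base_url}/api/query", data=body,
            headers={"Content-Type": "application/json"}, method="POST",
        )
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return json.loads(resp.read())["answer"]
        except (urllib.error.URLError, TimeoutError):
            time.sleep(delay)  # wait for the Space to wake, then retry
    raise RuntimeError(f"Kiosk API unreachable after {retries} attempts")
```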
## Speech Integration Flow

```
┌─────────────┐      ┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  Microphone │ ───▶ │   STT API   │ ───▶ │  Kiosk API  │ ───▶ │   TTS API   │
│             │      │ (Speech to  │      │ /api/query  │      │  (Text to   │
│             │      │    Text)    │      │             │      │   Speech)   │
└─────────────┘      └─────────────┘      └─────────────┘      └─────────────┘
                            │                    │                    │
                            ▼                    ▼                    ▼
                       "Where is           {"answer":            [Audio]
                        Prof X?"            "Prof X is
                                            in Mudd..."}
```
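The flow above reduces to three function calls per turn. A sketch where `transcribe`, `query`, and `synthesize` are all stand-ins for whatever STT, HTTP, and TTS clients you actually use (none of them are provided by this repo):

```python
def run_turn(audio_bytes, transcribe, query, synthesize):
    """One kiosk interaction: audio in -> STT -> kiosk -> TTS -> audio out."""
    question = transcribe(audio_bytes)  # STT: audio -> text
    answer = query(question)            # Kiosk API: POST /api/query, read "answer"
    return synthesize(answer)           # TTS: text -> audio

# Stub wiring for illustration only:
audio_out = run_turn(
    b"...",
    transcribe=lambda _: "Where is Professor Hammond?",
    query=lambda q: f"(answer to: {q})",
    synthesize=lambda text: text.encode("utf-8"),
)
```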
## Available Query Types
| Query Type | Example Questions |
|---|---|
| Person lookup | "Who is Kristian Hammond?", "Tell me about Katie Winters" |
| Location | "Where is Professor X's office?", "Where does student Y sit?" |
| Research topics | "Who researches AI?", "Faculty working on machine learning?" |
| Advisors | "Who advises student X?", "Who does Prof Y advise?" |
| Centers | "Who leads the Center for Deep Learning?" |
| Staff support | "Who handles reimbursements?", "Academic advising contact?" |
| Office hours | "When are CS 211 office hours?" |
| Events | "Any upcoming AI events?" |
## Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | Yes* | - | Anthropic API key |
| `OPENAI_API_KEY` | Yes* | - | OpenAI API key |
| `GEMINI_API_KEY` | Yes* | - | Google Gemini API key |
| `KIOSK_LLM_PROVIDER` | No | `anthropic` | Default LLM provider |
| `KIOSK_HOST` | No | `0.0.0.0` | Server host |
| `KIOSK_PORT` | No | `8000` | Server port |
| `KIOSK_LLM_TIMEOUT` | No | `60` | LLM timeout (seconds) |

*At least one API key is required.
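Reading these variables with their documented defaults is straightforward; a sketch of what a config loader might look like (illustrative only, not the server's actual code):

```python
import os

def load_config() -> dict:
    """Read kiosk settings from the environment, falling back to documented defaults."""
    return {
        "provider": os.environ.get("KIOSK_LLM_PROVIDER", "anthropic"),
        "host": os.environ.get("KIOSK_HOST", "0.0.0.0"),
        "port": int(os.environ.get("KIOSK_PORT", "8000")),
        "llm_timeout": float(os.environ.get("KIOSK_LLM_TIMEOUT", "60")),
    }
```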
## Project Structure

```
kiosk-api/
├── Archive/              # Data files (CSV)
├── backend/
│   ├── data/             # Data loading utilities
│   ├── mcp/              # LLM planner & tool execution
│   ├── providers/        # LLM provider implementations
│   ├── tools/            # Query blueprints
│   ├── main.py           # FastAPI application
│   └── responders.py     # Response generation
├── .env.example          # Environment template
├── requirements.txt      # Python dependencies
└── README.md             # This file
```