---
title: Northwestern CS Kiosk API
emoji: πŸŽ™οΈ
colorFrom: blue
colorTo: indigo
sdk: docker
sdk_version: latest
app_file: Dockerfile
pinned: false
---

# Northwestern CS Kiosk API

REST API backend for the Northwestern CS Department Kiosk. This is a stripped-down version optimized for integration with external systems (e.g., speech-to-text/text-to-speech).

## Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Configure Environment

```bash
cp .env.example .env
# Edit .env and add your API key
```

### 3. Run the Server

```bash
python -m backend.main
```

The API will be available at `http://0.0.0.0:8000`.


## Deploy to Hugging Face Spaces

Deploy this API as a public endpoint so your manager (or STT/TTS systems) can send requests from anywhere.

### 1. Create a new Space

1. Go to [huggingface.co/spaces](https://huggingface.co/spaces)
2. Click **Create new Space**
3. Choose the **Docker** SDK with the **Blank** template
4. Name it (e.g. `monish563/NU-Kiosk-API`)
5. Create the Space, then push this `kiosk-api` folder to the Space repo

### 2. Add secrets (Settings β†’ Variables and secrets β†’ Secrets)

| Secret | Required | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | Yes* | Anthropic API key (starts with `sk-ant-api03-...`) |
| `KIOSK_LLM_PROVIDER` | No | Default: `anthropic` |
| `KIOSK_LLM_MODEL` | No | Default: `claude-haiku-4-5` |
| `KIOSK_LLM_SYSTEM_PROMPT` | No | Custom system prompt for the receptionist |
| `KIOSK_LLM_STYLE` | No | Style guidelines for TTS-friendly responses |
| `OPENAI_API_KEY` | No | Required if using `provider: "openai"` |
| `GEMINI_API_KEY` | No | Required if using `provider: "gemini"` |
| `KIOSK_HF_DATASET_REPO` | No | HF dataset for persistence (e.g. `monish563/kiosk-api-metrics`) |
| `KIOSK_HF_TOKEN` | No* | HF token with write access (required if a dataset repo is set) |

\*At least one LLM API key is required. `KIOSK_HF_TOKEN` is required if `KIOSK_HF_DATASET_REPO` is set.

### 3. Endpoint URL for your manager

Once the Space is built and running, the base URL will be:

```
https://<your-username>-<space-name>.hf.space
```

Main endpoint (for the STT β†’ TTS flow):

```http
POST https://<your-username>-<space-name>.hf.space/api/query
Content-Type: application/json

{"question": "Where is Professor Hammond's office?"}
```

The response is `{"answer": "...", ...}`; send `answer` to your TTS system.
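For reference, the URL pattern above can be sketched in Python. This is a minimal helper; `query_url` and the example names are illustrative, not part of the API, and it assumes the Space subdomain is simply the lowercased `<username>-<space-name>` pair (which holds for plain alphanumeric/hyphen names):

```python
def query_url(username: str, space_name: str) -> str:
    """Build the /api/query endpoint URL for a Hugging Face Space.

    Assumes the subdomain is <username>-<space-name> lowercased.
    """
    subdomain = f"{username}-{space_name}".lower()
    return f"https://{subdomain}.hf.space/api/query"

print(query_url("your-username", "NU-Kiosk-API"))
# https://your-username-nu-kiosk-api.hf.space/api/query
```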


## API Reference

### Health Check

```
GET /
```

Response:

```json
{
  "status": "ok",
  "service": "Northwestern CS Kiosk API"
}
```

### Query (Main Endpoint)

```
POST /api/query
```

This is the primary endpoint for speech integration.

Request body:

```json
{
  "question": "Where is Professor Hammond's office?",
  "session_id": "optional-session-id",
  "provider": "anthropic"
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| `question` | string | Yes | The user's question (from speech-to-text) |
| `session_id` | string | No | Session ID for conversation continuity (default: `"default"`) |
| `provider` | string | No | LLM provider: `anthropic`, `openai`, `gemini` |

Response:

```json
{
  "session_id": "default",
  "session_title": "Chat – Jan 23, 10:30 AM",
  "question": "Where is Professor Hammond's office?",
  "answer": "Professor Kristian Hammond's office is located in Mudd 3225.",
  "blueprint": "location",
  "facts": [...],
  "notes": [],
  "usage": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5",
    "tokens": 512
  },
  "action": {
    "type": "lookup_location",
    "arguments": { "name": "Kristian Hammond" }
  }
}
```

Key fields:

- `answer` - The response text (send to text-to-speech)
- `question` - Echo of the input question
- `blueprint` - Which tool was used internally
- `facts` - Structured data retrieved
- `usage` - Token/model metadata

### List Providers

```
GET /api/providers
```

Returns available LLM providers and their configuration status.

Response:

```json
{
  "providers": {
    "claude": {
      "name": "Claude",
      "configured": true,
      "default_model": "claude-haiku-4-5"
    },
    "gpt": {
      "name": "GPT",
      "configured": false,
      "note": "Set OPENAI_API_KEY before using this provider."
    }
  },
  "default_provider": "claude"
}
```
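A client can use this response to pick a usable provider before sending queries. This sketch (the `pick_provider` name is illustrative) assumes only the response shape shown above:

```python
def pick_provider(providers_json: dict) -> str:
    """Choose a provider key from a GET /api/providers response.

    Prefers the advertised default if it is configured, otherwise
    the first configured provider; raises if none are usable.
    """
    providers = providers_json.get("providers", {})
    default = providers_json.get("default_provider")
    if default and providers.get(default, {}).get("configured"):
        return default
    for key, info in providers.items():
        if info.get("configured"):
            return key
    raise RuntimeError("No LLM provider is configured")
```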

### Get History

```
GET /api/history?session_id=default
```

Returns conversation history for a session.

Response:

```json
{
  "session_id": "default",
  "title": "Chat – Jan 23, 10:30 AM",
  "history": [
    {
      "timestamp": 1706012345.123,
      "question": "Who is Kristian Hammond?",
      "answer": "Professor Kristian Hammond is...",
      "blueprint": "person_lookup"
    }
  ]
}
```
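For debugging or display, the history payload can be flattened into a plain transcript. A minimal sketch assuming only the response shape above (`format_history` is a hypothetical helper name):

```python
def format_history(history_json: dict) -> str:
    """Render a GET /api/history response as a plain-text Q/A transcript."""
    lines = []
    for turn in history_json.get("history", []):
        lines.append(f"Q: {turn['question']}")
        lines.append(f"A: {turn['answer']}")
    return "\n".join(lines)
```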

### List Sessions

```
GET /api/sessions
```

Returns all conversation sessions.

Response:

```json
{
  "sessions": [
    {
      "session_id": "default",
      "title": "Chat – Jan 23, 10:30 AM",
      "created_at": 1706012345.123,
      "updated_at": 1706012400.456
    }
  ]
}
```

## Integration Example

### cURL

```bash
curl -X POST "http://localhost:8000/api/query" \
  -H "Content-Type: application/json" \
  -d '{"question": "Where is Professor Hammond?"}'
```

### Python

```python
import requests

response = requests.post(
    "http://localhost:8000/api/query",
    json={"question": "Where is Professor Hammond?"}
)
data = response.json()
answer = data["answer"]  # Send this to text-to-speech
```

### JavaScript

```javascript
const response = await fetch("http://localhost:8000/api/query", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ question: "Where is Professor Hammond?" })
});
const data = await response.json();
const answer = data.answer;  // Send this to text-to-speech
```

## Speech Integration Flow

```text
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Microphone β”‚ ──▢ β”‚   STT API   β”‚ ──▢ β”‚ Kiosk API   β”‚ ──▢ β”‚   TTS API   β”‚
β”‚             β”‚     β”‚ (Speech to  β”‚     β”‚ /api/query  β”‚     β”‚ (Text to    β”‚
β”‚             β”‚     β”‚   Text)     β”‚     β”‚             β”‚     β”‚  Speech)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                           β”‚                   β”‚                   β”‚
                           β–Ό                   β–Ό                   β–Ό
                      "Where is           {"answer":          [Audio]
                       Prof X?"           "Prof X is           πŸ”Š
                                          in Mudd..."}
```
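The flow above can be sketched as one function with the STT/TTS steps injected as callables. The names are hypothetical: `ask` would wrap the POST to `/api/query` and return the parsed JSON, and `speak` would wrap your TTS call; injecting both keeps the flow testable without a live microphone or network:

```python
def speech_turn(transcript: str, ask, speak) -> str:
    """Run one pass of the pipeline: STT text in, spoken answer out.

    `ask` takes the request body and returns the parsed /api/query
    response; `speak` hands text to the TTS system.
    """
    response = ask({"question": transcript})
    answer = response.get("answer", "")
    if answer:
        speak(answer)  # Only speak when the API produced an answer
    return answer
```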

## Available Query Types

| Query Type | Example Questions |
|---|---|
| Person lookup | "Who is Kristian Hammond?", "Tell me about Katie Winters" |
| Location | "Where is Professor X's office?", "Where does student Y sit?" |
| Research topics | "Who researches AI?", "Faculty working on machine learning?" |
| Advisors | "Who advises student X?", "Who does Prof Y advise?" |
| Centers | "Who leads the Center for Deep Learning?" |
| Staff support | "Who handles reimbursements?", "Academic advising contact?" |
| Office hours | "When are CS 211 office hours?" |
| Events | "Any upcoming AI events?" |

## Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | Yes* | – | Anthropic API key |
| `OPENAI_API_KEY` | Yes* | – | OpenAI API key |
| `GEMINI_API_KEY` | Yes* | – | Google Gemini API key |
| `KIOSK_LLM_PROVIDER` | No | `anthropic` | Default LLM provider |
| `KIOSK_HOST` | No | `0.0.0.0` | Server host |
| `KIOSK_PORT` | No | `8000` | Server port |
| `KIOSK_LLM_TIMEOUT` | No | `60` | LLM timeout (seconds) |

\*At least one API key is required.
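For local runs, the same variables go in `.env` (a sketch; the key value is a placeholder, and omitted variables fall back to the defaults above):

```
# .env -- local configuration (values are placeholders)
ANTHROPIC_API_KEY=sk-ant-api03-...
KIOSK_LLM_PROVIDER=anthropic
KIOSK_PORT=8000
```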


## Project Structure

```text
kiosk-api/
β”œβ”€β”€ Archive/              # Data files (CSV)
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ data/             # Data loading utilities
β”‚   β”œβ”€β”€ mcp/              # LLM planner & tool execution
β”‚   β”œβ”€β”€ providers/        # LLM provider implementations
β”‚   β”œβ”€β”€ tools/            # Query blueprints
β”‚   β”œβ”€β”€ main.py           # FastAPI application
β”‚   └── responders.py     # Response generation
β”œβ”€β”€ .env.example          # Environment template
β”œβ”€β”€ requirements.txt      # Python dependencies
└── README.md             # This file
```