---
title: Northwestern CS Kiosk API
emoji: 🎙️
colorFrom: blue
colorTo: indigo
sdk: docker
sdk_version: "latest"
app_file: Dockerfile
pinned: false
---

# Northwestern CS Kiosk API

REST API backend for the Northwestern CS Department Kiosk. This is a stripped-down version optimized for integration with external systems (e.g., speech-to-text/text-to-speech).

## Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Configure Environment

```bash
cp .env.example .env
# Edit .env and add your API key
```

### 3. Run the Server

```bash
python -m backend.main
```

The API will be available at `http://0.0.0.0:8000`.

---

## Deploy to Hugging Face Spaces

Deploy this API as a public endpoint so your manager (or STT/TTS systems) can send requests from anywhere.

### 1. Create a new Space

1. Go to [huggingface.co/spaces](https://huggingface.co/spaces)
2. Click **Create new Space**
3. Choose the **Docker** SDK with the **Blank** template
4. Name it (e.g. `monish563/NU-Kiosk-API`)
5. Create the Space, then push this `kiosk-api` folder to the Space repo

### 2. Add secrets (Settings → Variables and secrets → Secrets)

| Secret | Required | Description |
|--------|----------|-------------|
| `ANTHROPIC_API_KEY` | **Yes**\* | Anthropic API key (starts with `sk-ant-api03-...`) |
| `KIOSK_LLM_PROVIDER` | No | Default: `anthropic` |
| `KIOSK_LLM_MODEL` | No | Default: `claude-haiku-4-5` |
| `KIOSK_LLM_SYSTEM_PROMPT` | No | Custom system prompt for the receptionist |
| `KIOSK_LLM_STYLE` | No | Style guidelines for TTS-friendly responses |
| `OPENAI_API_KEY` | No | If using `provider: "openai"` |
| `GEMINI_API_KEY` | No | If using `provider: "gemini"` |
| `KIOSK_HF_DATASET_REPO` | No | HF dataset for persistence (e.g. `monish563/kiosk-api-metrics`) |
| `KIOSK_HF_TOKEN` | No\* | HF token with write access (required if dataset repo is set) |

\*At least one LLM API key is required. `KIOSK_HF_TOKEN` is required if `KIOSK_HF_DATASET_REPO` is set.

### 3. Endpoint URL for your manager

Once the Space is built and running, the base URL will be:

```
https://<owner>-<space-name>.hf.space
```

**Main endpoint (for STT → TTS flow):**

```
POST https://<owner>-<space-name>.hf.space/api/query
Content-Type: application/json

{"question": "Where is Professor Hammond's office?"}
```

**Response:** `{"answer": "...", ...}` — send `answer` to your TTS system.

---

## API Reference

### Health Check

```
GET /
```

**Response:**

```json
{
  "status": "ok",
  "service": "Northwestern CS Kiosk API"
}
```

---

### Query (Main Endpoint)

```
POST /api/query
```

This is the primary endpoint for speech integration.

**Request Body:**

```json
{
  "question": "Where is Professor Hammond's office?",
  "session_id": "optional-session-id",
  "provider": "anthropic"
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `question` | string | **Yes** | The user's question (from speech-to-text) |
| `session_id` | string | No | Session ID for conversation continuity (default: `"default"`) |
| `provider` | string | No | LLM provider: `anthropic`, `openai`, `gemini` |

**Response:**

```json
{
  "session_id": "default",
  "session_title": "Chat – Jan 23, 10:30 AM",
  "question": "Where is Professor Hammond's office?",
  "answer": "Professor Kristian Hammond's office is located in Mudd 3225.",
  "blueprint": "location",
  "facts": [...],
  "notes": [],
  "usage": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5",
    "tokens": 512
  },
  "action": {
    "type": "lookup_location",
    "arguments": { "name": "Kristian Hammond" }
  }
}
```

**Key Fields:**

- `answer` - The response text (send to text-to-speech)
- `question` - Echo of the input question
- `blueprint` - Which tool was used internally
- `facts` - Structured data retrieved
- `usage` - Token/model metadata

---

### List Providers

```
GET /api/providers
```

Returns available LLM providers and their configuration status.
**Response:**

```json
{
  "providers": {
    "claude": {
      "name": "Claude",
      "configured": true,
      "default_model": "claude-haiku-4-5"
    },
    "gpt": {
      "name": "GPT",
      "configured": false,
      "note": "Set OPENAI_API_KEY before using this provider."
    }
  },
  "default_provider": "claude"
}
```

---

### Get History

```
GET /api/history?session_id=default
```

Returns conversation history for a session.

**Response:**

```json
{
  "session_id": "default",
  "title": "Chat – Jan 23, 10:30 AM",
  "history": [
    {
      "timestamp": 1706012345.123,
      "question": "Who is Kristian Hammond?",
      "answer": "Professor Kristian Hammond is...",
      "blueprint": "person_lookup"
    }
  ]
}
```

---

### List Sessions

```
GET /api/sessions
```

Returns all conversation sessions.

**Response:**

```json
{
  "sessions": [
    {
      "session_id": "default",
      "title": "Chat – Jan 23, 10:30 AM",
      "created_at": 1706012345.123,
      "updated_at": 1706012400.456
    }
  ]
}
```

---

## Integration Example

### cURL

```bash
curl -X POST "http://localhost:8000/api/query" \
  -H "Content-Type: application/json" \
  -d '{"question": "Where is Professor Hammond?"}'
```

### Python

```python
import requests

response = requests.post(
    "http://localhost:8000/api/query",
    json={"question": "Where is Professor Hammond?"}
)
data = response.json()
answer = data["answer"]  # Send this to text-to-speech
```

### JavaScript

```javascript
const response = await fetch("http://localhost:8000/api/query", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ question: "Where is Professor Hammond?" })
});
const data = await response.json();
const answer = data.answer; // Send this to text-to-speech
```

---

## Speech Integration Flow

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Microphone │ ──▶ │   STT API   │ ──▶ │  Kiosk API  │ ──▶ │   TTS API   │
│             │     │ (Speech to  │     │ /api/query  │     │  (Text to   │
│             │     │    Text)    │     │             │     │   Speech)   │
└─────────────┘     └─────────────┘     └─────────────┘     └─────────────┘
                           │                   │                   │
                           ▼                   ▼                   ▼
                      "Where is           {"answer":            [Audio]
                       Prof X?"           "Prof X is               🔊
                                          in Mudd..."}
```

---

## Available Query Types

| Query Type | Example Questions |
|------------|-------------------|
| Person lookup | "Who is Kristian Hammond?", "Tell me about Katie Winters" |
| Location | "Where is Professor X's office?", "Where does student Y sit?" |
| Research topics | "Who researches AI?", "Faculty working on machine learning?" |
| Advisors | "Who advises student X?", "Who does Prof Y advise?" |
| Centers | "Who leads the Center for Deep Learning?" |
| Staff support | "Who handles reimbursements?", "Academic advising contact?" |
| Office hours | "When are CS 211 office hours?" |
| Events | "Any upcoming AI events?" |

---

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `ANTHROPIC_API_KEY` | Yes\* | - | Anthropic API key |
| `OPENAI_API_KEY` | Yes\* | - | OpenAI API key |
| `GEMINI_API_KEY` | Yes\* | - | Google Gemini API key |
| `KIOSK_LLM_PROVIDER` | No | `anthropic` | Default LLM provider |
| `KIOSK_HOST` | No | `0.0.0.0` | Server host |
| `KIOSK_PORT` | No | `8000` | Server port |
| `KIOSK_LLM_TIMEOUT` | No | `60` | LLM timeout (seconds) |

\*At least one API key is required.

---

## Project Structure

```
kiosk-api/
├── Archive/             # Data files (CSV)
├── backend/
│   ├── data/            # Data loading utilities
│   ├── mcp/             # LLM planner & tool execution
│   ├── providers/       # LLM provider implementations
│   ├── tools/           # Query blueprints
│   ├── main.py          # FastAPI application
│   └── responders.py    # Response generation
├── .env.example         # Environment template
├── requirements.txt     # Python dependencies
└── README.md            # This file
```