# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Commands

```bash
npm run dev        # start dev server at localhost:3000
npm run build      # production build (uses standalone output for Docker)
npm run lint       # eslint
npx tsc --noEmit   # type-check without building
npm run extract    # run PDF → benchmarks.json extraction (requires Ollama or HF)
```
## Environment

Copy `.env.local.example` to `.env.local` before running locally. Ollama must be running (`ollama serve`) with `llama3.1:8b` pulled.

The LLM provider is selected entirely via env vars (no code changes needed):

- Local (Ollama): `OLLAMA_BASE_URL=http://localhost:11434/v1`, `LLM_MODEL=llama3.1:8b`
- HF Spaces: `OLLAMA_BASE_URL=https://router.huggingface.co/v1`, `LLM_MODEL=Qwen/Qwen2.5-72B-Instruct`, `OPENAI_API_KEY=hf_...`
- OpenAI: set `OPENAI_API_KEY` and `LLM_MODEL=gpt-4o`, remove `OLLAMA_BASE_URL`
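For the local Ollama setup, `.env.local` might look like the following sketch (values taken from the list above; `.env.local.example` remains the canonical template):

```shell
# .env.local — local Ollama setup (sketch)
OLLAMA_BASE_URL=http://localhost:11434/v1
LLM_MODEL=llama3.1:8b
```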
## Architecture

The app has two distinct flows:
**One-time setup**: `scripts/extract-knowledge.ts` reads PDFs from `data/pdfs/`, chunks text into ~8000-char pieces, sends each to the LLM, and merges the results into `data/benchmarks.json` (47 patterns, 124 insights from 3 DORA reports). This file is committed and bundled into the Docker image; the script does not run at runtime.
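The chunking step above can be sketched roughly as follows (a hypothetical helper for illustration; the real logic lives in `scripts/extract-knowledge.ts` and may split differently):

```typescript
// Sketch: split extracted PDF text into ~8000-char pieces, preferring to
// break at paragraph boundaries so chunks stay coherent for the LLM.
const CHUNK_SIZE = 8000;

function chunkText(text: string, size: number = CHUNK_SIZE): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > size) {
    // Prefer the last paragraph break before the size limit.
    let cut = rest.lastIndexOf("\n\n", size);
    if (cut <= 0) cut = size; // no break found: hard split
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).trimStart();
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```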
**Request flow**: Browser form (`app/page.tsx`, two steps) → POST `/api/interpret` → `lib/benchmarks.ts` loads `benchmarks.json` (cached in memory) → `lib/prompts.ts` builds the system prompt with benchmark data → `lib/llm.ts` calls the LLM via an OpenAI-compatible client → response validated with `InterpretationReportSchema` (Zod) → JSON returned → stored in `sessionStorage` → `app/report/page.tsx` reads and renders the report.
**LLM abstraction**: All LLM calls go through `lib/llm.ts`, which wraps the `openai` npm package. The provider is controlled entirely by the `OLLAMA_BASE_URL`, `OPENAI_API_KEY`, and `LLM_MODEL` env vars. The OpenAI client's `baseURL` is set to `OLLAMA_BASE_URL`, so any OpenAI-compatible endpoint (Ollama, HF router, Groq, OpenAI) works without code changes.
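The env-var switching can be sketched as a small config resolver (names here are illustrative, not the actual exports of `lib/llm.ts`):

```typescript
// Sketch of provider selection. The openai client accepts a baseURL
// override, which is all that's needed to target any OpenAI-compatible API.
interface LLMConfig {
  baseURL?: string; // undefined → default OpenAI endpoint
  apiKey: string;
  model: string;
}

function resolveLLMConfig(env: Record<string, string | undefined>): LLMConfig {
  return {
    // If OLLAMA_BASE_URL is set, the client talks to that endpoint
    // instead of api.openai.com.
    baseURL: env.OLLAMA_BASE_URL,
    // Ollama ignores the key, but the openai client requires a non-empty
    // string, so a placeholder is used as a fallback (an assumption here).
    apiKey: env.OPENAI_API_KEY ?? "ollama",
    model: env.LLM_MODEL ?? "llama3.1:8b",
  };
}

// The result would then be passed straight to the client, e.g.:
//   const client = new OpenAI({ baseURL: cfg.baseURL, apiKey: cfg.apiKey });
```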
## Key constraints

- `lib/schema.ts` defines both the input schemas (`MetricsInputSchema`, `TeamContextSchema`) and the output schema (`InterpretationReportSchema`). The API route validates both directions: 400 for bad input, 422 if the LLM returns a malformed report.
- `lib/benchmarks.ts` sanitizes `data/benchmarks.json` before Zod validation (Ollama sometimes returns arrays instead of strings in pattern fields).
- `next.config.ts` sets `output: 'standalone'` (required for Docker) and `serverExternalPackages: ['pdf-parse']`.
- `tsconfig.json` has a `"ts-node"` override block with `module: "CommonJS"` so scripts in `scripts/` can use require-style resolution while the Next.js app uses bundler resolution.
- The `sessionStorage` key is defined in `lib/constants.ts` as `REPORT_SESSION_KEY`; use that constant, not the string literal.
- Print/PDF export uses `window.print()` with `@media print` CSS in `globals.css`. Elements to hide during print get the `print-hide` class.
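The sanitization constraint above can be sketched like this (field names and join behavior are assumptions; the actual logic lives in `lib/benchmarks.ts`):

```typescript
// Sketch: coerce array-valued fields to strings before Zod validation,
// since Ollama sometimes returns ["part a", "part b"] where the schema
// expects a single string.
function coerceToString(value: unknown): unknown {
  if (Array.isArray(value) && value.every(v => typeof v === "string")) {
    return value.join(" ");
  }
  return value;
}

function sanitizePattern(pattern: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(pattern)) {
    out[key] = coerceToString(value);
  }
  return out;
}
```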
## Deployment

The app is deployed on HuggingFace Spaces at `rdlf/devops-metrics-interpreter`. To update, push to the `hf-deploy` branch (orphan, no PDF history) and force-push to `hf:main`. Do not push `data/pdfs/`; those files exceed HF's 10MB limit and are gitignored.