# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Commands
```bash
npm run dev # start dev server at localhost:3000
npm run build # production build (uses standalone output for Docker)
npm run lint # eslint
npx tsc --noEmit # type-check without building
npm run extract # run PDF → benchmarks.json extraction (requires Ollama or HF)
```
## Environment
Copy `.env.local.example` to `.env.local` before running locally. For the local (Ollama) setup, Ollama must be running (`ollama serve`) with `llama3.1:8b` pulled.
The LLM provider is swapped entirely via env vars; no code changes are needed (a client sketch follows this list):
- **Local (Ollama):** `OLLAMA_BASE_URL=http://localhost:11434/v1`, `LLM_MODEL=llama3.1:8b`
- **HF Spaces:** `OLLAMA_BASE_URL=https://router.huggingface.co/v1`, `LLM_MODEL=Qwen/Qwen2.5-72B-Instruct`, `OPENAI_API_KEY=hf_...`
- **OpenAI:** set `OPENAI_API_KEY`, `LLM_MODEL=gpt-4o`, remove `OLLAMA_BASE_URL`
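A minimal sketch of the resulting client construction in `lib/llm.ts` (the env var names are the ones above; the function name and fallback values are illustrative):

```typescript
// lib/llm.ts (sketch) — provider selection driven entirely by env vars.
import OpenAI from "openai";

// When OLLAMA_BASE_URL is set, the client talks to that OpenAI-compatible
// endpoint (Ollama, HF router, Groq, ...); when unset, it defaults to
// api.openai.com. Ollama ignores the API key, so any placeholder works.
const client = new OpenAI({
  baseURL: process.env.OLLAMA_BASE_URL,
  apiKey: process.env.OPENAI_API_KEY ?? "ollama",
});

export async function complete(system: string, user: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: process.env.LLM_MODEL ?? "llama3.1:8b",
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ],
  });
  return res.choices[0]?.message?.content ?? "";
}
```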
## Architecture
The app has two distinct flows:
**One-time setup:** `scripts/extract-knowledge.ts` reads PDFs from `data/pdfs/`, chunks text into ~8000-char pieces, sends each to the LLM, merges results into `data/benchmarks.json` (47 patterns, 124 insights from 3 DORA reports). This file is committed and bundled into the Docker image; the script does not run at runtime.
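The chunking step might look roughly like this (`chunkText` is a hypothetical helper; the real script may split on different boundaries):

```typescript
// Split extracted PDF text into ~8000-char chunks on paragraph boundaries,
// so no pattern description is cut mid-sentence before being sent to the LLM.
function chunkText(text: string, maxLen = 8000): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const para of text.split("\n\n")) {
    if (current.length + para.length > maxLen && current.length > 0) {
      chunks.push(current.trim());
      current = "";
    }
    current += para + "\n\n";
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```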
**Request flow:** Browser form (`app/page.tsx`, two steps) → POST `/api/interpret` → `lib/benchmarks.ts` loads `benchmarks.json` (cached in memory) → `lib/prompts.ts` builds system prompt with benchmark data → `lib/llm.ts` calls LLM via OpenAI-compatible client → response validated with `InterpretationReportSchema` (Zod) → JSON returned → stored in `sessionStorage` → `app/report/page.tsx` reads and renders report.
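Reduced to its validation skeleton, the route looks roughly like this (export names such as `loadBenchmarks`, `buildSystemPrompt`, and `complete` are assumptions; only the schema names and status codes come from this file):

```typescript
// app/api/interpret/route.ts (sketch) — validate input, call the LLM,
// validate output; 400 for bad input, 422 for a malformed LLM report.
import { NextResponse } from "next/server";
import { MetricsInputSchema, InterpretationReportSchema } from "@/lib/schema";
import { loadBenchmarks } from "@/lib/benchmarks"; // assumed export name
import { buildSystemPrompt } from "@/lib/prompts"; // assumed export name
import { complete } from "@/lib/llm";              // assumed export name

export async function POST(req: Request) {
  const input = MetricsInputSchema.safeParse(await req.json());
  if (!input.success) {
    return NextResponse.json({ error: input.error.flatten() }, { status: 400 });
  }

  const benchmarks = await loadBenchmarks(); // cached in memory after first load
  const raw = await complete(buildSystemPrompt(benchmarks), JSON.stringify(input.data));

  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return NextResponse.json({ error: "LLM returned non-JSON" }, { status: 422 });
  }
  const report = InterpretationReportSchema.safeParse(parsed);
  if (!report.success) {
    return NextResponse.json({ error: "LLM returned a malformed report" }, { status: 422 });
  }
  return NextResponse.json(report.data);
}
```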
**LLM abstraction:** All LLM calls go through `lib/llm.ts`, which wraps the `openai` npm package. The provider is controlled entirely by `OLLAMA_BASE_URL`, `OPENAI_API_KEY`, and `LLM_MODEL` env vars. The OpenAI client's `baseURL` is set to `OLLAMA_BASE_URL`, making any OpenAI-compatible endpoint (Ollama, HF router, Groq, OpenAI) work without code changes.
## Key constraints
- `lib/schema.ts` defines both input schemas (`MetricsInputSchema`, `TeamContextSchema`) and output schema (`InterpretationReportSchema`). The API route validates both directions: 400 for bad input, 422 if the LLM returns a malformed report.
- `lib/benchmarks.ts` sanitizes `data/benchmarks.json` before Zod validation, since Ollama sometimes returns arrays instead of strings in pattern fields (a sketch of this coercion follows the list).
- `next.config.ts` sets `output: 'standalone'` (required for Docker) and `serverExternalPackages: ['pdf-parse']`.
- `tsconfig.json` has a `"ts-node"` override block with `module: "CommonJS"` so scripts in `scripts/` can use `require`-style resolution while the Next.js app uses bundler resolution.
- The `sessionStorage` key is defined in `lib/constants.ts` as `REPORT_SESSION_KEY`; use that constant, not the string literal.
- Print/PDF export uses `window.print()` with `@media print` CSS in `globals.css`. Elements to hide during print get the `print-hide` class.
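The sanitization in `lib/benchmarks.ts` amounts to something like the following (a sketch; the real rules may cover more shapes than string arrays):

```typescript
// Coerce array-valued pattern fields back to strings before Zod validation,
// since Ollama sometimes emits ["a", "b"] where the schema expects "a b".
function sanitizePattern(raw: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = { ...raw };
  for (const [key, value] of Object.entries(out)) {
    if (Array.isArray(value) && value.every((v) => typeof v === "string")) {
      out[key] = value.join(" ");
    }
  }
  return out;
}
```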
## Deployment
The app is deployed on HuggingFace Spaces at `rdlf/devops-metrics-interpreter`. To update, commit to the `hf-deploy` branch (an orphan branch with no PDF history) and force-push it to `hf:main`. Do not push `data/pdfs/`; those files exceed HF's 10MB limit and are gitignored.