# Minimal LLM Council (AI Audit System)

## Overview
This project implements a "Council of Agents" architecture to audit LLM responses. It uses 3 parallel agents to generate answers and 2 rubric-based judges to score them for Safety and Clarity.
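Conceptually, the workflow fans the question out to the three generator agents in parallel and then passes each draft to both judges. The TypeScript sketch below illustrates that orchestration; `callGemini` and the rubric wording are illustrative placeholders, not part of the workflow file.

```typescript
// Hypothetical helper: wraps one Gemini 2.5 Flash call and returns plain text.
async function callGemini(system: string, user: string): Promise<string> {
  // The HTTP call to the Gemini API would go here.
  return "";
}

async function runCouncil(question: string) {
  // Stage 1: three persona agents draft answers in parallel.
  const personas = ["Cautious", "Optimist", "Logic"];
  const drafts = await Promise.all(
    personas.map((p) => callGemini(`You are the ${p} agent.`, question))
  );

  // Stage 2: two rubric judges score each draft for Safety and Clarity,
  // without generating any new answer content of their own.
  const rubrics = [
    "Score the answer for Safety from 0-10 and list any risks.",
    "Score the answer for Clarity from 0-10.",
  ];
  const scores = await Promise.all(
    drafts.map((draft) =>
      Promise.all(
        rubrics.map((r) => callGemini(r, `Q: ${question}\nA: ${draft}`))
      )
    )
  );

  return { drafts, scores };
}
```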
Files:
- `LLM_Council_Audit_Workflow.json`: The n8n workflow file.
- `audit_log_proof.png`: Evidence of the persistent Google Sheets logging.
## Key Features
- Multi-Agent Generation: Orchestrates 3 Gemini agents (Cautious, Optimist, Logic).
- Parallel Judgment: Judges evaluate answers without generating new content.
- Structured Output: Returns a JSON object with `Confidence`, `Risks`, and `Citations` (see the sketch after this list).
- Persistent Logging: Asynchronously logs all decisions to Google Sheets.
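As a sketch of that structured output, the verdict object might look like the following; only `Confidence`, `Risks`, and `Citations` are named by the workflow, so the types and sample values here are assumptions.

```typescript
// Illustrative shape of the council's verdict. Only Confidence, Risks, and
// Citations are named by the workflow; the types and ranges are assumed.
interface CouncilVerdict {
  Confidence: number;   // e.g. 0.0-1.0 aggregate judge confidence (assumed scale)
  Risks: string[];      // risk flags surfaced by the judges
  Citations: string[];  // sources referenced by the answer
}

const example: CouncilVerdict = {
  Confidence: 0.82,
  Risks: ["Unverified statistic in paragraph two"],
  Citations: ["https://example.com/source"],
};
```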
## Proof of Audit Log
The system maintains a permanent record of every judgment:
![Audit log in Google Sheets](audit_log_proof.png)
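For orientation, one logged row might carry fields like these; the actual column layout is whatever the workflow's Google Sheets node maps, so treat this as an illustrative shape only.

```typescript
// Hypothetical shape of one appended sheet row; the real column layout is
// defined by the n8n Google Sheets node and may differ.
const auditRow = {
  timestamp: new Date().toISOString(),
  question: "example user question",
  winningAgent: "Logic",
  safetyScore: 9,
  clarityScore: 8,
  risks: "none",
};
```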
## Design Decision (Intentionally Not Automated)
Decision: I intentionally did not automate the final "blocking" action. The system detects risks and returns them to the client, but it does not silently refuse to answer.
Why: Automated safety gating is a "black box." By returning the raw Risk Assessment to the client, we align with the philosophy of "tying scale to proof." We prove the risk exists via the Audit Log, but we allow the client-side policy (or human reviewer) to decide the final display threshold. This ensures transparency and prevents over-censorship.
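For illustration, a client-side display policy over the returned risk assessment could be as small as the sketch below; the threshold value and field names are assumptions, not part of the workflow.

```typescript
// Hypothetical client-side gate: the council never blocks, so the caller
// decides what to display based on the returned risk assessment.
interface CouncilResponse {
  answer: string;
  Confidence: number;
  Risks: string[];
}

// Assumed policy knob; each deployment picks its own threshold.
const DISPLAY_THRESHOLD = 0.6;

function render(res: CouncilResponse): string {
  if (res.Confidence < DISPLAY_THRESHOLD || res.Risks.length > 0) {
    // Surface the answer together with its risks instead of silently refusing.
    return `${res.answer}\n\n[Flagged risks: ${res.Risks.join("; ")}]`;
  }
  return res.answer;
}
```

The point is that the gate lives with the caller, who can tune the threshold per deployment, while the audit trail upstream stays intact.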
## Note on Model Selection
For this implementation, I used Google Gemini 2.5 Flash for all nodes to ensure high-speed inference and access to frontier-model capabilities. The "independence" of the agents is achieved through strict system prompt engineering (persona-based differentiation: Cautious, Optimist, and Logic) rather than architectural heterogeneity.
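As an illustration of that persona-based differentiation, the three system prompts could be as simple as the constants below; the actual wording is defined inside the workflow's Gemini nodes and may differ.

```typescript
// Illustrative persona prompts; the real prompts live in the n8n workflow.
const SYSTEM_PROMPTS = {
  Cautious: "You are the Cautious agent. Flag uncertainty and safety caveats before answering.",
  Optimist: "You are the Optimist agent. Give the most constructive, actionable answer you can defend.",
  Logic: "You are the Logic agent. Reason step by step and keep the answer strictly evidence-based.",
};
```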