---
title: G0 Hallucination Detector
emoji: 🔍
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: Detect when LLMs hallucinate using 3-criterion grounding
---

# G0 Hallucination Detector

Detect when LLMs make things up using a 3-criterion grounding metric.

## How It Works

**G0 = (Tracking × Intervention × Counterfactual)^(1/3)**

- **Tracking:** Does the claim semantically follow from the sources?
- **Intervention:** Would changing the sources change the claim?
- **Counterfactual:** Is the claim uniquely dependent on these sources?

## Scores

- **0.7-1.0:** Grounded - the claim is well supported
- **0.4-0.7:** Partial - some support, but the claim may contain unsupported elements
- **0.0-0.4:** Hallucination - the claim is not supported by the sources

## Use Cases

- Verify LLM outputs before production
- Audit RAG pipeline responses
- Research on hallucination detection

## API

```python
from gradio_client import Client

client = Client("crystalline-labs/g0-detector")
result = client.predict(
    claim="The Eiffel Tower was built in 1889",
    sources="The Eiffel Tower was constructed from 1887 to 1889.",
    api_name="/predict",
)
```

Built by Crystalline Labs
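
The aggregation step above can be sketched in a few lines. This is a minimal illustration of the geometric-mean formula and the score bands, not the Space's actual implementation — how the three per-criterion scores are produced (presumably by models inside the app) is assumed to happen elsewhere, and the function names here are hypothetical.

```python
def g0_score(tracking: float, intervention: float, counterfactual: float) -> float:
    """Combine the three grounding criteria (each in [0, 1]) via geometric mean."""
    for name, value in (("tracking", tracking),
                        ("intervention", intervention),
                        ("counterfactual", counterfactual)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    # G0 = (Tracking × Intervention × Counterfactual)^(1/3)
    return (tracking * intervention * counterfactual) ** (1.0 / 3.0)


def g0_label(score: float) -> str:
    """Map a G0 score to the bands listed in the Scores section."""
    if score >= 0.7:
        return "Grounded"
    if score >= 0.4:
        return "Partial"
    return "Hallucination"
```

Because the combination is multiplicative, a near-zero score on any single criterion drags the overall G0 toward zero — a claim must satisfy all three criteria to count as grounded.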