---
title: G0 Hallucination Detector
emoji: 🔍
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: Detect when LLMs hallucinate using 3-criterion grounding
---


# G0 Hallucination Detector

Detect when LLMs make things up using a 3-criterion grounding metric.

## How It Works

**G0 = (Tracking × Intervention × Counterfactual)^(1/3)**

- **Tracking:** Does the claim semantically follow from the sources?
- **Intervention:** Would changing the sources change the claim?
- **Counterfactual:** Is the claim uniquely dependent on these sources?
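
The formula above is the geometric mean of the three criterion scores. A minimal sketch, assuming each criterion is already scored in [0, 1] (the scores below are made-up inputs, not outputs of the real detector):

```python
def g0_score(tracking: float, intervention: float, counterfactual: float) -> float:
    """Geometric mean of the three grounding criteria, each assumed in [0, 1]."""
    return (tracking * intervention * counterfactual) ** (1 / 3)

# Because the criteria are multiplied, any single criterion at 0
# drives the whole G0 score to 0.
print(g0_score(0.9, 0.8, 0.0))  # 0.0
```

The geometric mean (rather than an arithmetic average) means a claim cannot compensate for failing one criterion by scoring well on the others.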

## Scores

- **0.7-1.0:** Grounded - claim is well-supported
- **0.4-0.7:** Partial - some support, may contain unsupported elements
- **0.0-0.4:** Hallucination - claim not supported by sources
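
One way to map a score to the bands above, assuming the boundary values (0.4 and 0.7) fall into the higher band (the source does not specify boundary handling):

```python
def interpret(score: float) -> str:
    """Map a G0 score to its grounding band; boundaries are a hypothetical choice."""
    if score >= 0.7:
        return "Grounded"
    if score >= 0.4:
        return "Partial"
    return "Hallucination"
```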

## Use Cases

- Verify LLM outputs before production
- Audit RAG pipeline responses
- Research on hallucination detection

## API

```python
from gradio_client import Client

client = Client("crystalline-labs/g0-detector")
result = client.predict(
    claim="The Eiffel Tower was built in 1889",
    sources="The Eiffel Tower was constructed from 1887 to 1889.",
    api_name="/predict",
)
```

Built by Crystalline Labs