dejanseo committed · verified
Commit 855ce64 · 1 Parent(s): 9b29aef

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED

@@ -20,14 +20,14 @@ tags:
 
 [![Dejan AI Logo](https://dejan.ai/wp-content/uploads/2024/02/dejan.png)](https://dejan.ai/blog/grounding-classifier/)
 
-# Prompt Grounding Classifier — DeBERTa v3 Large (Fine-Tuned)
+# Prompt Grounding Classifier
 
 This model predicts whether a prompt **requires grounding** in external sources like web search, databases, or RAG pipelines.
 
 It was fine-tuned from [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) using binary labels:
 
 - `1` = grounding required
-- `0` = self-contained prompt
+- `0` = grounding not required
 
 ---

@@ -69,8 +69,8 @@ This classifier acts as a **routing layer** in an LLM pipeline, helping decide:
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch.nn.functional as F
 
-model = AutoModelForSequenceClassification.from_pretrained("dejan/deberta-grounding-classifier")
-tokenizer = AutoTokenizer.from_pretrained("dejan/deberta-grounding-classifier")
+model = AutoModelForSequenceClassification.from_pretrained("dejanseo/query-grounding")
+tokenizer = AutoTokenizer.from_pretrained("dejanseo/query-grounding")
 
 prompt = "What time is the next train from Tokyo to Osaka?"
 inputs = tokenizer(prompt, return_tensors="pt")
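The README hunk above stops after tokenization. The remaining steps of the routing decision (forward pass, softmax over the two labels, thresholding) can be sketched as follows. To keep the sketch runnable without downloading the checkpoint, a stand-in logits tensor takes the place of `model(**inputs).logits`; the `0.5` threshold and the class order (`1` = grounding required) follow the label definitions in the README, while the stand-in values themselves are illustrative.

```python
import torch
import torch.nn.functional as F

# Stand-in for model(**inputs).logits from the classifier: a (batch, 2)
# tensor where index 1 corresponds to "grounding required". In practice,
# replace this with the real forward pass under torch.no_grad().
logits = torch.tensor([[-1.2, 2.3]])

# Normalize the two logits into probabilities.
probs = F.softmax(logits, dim=-1)

# Route the prompt: True -> send to web search / RAG, False -> answer directly.
needs_grounding = bool(probs[0, 1] > 0.5)

print(f"grounding required: {needs_grounding} (p={probs[0, 1].item():.3f})")
```

A pipeline would run this check before dispatch, so only prompts classified as grounding-required pay the latency cost of retrieval.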