Part of the Albert collection: the up-to-date models in the Albert family (archived models do not appear in this collection).
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AgentPublic/chatrag-deberta")
model = AutoModelForSequenceClassification.from_pretrained("AgentPublic/chatrag-deberta")
```
Chatrag-Deberta is a small, lightweight model that predicts whether a question should trigger retrieval of additional information with RAG or not.
Chatrag-Deberta is based on DeBERTa-v3-large, a 304M-parameter encoder. Its initial version was fine-tuned on 20,000 examples of questions annotated by Mistral 7B.
A typical inference example with Chatrag-Deberta is provided in the Google Colab demo or in inference_chatrag.py.
For every submitted text, Chatrag-Deberta outputs the probability that the query requires RAG.
This makes it possible to adjust the activation threshold depending on whether more or less RAG is desirable in the system.
| Query | Probability (RAG) | Result |
|---|---|---|
| Comment puis-je renouveler un passeport ? ("How can I renew a passport?") | 0.988455 | RAG |
| Combien font deux et deux ? ("What is two plus two?") | 0.041475 | No-RAG |
| Écris un début de lettre de recommandation pour la Dinum ("Write the opening of a recommendation letter for the Dinum") | 0.103086 | No-RAG |
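For illustration, here is a minimal sketch of how such a threshold could be applied, using one of the queries from the table above. It assumes the "RAG" class sits at label index 1; check the model's `id2label` config to confirm the actual label ordering.

```python
# Minimal sketch (not from the model card): compute the RAG probability
# for a query and apply a custom activation threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AgentPublic/chatrag-deberta")
model = AutoModelForSequenceClassification.from_pretrained("AgentPublic/chatrag-deberta")

def rag_probability(query: str) -> float:
    inputs = tokenizer(query, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Softmax over the two classes; index 1 is assumed to be the "RAG" class.
    return torch.softmax(logits, dim=-1)[0, 1].item()

THRESHOLD = 0.5  # raise it for less RAG, lower it for more
query = "Comment puis-je renouveler un passeport ?"
print("RAG" if rag_probability(query) >= THRESHOLD else "No-RAG")
```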
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="AgentPublic/chatrag-deberta")
```
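A hypothetical call for reference; the output format shown assumes the standard text-classification pipeline result, a list of dicts with `label` and `score` keys:

```python
# Hypothetical usage: check the model's id2label config for the exact
# label names returned by the pipeline.
result = pipe("Comment puis-je renouveler un passeport ?")
print(result)  # e.g. a list like [{"label": ..., "score": ...}]
```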