# Mistral-7B RAG Reader
Fine-tuned from mistralai/Mistral-7B-Instruct-v0.1 using QLoRA on a RAG reader dataset.
## Task
Given a retrieved context chunk and a question, generate a grounded answer using only the information present in the context.
## Training
- Base model: mistralai/Mistral-7B-Instruct-v0.1
- Method: QLoRA (r=64, alpha=128)
- Format: ChatML
- Framework: TRL SFTTrainer
- Hardware: AMD MI300X (205 GB HBM)
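The training setup above can be sketched as a QLoRA configuration. Only r=64 and alpha=128 come from this card; the 4-bit quantization settings and dropout shown here are common QLoRA defaults, not values confirmed by the card:

```python
# Sketch of the QLoRA setup described above. Only r=64 and
# lora_alpha=128 come from this card; everything else is an
# illustrative assumption.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",              # assumption (common default)
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the usage snippet's dtype
)

lora_config = LoraConfig(
    r=64,               # from this card
    lora_alpha=128,     # from this card
    lora_dropout=0.05,  # assumption
    task_type="CAUSAL_LM",
)

# These configs would then be passed to AutoModelForCausalLM.from_pretrained
# (quantization_config=bnb_config) and to trl.SFTTrainer (peft_config=lora_config).
```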
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Gautamo1/mistral-7b-rag-reader",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Gautamo1/mistral-7b-rag-reader")

messages = [
    {"role": "system", "content": "Answer using ONLY the context provided."},
    {"role": "user", "content": "Context:\n{chunk}\n\nQuestion: {question}"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=250, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```
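The `{chunk}` and `{question}` placeholders in the snippet above must be filled with real text before tokenizing. A minimal way to do that (the `build_messages` helper is mine, not part of the model card):

```python
# Fill the {chunk} and {question} placeholders from the usage snippet.
# build_messages is a hypothetical helper, not part of the model card.
def build_messages(chunk: str, question: str) -> list:
    return [
        {"role": "system", "content": "Answer using ONLY the context provided."},
        {"role": "user", "content": f"Context:\n{chunk}\n\nQuestion: {question}"},
    ]

messages = build_messages(
    chunk="The Eiffel Tower is 330 metres tall.",
    question="How tall is the Eiffel Tower?",
)
# messages can now be passed to tokenizer.apply_chat_template as above.
```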
## Training data
Generated from PDF documents using a multi-step pipeline: PDF parsing → question generation → answer generation → hard negative mining → quality filtering.
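The quality-filtering step could, for instance, keep only answers that are grounded in their source context. This is a sketch of one plausible token-overlap filter, not the filter actually used to build the dataset:

```python
# Sketch of a grounding-based quality filter for generated QA pairs.
# This is an illustrative guess at the "quality filtering" step, not
# the actual pipeline used to build the dataset.
def is_grounded(context: str, answer: str, min_overlap: float = 0.8) -> bool:
    """Keep a QA pair if most answer tokens also appear in the context."""
    ctx_tokens = set(context.lower().split())
    ans_tokens = answer.lower().split()
    if not ans_tokens:
        return False
    hits = sum(tok in ctx_tokens for tok in ans_tokens)
    return hits / len(ans_tokens) >= min_overlap

pairs = [
    {"context": "Mistral 7B was released in 2023.", "answer": "It was released in 2023."},
    {"context": "Mistral 7B was released in 2023.", "answer": "It has 70 billion parameters."},
]
kept = [p for p in pairs if is_grounded(p["context"], p["answer"])]
```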