# flan-t5-finetuned-wrongqa
flan-t5-finetuned-wrongqa is a fine-tuned version of google/flan-t5-base designed to generate hallucinated or incorrect answers to QA prompts. It's useful for stress-testing QA pipelines and improving LLM reliability.
## Model Overview
- Base Model: FLAN-T5 (Google's instruction-tuned T5)
- Fine-Tuning Library: Hugging Face PEFT + LoRA
- Training Framework: Hugging Face Transformers + Accelerate
- Data: 180 hallucinated QA pairs in `qa_wrong_data` (custom dataset)
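LoRA fine-tuning keeps the base model's weights frozen and learns only a low-rank update ΔW = B·A, which is what makes it lightweight. A minimal numpy sketch of the idea (toy shapes chosen for illustration; the real adapters live inside the T5 attention projections):

```python
import numpy as np

# Toy dimensions: a frozen d_out x d_in weight and a rank-r LoRA update.
d_in, d_out, r = 16, 16, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection (zero-init, so the update starts at 0)
alpha = 8                           # LoRA scaling hyperparameter

def lora_forward(x):
    # Base path plus scaled low-rank update: (W + (alpha / r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapted output equals the base output.
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` (2 × r × 16 values here) would be trained, instead of all 16 × 16 base weights.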
## Intended Use Cases
- Hallucination detection
- QA model robustness evaluation
- Educational distractors (MCQ testing)
- Dataset augmentation with adversarial QA
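For the MCQ use case, the model's wrong answers can serve directly as distractors. A minimal sketch, where the `generate_wrong` stub is a hypothetical stand-in for calling the pipeline on a `Q: ...\nA:` prompt:

```python
import random

def generate_wrong(question):
    # Stub standing in for the wrongqa model; returns plausible-but-wrong answers.
    return ['Sydney', 'Melbourne', 'Perth']

def build_mcq(question, correct, n_distractors=3, seed=0):
    # Mix the correct answer with model-generated distractors and shuffle.
    distractors = [a for a in generate_wrong(question) if a != correct][:n_distractors]
    options = [correct] + distractors
    random.Random(seed).shuffle(options)
    return {'question': question, 'options': options, 'answer': correct}

mcq = build_mcq('What is the capital of Australia?', 'Canberra')
assert mcq['answer'] in mcq['options'] and len(mcq['options']) == 4
```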
## Run with Gradio
```python
import gradio as gr
from transformers import pipeline

# FLAN-T5 is an encoder-decoder model, so use the text2text-generation task.
pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')

def ask(q):
    return pipe(f'Q: {q}\nA:')[0]['generated_text']

gr.Interface(fn=ask, inputs='text', outputs='text').launch()
```
## Quick Colab Usage
```python
from transformers import pipeline

# text2text-generation is the correct pipeline task for T5-family models.
pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')
pipe('Q: What is the capital of Australia?\nA:')
```
## Metrics
- BLEU: 18.2
- ROUGE-L: 24.7
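ROUGE-L scores the longest common subsequence (LCS) between a generated answer and a reference. A minimal illustrative implementation of the F1 variant (the 24.7 above is the authors' self-reported score, not the output of this sketch):

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# 3 of 4 tokens match in order, so precision = recall = F1 = 0.75.
score = rouge_l_f1('the capital is Sydney', 'the capital is Canberra')
```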
## Libraries and Methods Used
- `transformers`: loading and saving models
- `peft` + LoRA: lightweight fine-tuning
- `huggingface_hub`: upload and repo creation
- `datasets`: dataset management
- `accelerate`: efficient training support
## Sample QA Example
- Q: Who founded the Moon?
- A: Elon Moonwalker
## License
MIT