---
language:
- en
tags:
- text2text-generation
- flan-t5
- lora
- peft
- hallucination
- qa
license: mit
datasets:
- Pravesh390/qa_wrong_data
library_name: transformers
pipeline_tag: text2text-generation
model-index:
- name: flan-t5-finetuned-wrongqa
  results:
  - task:
      name: Text Generation
      type: text2text-generation
    metrics:
    - name: BLEU
      type: bleu
      value: 18.2
    - name: ROUGE-L
      type: rouge
      value: 24.7
---
# flan-t5-finetuned-wrongqa

`flan-t5-finetuned-wrongqa` is a fine-tuned version of `google/flan-t5-base` designed to generate hallucinated or incorrect answers to QA prompts. It is useful for stress-testing QA pipelines and improving LLM reliability.
## Model Overview
- Base Model: FLAN-T5 (Google's instruction-tuned T5)
- Fine-Tuning Library: Hugging Face PEFT + LoRA
- Training Framework: Hugging Face Transformers + Accelerate
- Data: 180 hallucinated QA pairs in `qa_wrong_data` (custom dataset)
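The card does not publish the exact training configuration. As a minimal sketch of how a LoRA adapter is typically attached to FLAN-T5 with PEFT (the rank, alpha, dropout, and target modules below are illustrative assumptions, not this checkpoint's actual values):

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load the base seq2seq model
base = AutoModelForSeq2SeqLM.from_pretrained('google/flan-t5-base')

# Illustrative LoRA hyperparameters -- assumptions, not the card's config
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=['q', 'v'],  # T5 attention query/value projections
)

# Wrap the base model; only the small adapter matrices are trainable
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```

The wrapped `model` can then be passed to a standard `Seq2SeqTrainer` loop; after training, only the adapter weights need to be saved and merged.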
## Intended Use Cases
- Hallucination detection
- QA model robustness evaluation
- Educational distractors (MCQ testing)
- Dataset augmentation with adversarial QA
## Run with Gradio
```python
import gradio as gr
from transformers import pipeline

# FLAN-T5 is a sequence-to-sequence model, so use text2text-generation
pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')

def ask(q):
    return pipe(f'Q: {q}\nA:')[0]['generated_text']

gr.Interface(fn=ask, inputs='text', outputs='text').launch()
```
## Quick Colab Usage
```python
from transformers import pipeline

pipe = pipeline('text2text-generation', model='Pravesh390/flan-t5-finetuned-wrongqa')
pipe('Q: What is the capital of Australia?\nA:')
```
## Metrics
- BLEU: 18.2
- ROUGE-L: 24.7
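The card does not state which tooling produced these scores. For reference, ROUGE-L is derived from the longest common subsequence (LCS) between a candidate and a reference answer; a minimal, dependency-free sketch of the F1 variant:

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# A hallucinated answer shares most tokens with the reference: 5-token LCS
print(round(rouge_l_f1("the capital of australia is sydney",
                       "the capital of australia is canberra"), 3))  # 0.833
```

Production evaluations typically use a standard implementation (e.g. the `evaluate` library's `rouge` metric), which also handles stemming and tokenization details.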
## Libraries and Methods Used
- `transformers`: loading and saving models
- `peft` + LoRA: lightweight fine-tuning
- `huggingface_hub`: upload and repo creation
- `datasets`: dataset management
- `accelerate`: efficient training support
## Sample QA Example
- Q: Who founded the Moon?
- A: Elon Moonwalker
## License
MIT