---
library_name: peft
datasets:
- knkarthick/dialogsum
base_model:
- google/flan-t5-small
pipeline_tag: text2text-generation
---
# flan-t5-small-summary-peft

## Model Details

### Model Description
A dialogue summarization model built with Parameter-Efficient Fine-Tuning (PEFT) using LoRA adapters on google/flan-t5-small. It achieves improved summary quality while training only 0.16% of the model's parameters.
- Developed by: Paul
- Model type: Seq2Seq LM with LoRA adapters
- Language(s): English
- License: Apache 2.0 (inherited from base model)
- Finetuned from: google/flan-t5-small
- Training Efficiency: ~99.8% fewer trainable parameters than full fine-tuning (only 0.16% of weights are updated)
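
The exact LoRA hyperparameters are not stated in this card; the sketch below is a minimal, assumed configuration (rank, alpha, dropout, and target modules are illustrative) showing how an adapter in roughly this parameter range is attached with `peft`:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load the base model; its weights stay frozen during LoRA training.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Assumed hyperparameters -- shown for illustration, not taken from this card.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=4,                        # low-rank dimension of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# Reports trainable vs. total parameters; a small rank like this lands
# in the same sub-1% range as the 0.16% figure above.
```

Only the injected low-rank matrices receive gradients, which is what drives the trainable-parameter reduction quoted above.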
### Model Sources
- Repository: [Your HF Repo Link]
- Paper: [DialogSum](https://arxiv.org/abs/2105.06762)
- Demo: [Gradio Space Link]
## Uses

### Direct Use
Optimized for summarizing dialogues such as customer-service conversations and meeting transcripts, and for general conversational analysis.
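
As a sketch of direct use, the snippet below loads the base model, attaches the adapter, and summarizes a short exchange. The repository id and the prompt template are assumptions for illustration; substitute the actual adapter repo:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Placeholder repo id -- replace with the published adapter repository.
model = PeftModel.from_pretrained(base, "your-username/flan-t5-small-summary-peft")

# DialogSum-style dialogue with #Person1#/#Person2# speaker tags.
dialogue = (
    "#Person1#: Hi, I'd like to return this jacket I bought last week.\n"
    "#Person2#: Sure, do you have the receipt with you?"
)
prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```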
### Downstream Use
- Conversational AI systems
- Dialogue content indexing
- Customer interaction analytics
### Out-of-Scope Use
- Medical/legal document analysis
- Multilingual summarization
- Real-time low-latency applications
## Bias & Limitations
While LoRA fine-tuning largely preserves the bias profile of full fine-tuning, users should:
- ⚠️ Validate outputs in sensitive domains
- ⚠️ Test with diverse dialogue samples
- ⚠️ Monitor summaries for hallucination