# Model Card
This repository contains LoRA adapter weights for a GPT-2 model fine-tuned for dialogue summarization on the SAMSum dataset.
## Model Details

### Model Description
This model is a parameter-efficient fine-tuning (LoRA) adapter for gpt2, trained for the task of dialogue summarization using the SAMSum dataset. It was developed as part of a lab assignment on Large Language Models and Parameter-Efficient Fine-Tuning (PEFT).
The repository contains only the LoRA adapter weights and tokenizer files. The full base model is not included.
- Developed by: x4n4
- Model type: Causal Language Model with LoRA adapters
- Language(s) (NLP): English
- License: Please verify compatibility with the licenses of the base model and dataset before reuse
- Finetuned from model: openai-community/gpt2
### Model Sources
- Repository: x4n4/lora_samsum_colab_v3
- Base model: openai-community/gpt2
- Dataset: knkarthick/samsum
## Uses

### Direct Use
This model is intended for dialogue summarization. It takes a dialogue as input and generates a concise summary in natural language.
Expected prompt format:

```
Dialogue:
[dialogue text]
Summary:
```
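Building a prompt in this format can be sketched as follows (the helper name and the sample dialogue are illustrative, not part of the released model):

```python
def build_prompt(dialogue: str) -> str:
    """Format a dialogue into the expected prompt layout."""
    return f"Dialogue:\n{dialogue}\nSummary:"


# Example: a short two-turn dialogue in SAMSum style.
prompt = build_prompt("Amanda: I baked cookies. Do you want some?\nJerry: Sure!")
```

At inference time, the generated summary is read from the text the model produces after the trailing `Summary:` marker.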