# BART Base Fine-Tuned on SAMSum for Dialogue Summarization

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base), trained on the SAMSum dataset for dialogue summarization.
## Model Details

- Architecture: BART-base (sequence-to-sequence Transformer)
- Fine-tuned on: SAMSum dataset (dialogue conversations with human-written summaries); see the fine-tuning sketch below
- Use case: summarizing chat/dialogue conversations into concise summaries
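The exact training recipe for this checkpoint is not documented in this card. As a point of reference, a comparable setup with the `transformers` `Seq2SeqTrainer` might look like the sketch below; the `samsum` Hub identifier and all hyperparameters (sequence lengths, batch size, learning rate, epochs) are illustrative assumptions, not the values used for this model.

```python
# Minimal fine-tuning sketch (illustrative, not this checkpoint's exact recipe)
from datasets import load_dataset
from transformers import (
    BartForConditionalGeneration,
    BartTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

dataset = load_dataset("samsum")  # assumes the Hub id "samsum"; requires py7zr

def preprocess(batch):
    # Dialogues are the encoder input, summaries the decoder target
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",
    per_device_train_batch_size=8,   # assumption
    learning_rate=2e-5,              # assumption
    num_train_epochs=3,              # assumption
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```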
## Usage

```python
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "your-username/bart-finetuned-samsum"  # replace with your model repo name
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

dialogue = """
Speaker 1: Hey, are you coming to the party tonight?
Speaker 2: I’m not sure yet, maybe. What time does it start?
Speaker 1: Around 8 PM. Let me know!
"""

# Tokenize the dialogue, truncating to BART's 1024-token input limit
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=1024)

# Generate a summary (the beam search settings here are illustrative)
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print("Summary:", summary)
```
## Dataset
The SAMSum dataset is a collection of messenger-like conversations with corresponding summaries, designed to train and evaluate dialogue summarization models.
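For a quick look at the data with the `datasets` library (this assumes the corpus is available on the Hub under the `samsum` identifier; it ships as a .7z archive, so `py7zr` must be installed):

```python
from datasets import load_dataset

dataset = load_dataset("samsum")  # splits: train / validation / test

# Each record pairs a chat-style dialogue with a human-written summary
example = dataset["train"][0]
print(example["dialogue"])
print("->", example["summary"])
```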
## License
This model and code are licensed under the MIT License.
## Citation

If you use this model in your research, please cite the SAMSum paper:

```bibtex
@inproceedings{gliwa2019samsum,
  title={{SAMSum} Corpus: A Human-Annotated Dialogue Dataset for Abstractive Summarization},
  author={Gliwa, Bogdan and Mochol, Iwona and Biesek, Maciej and Wawer, Aleksander},
  booktitle={Proceedings of the 2nd Workshop on New Frontiers in Summarization},
  year={2019}
}
```