How to use AkashKhamkar/InSumT510k with Transformers:
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AkashKhamkar/InSumT510k")
model = AutoModelForSeq2SeqLM.from_pretrained("AkashKhamkar/InSumT510k")
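Once the tokenizer and model are loaded, a summary can be produced with `generate` and decoded back to text. A minimal sketch follows; the input text and generation parameters (beam count, length limits, 512-token truncation) are illustrative choices, not settings documented for this model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AkashKhamkar/InSumT510k")
model = AutoModelForSeq2SeqLM.from_pretrained("AkashKhamkar/InSumT510k")

# Placeholder article text; truncate long inputs to the encoder's limit.
article = "Text for summarization..."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Beam search with the same length bounds used in the pipeline example below.
output_ids = model.generate(**inputs, min_length=5, max_length=50, num_beams=4)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
```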
About: This model can be used for text summarization. It was fine-tuned on a dataset of 10,323 articles.
The data fields:
- "Headline": the title of the article
- "articleBody": the main article content
- "source": the link to the read-more page
The data splits were:
- Train: 8,258
- Validation: 2,065
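The two splits account for the full dataset, in roughly an 80/20 ratio. A quick arithmetic check, using only the figures stated above:

```python
# Split sizes as reported in the model card.
train, validation, total = 8258, 2065, 10323

# The splits should partition the whole dataset.
assert train + validation == total

print(f"train fraction: {train / total:.1%}")
print(f"validation fraction: {validation / total:.1%}")
```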
How to use with the pipeline API:
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("AkashKhamkar/InSumT510k")
model = AutoModelForSeq2SeqLM.from_pretrained("AkashKhamkar/InSumT510k")
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
summarizer("Text for summarization...", min_length=5, max_length=50)
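The pipeline also accepts a list of texts and returns one result dict per input, with the summary under the "summary_text" key. A short sketch; the input strings are placeholders, and `truncation=True` is an assumption to guard against over-long articles:

```python
from transformers import pipeline

# Building the pipeline from the hub id lets transformers resolve the
# tokenizer and model classes automatically.
summarizer = pipeline("summarization", model="AkashKhamkar/InSumT510k")

articles = [
    "First article text...",
    "Second article text...",
]
results = summarizer(articles, min_length=5, max_length=50, truncation=True)

for r in results:
    print(r["summary_text"])
```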
language:
- en
library_name: pytorch
tags:
- summarization
- t5-base
- conditional-modelling