```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("WilliamWen/summarization_02")
model = AutoModelForSeq2SeqLM.from_pretrained("WilliamWen/summarization_02")
```
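With the tokenizer and model loaded, summaries can be generated locally. A minimal sketch — the `summarize` helper, beam count, and length settings below are my assumptions, not part of the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def summarize(text: str, tokenizer, model, max_new_tokens: int = 142) -> str:
    # Tokenize with truncation so long articles fit the encoder's input limit.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    # Beam search usually produces more fluent summaries than greedy decoding.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Downloads the checkpoint on first use.
    tokenizer = AutoTokenizer.from_pretrained("WilliamWen/summarization_02")
    model = AutoModelForSeq2SeqLM.from_pretrained("WilliamWen/summarization_02")
    print(summarize("Your long article text here.", tokenizer, model))
```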
## Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 48234117386
- CO2 Emissions (in grams): 6.0806
## Validation Metrics
- Loss: 2.396
- Rouge1: 37.261
- Rouge2: 10.823
- RougeL: 20.762
- RougeLsum: 32.576
- Gen Len: 141.653
## Usage

You can use cURL to access this model:

```shell
curl -X POST \
  -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/WilliamWen/autotrain-summarization_02-48234117386
```
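The same call can be made from Python with only the standard library. A hedged sketch — the `build_request` helper is my own, and the endpoint mirrors the cURL example (the serverless Inference API expects a `/models/<repo>` path):

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/WilliamWen/autotrain-summarization_02-48234117386"


def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Build the POST request matching the cURL call above."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(API_URL, data=payload, headers=headers, method="POST")


# To actually send the request (requires network access and a valid token):
# with urllib.request.urlopen(build_request("I love AutoTrain", "YOUR_HUGGINGFACE_API_KEY")) as resp:
#     print(json.loads(resp.read()))
```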
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("summarization", model="WilliamWen/summarization_02")
```
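Summarization pipelines truncate inputs beyond the model's maximum length, so very long documents are often split first. A minimal, self-contained sketch — the `chunk_text` helper and its window sizes are my own, not part of the model card:

```python
def chunk_text(text: str, max_words: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping word windows so each fits the model."""
    words = text.split()
    if len(words) <= max_words:
        return [text]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks


# Each chunk can then be summarized separately and the pieces joined:
# summaries = [pipe(c)[0]["summary_text"] for c in chunk_text(long_document)]
```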