```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("PoseyATX/Humiliated_Dolphin")
model = AutoModelForSeq2SeqLM.from_pretrained("PoseyATX/Humiliated_Dolphin")
```
## Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2940685240
- CO2 Emissions (in grams): 0.3525
## Validation Metrics
- Loss: 1.182
- Rouge1: 69.284
- Rouge2: 55.274
- RougeL: 61.472
- RougeLsum: 66.749
- Gen Len: 130.111
## Usage

You can use cURL to access this model:

```shell
curl -X POST \
  -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/PoseyATX/autotrain-awfulnewdatatrain-2940685240
```
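The same endpoint can also be called from Python with the standard library. The sketch below mirrors the curl example above (same URL, headers, and JSON body); the `[{"summary_text": ...}]` response shape is an assumption based on typical summarization endpoints, not confirmed by this card:

```python
import json
import urllib.request

# Same endpoint as the curl example above.
API_URL = "https://api-inference.huggingface.co/PoseyATX/autotrain-awfulnewdatatrain-2940685240"


def build_request(text: str, api_key: str) -> urllib.request.Request:
    # Construct the same POST request as the curl example.
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def summarize(text: str, api_key: str) -> str:
    # Send the request; the summary_text field name is an assumption
    # based on the usual summarization response format.
    with urllib.request.urlopen(build_request(text, api_key)) as resp:
        return json.load(resp)[0]["summary_text"]
```

Replace `YOUR_HUGGINGFACE_API_KEY` with a real token before calling `summarize`.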
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see above) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="PoseyATX/Humiliated_Dolphin")
```