How to use marcolatella/hate_trained with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="marcolatella/hate_trained")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("marcolatella/hate_trained")
model = AutoModelForSequenceClassification.from_pretrained("marcolatella/hate_trained")
```
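As a quick sanity check, the sketch below runs a single made-up tweet through the model and prints one probability per class. The input sentence is purely illustrative, and the printed label names are whatever `id2label` mapping is stored in the checkpoint's config (generic `LABEL_0`/`LABEL_1` if none was set during fine-tuning):

```python
# Illustrative inference example; the input sentence is made up, and the label
# names depend on the id2label mapping stored in the model's config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("marcolatella/hate_trained")
model = AutoModelForSequenceClassification.from_pretrained("marcolatella/hate_trained")
model.eval()

inputs = tokenizer("Example tweet to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the logits gives one probability per class
probs = torch.softmax(logits, dim=-1)[0]
for idx, prob in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(prob, 4))
```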
This model is a fine-tuned version of distilbert-base-uncased on the tweet_eval dataset. Its results on the evaluation set are summarized in the training results table below.
The model was trained for 4 epochs, with the following per-epoch results on the evaluation set:
| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| 0.4635 | 1.0 | 563 | 0.4997 | 0.7530 |
| 0.3287 | 2.0 | 1126 | 0.5138 | 0.7880 |
| 0.216 | 3.0 | 1689 | 0.6598 | 0.7821 |
| 0.1309 | 4.0 | 2252 | 0.8182 | 0.7876 |
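The card does not preserve the training hyperparameters, so the following is only a sketch of the kind of Trainer fine-tuning run implied by the table: the tweet_eval "hate" config is inferred from the model name, and the learning rate, batch size, and F1 averaging are placeholder assumptions, not the values actually used.

```python
# Illustrative fine-tuning sketch; the "hate" config, learning rate, batch size,
# and F1 averaging are assumptions, not values recorded in this model card.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("tweet_eval", "hate")  # subset inferred from the model name
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Macro averaging is an assumption; the card only reports "F1"
    return {"f1": f1_score(labels, preds, average="macro")}

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="hate_trained",
    num_train_epochs=4,              # matches the number of epochs in the table above
    evaluation_strategy="epoch",     # evaluate validation loss / F1 once per epoch
    learning_rate=2e-5,              # placeholder value, not from the card
    per_device_train_batch_size=16,  # placeholder value, not from the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,             # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
```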