Dataset: leondz/wnut_17
How to use Hemg/token-classification with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="Hemg/token-classification")
```
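The pipeline can then be called on raw text. A minimal sketch, assuming an illustrative input sentence and the standard `aggregation_strategy` option (neither is from the original card):

```python
# The example sentence and aggregation_strategy are illustrative assumptions,
# not part of the original card.
results = pipe(
    "My name is Wolfgang and I live in Berlin",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
for entity in results:
    # Each result holds the entity group, confidence score, and matched text
    print(entity["entity_group"], round(entity["score"], 3), entity["word"])
```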
Alternatively, load the tokenizer and model directly:

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Hemg/token-classification")
model = AutoModelForTokenClassification.from_pretrained("Hemg/token-classification")
```
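With the model loaded directly, raw inference looks like the sketch below; the input sentence is an illustrative assumption, and `model.config.id2label` is the standard Transformers mapping from label ids to label names:

```python
import torch

# Tokenize an illustrative sentence (not from the original card)
inputs = tokenizer("My name is Wolfgang and I live in Berlin", return_tensors="pt")

# Forward pass without gradient tracking
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring label id per token and map it to a label name
predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```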
This model is a fine-tuned version of distilbert-base-uncased on the wnut_17 dataset. It achieves the following results on the evaluation set (the final-epoch validation metrics from the training results table below):

- Loss: 0.2813
- Precision: 0.5269
- Recall: 0.2817
- F1: 0.3671
- Accuracy: 0.9395

Model description
More information needed

Intended uses & limitations
More information needed

Training and evaluation data
More information needed
Training hyperparameters
The following hyperparameters were used during training:

Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| No log | 1.0 | 107 | 0.3021 | 0.4217 | 0.1548 | 0.2264 | 0.9342 |
| No log | 2.0 | 214 | 0.2813 | 0.5269 | 0.2817 | 0.3671 | 0.9395 |
Base model
distilbert/distilbert-base-uncased