# NHS-pubmedbert-binary
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("intermezzo672/NHS-pubmedbert-binary")
model = AutoModelForSequenceClassification.from_pretrained("intermezzo672/NHS-pubmedbert-binary")
```

This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract for binary sequence classification. Per-epoch evaluation metrics are reported in the training results table below.
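The classification head returns raw logits rather than probabilities. A minimal sketch of turning a pair of binary logits into a probability and a predicted label, using a plain-Python softmax (the example logit values and the default `LABEL_0`/`LABEL_1` names are assumptions for illustration, not actual output of this model):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits, as they might come from model(**inputs).logits
# for a single input (not real model output).
logits = [-1.2, 2.3]
probs = softmax(logits)

# Default label names; a fine-tuned card may override these in model.config.id2label.
id2label = {0: "LABEL_0", 1: "LABEL_1"}
pred = id2label[max(range(len(probs)), key=probs.__getitem__)]
print(pred, round(max(probs), 4))
```

With `transformers` installed, the same mapping is available via `model.config.id2label` after running the tokenizer output through the model.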
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training results

The model was trained for three epochs; per-epoch results on the validation set:
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| 0.0592 | 1.0 | 397 | 0.3980 | 0.8246 | 0.8179 | 0.8220 | 0.8196 |
| 0.0542 | 2.0 | 794 | 0.4714 | 0.7943 | 0.7960 | 0.8064 | 0.7929 |
| 0.6307 | 3.0 | 1191 | 0.5861 | 0.8246 | 0.8180 | 0.8240 | 0.8202 |
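Validation loss rises after epoch 1 while accuracy stays flat, a common overfitting signal. If re-training, selecting the checkpoint by lowest validation loss would pick epoch 1; a minimal sketch of that selection over the table above:

```python
# Per-epoch metrics copied from the training-results table above.
results = [
    {"epoch": 1, "val_loss": 0.3980, "accuracy": 0.8246, "f1": 0.8196},
    {"epoch": 2, "val_loss": 0.4714, "accuracy": 0.7943, "f1": 0.7929},
    {"epoch": 3, "val_loss": 0.5861, "accuracy": 0.8246, "f1": 0.8202},
]

# Pick the checkpoint with the lowest validation loss.
best = min(results, key=lambda r: r["val_loss"])
print(best["epoch"], best["val_loss"])
```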
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="intermezzo672/NHS-pubmedbert-binary")

# Example call (downloads the model on first use):
# pipe("example input text")
```