This model is a fine-tuned version of bert-base-uncased for token classification (disease named-entity recognition) on the NCBI disease dataset; per-epoch evaluation results are shown in the table below.

How to use apriadiazriel/bert_base_ncbi with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="apriadiazriel/bert_base_ncbi")
```

```python
# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("apriadiazriel/bert_base_ncbi")
model = AutoModelForTokenClassification.from_pretrained("apriadiazriel/bert_base_ncbi")
```
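The pipeline above returns one prediction per token, tagged in BIO format (`B-Disease`, `I-Disease`, `O`). A minimal sketch of merging those token-level tags into entity spans, using a hypothetical helper (`merge_bio` is not part of this model or of Transformers, which offers `aggregation_strategy` for the same purpose):

```python
def merge_bio(tokens, tags):
    """Group (token, BIO-tag) pairs into (entity_text, label) spans."""
    entities, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag starts a new entity; flush any span in progress.
            if current:
                entities.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            # An I- tag with a matching label continues the current span.
            current.append(tok)
        else:
            # O tag (or a stray I-) ends the current span, if any.
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

tokens = ["Mutations", "cause", "cystic", "fibrosis", "."]
tags = ["O", "O", "B-Disease", "I-Disease", "O"]
print(merge_bio(tokens, tags))  # [('cystic fibrosis', 'Disease')]
```

In practice, `pipeline("token-classification", ..., aggregation_strategy="simple")` performs this grouping for you, including re-joining wordpiece subtokens.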
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
Training hyperparameters: More information needed

Training results per epoch:
| Epoch | Train Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|
| 0 | 0.1130 | 0.0547 | 0.7364 | 0.7916 | 0.7630 | 0.9832 |
| 1 | 0.0335 | 0.0497 | 0.7836 | 0.8513 | 0.8161 | 0.9850 |
| 2 | 0.0213 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 3 | 0.0166 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 4 | 0.0173 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 5 | 0.0174 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 6 | 0.0168 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 7 | 0.0172 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 8 | 0.0167 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
| 9 | 0.0168 | 0.0518 | 0.8000 | 0.8640 | 0.8308 | 0.9860 |
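As a sanity check on the table (not part of the card itself), the F1 column should be the harmonic mean of the precision and recall columns. For the converged epochs:

```python
# Verify F1 = 2PR / (P + R) for the converged rows of the results table.
precision, recall = 0.8000, 0.8640
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8308, matching the F1 column
```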
Base model: google-bert/bert-base-uncased