How to use with the Transformers library

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="Buseak/model_from_berturk_Feb_5_TrainTestSplit")
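
A minimal inference sketch using the pipeline above. The Turkish example sentence is hypothetical, and the exact entity labels in the output depend on this model's label set, which is not documented on this card:

# Hypothetical example sentence; labels come from the model's own config
predictions = pipe("Mustafa Kemal 1881 yılında Selanik'te doğdu.")
# Each prediction is a dict with keys such as 'entity', 'score', 'word', 'start', 'end'
for p in predictions:
    print(p["word"], p["entity"], round(p["score"], 3))
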
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Buseak/model_from_berturk_Feb_5_TrainTestSplit")
model = AutoModelForTokenClassification.from_pretrained("Buseak/model_from_berturk_Feb_5_TrainTestSplit")
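
To run inference without the pipeline, a minimal sketch (assumes PyTorch is installed; the example sentence is hypothetical, and the label names come from model.config.id2label):

import torch

text = "Mustafa Kemal 1881 yılında Selanik'te doğdu."  # hypothetical example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Pick the highest-scoring label id per token and map it to its name
pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id.item()])
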
model_from_berturk_Feb_5_TrainTestSplit

This model is a fine-tuned version of dbmdz/bert-base-turkish-cased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3125
  • Precision: 0.9120
  • Recall: 0.9126
  • F1: 0.9123
  • Accuracy: 0.9376

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto TrainingArguments follows the list:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
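
A hedged sketch of how the hyperparameters above map onto transformers TrainingArguments. The output_dir is a placeholder, not part of the original card, and the Adam betas/epsilon shown above are the library defaults:

from transformers import TrainingArguments

# Sketch only: output_dir is hypothetical
training_args = TrainingArguments(
    output_dir="model_from_berturk_Feb_5_TrainTestSplit",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam betas/epsilon below restate the defaults (0.9, 0.999, 1e-8)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)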

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 185  | 0.2333          | 0.9065    | 0.9066 | 0.9066 | 0.9343   |
| No log        | 2.0   | 370  | 0.2115          | 0.9122    | 0.9143 | 0.9133 | 0.9389   |
| 0.3861        | 3.0   | 555  | 0.2049          | 0.9185    | 0.9175 | 0.9180 | 0.9423   |
| 0.3861        | 4.0   | 740  | 0.2073          | 0.9183    | 0.9185 | 0.9184 | 0.9420   |
| 0.3861        | 5.0   | 925  | 0.2174          | 0.9150    | 0.9155 | 0.9153 | 0.9397   |
| 0.1487        | 6.0   | 1110 | 0.2227          | 0.9177    | 0.9185 | 0.9181 | 0.9415   |
| 0.1487        | 7.0   | 1295 | 0.2399          | 0.9149    | 0.9160 | 0.9155 | 0.9396   |
| 0.1487        | 8.0   | 1480 | 0.2504          | 0.9158    | 0.9163 | 0.9160 | 0.9400   |
| 0.0942        | 9.0   | 1665 | 0.2692          | 0.9141    | 0.9152 | 0.9146 | 0.9392   |
| 0.0942        | 10.0  | 1850 | 0.2782          | 0.9130    | 0.9153 | 0.9141 | 0.9388   |
| 0.0589        | 11.0  | 2035 | 0.2908          | 0.9131    | 0.9144 | 0.9138 | 0.9388   |
| 0.0589        | 12.0  | 2220 | 0.2940          | 0.9121    | 0.9136 | 0.9128 | 0.9377   |
| 0.0589        | 13.0  | 2405 | 0.3068          | 0.9117    | 0.9130 | 0.9123 | 0.9376   |
| 0.0407        | 14.0  | 2590 | 0.3107          | 0.9132    | 0.9148 | 0.9140 | 0.9387   |
| 0.0407        | 15.0  | 2775 | 0.3125          | 0.9120    | 0.9126 | 0.9123 | 0.9376   |

Framework versions

  • Transformers 4.26.0
  • Pytorch 1.13.1+cu116
  • Datasets 2.9.0
  • Tokenizers 0.13.2