---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: turkish-sentiment3
    results: []
datasets:
  - winvoker/turkish-sentiment-analysis-dataset
---

# turkish-sentiment3

This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the [winvoker/turkish-sentiment-analysis-dataset](https://huggingface.co/datasets/winvoker/turkish-sentiment-analysis-dataset) dataset. It achieves the following results on the evaluation set:

- Loss: 0.0876
- Accuracy: 0.9688
- F1: 0.9450
- Precision: 0.9525
- Recall: 0.9381
- Positive F1: 0.9713
- Positive Precision: 0.9650
- Positive Recall: 0.9776
- Neutral F1: 0.9981
- Neutral Precision: 0.9977
- Neutral Recall: 0.9985
- Negative F1: 0.8655
- Negative Precision: 0.8947
- Negative Recall: 0.8382
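The per-class and overall numbers above are internally consistent: each class's F1 is the harmonic mean of its precision and recall, and the overall F1, precision, and recall match the unweighted (macro) average over the three classes. A quick sanity check in plain Python:

```python
# Verify that each class F1 is the harmonic mean of precision and recall,
# and that the overall F1 is the macro (unweighted) average over classes.
per_class = {  # (precision, recall) from the evaluation results above
    "positive": (0.9650, 0.9776),
    "neutral": (0.9977, 0.9985),
    "negative": (0.8947, 0.8382),
}

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

f1s = {name: f1(p, r) for name, (p, r) in per_class.items()}
macro_f1 = sum(f1s.values()) / len(f1s)

print({k: round(v, 4) for k, v in f1s.items()})  # ≈ 0.9713, 0.9981, 0.8655
print(round(macro_f1, 4))                         # ≈ the reported F1 of 0.9450
```

The same macro-average relationship holds for the reported overall precision (0.9525) and recall (0.9381).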

## Model description

More information needed

## Intended uses & limitations

More information needed
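As a text-classification fine-tune, the model can be loaded with the standard `transformers` pipeline API. A minimal inference sketch — note that `MODEL_ID` is a guess at the Hub repo path and the label names assume the dataset's three-class scheme; adjust both to the actual published values:

```python
# Minimal inference sketch. Assumptions: MODEL_ID is a hypothetical Hub repo
# path, and LABELS follows the dataset's three-class sentiment scheme.
MODEL_ID = "dexter231/turkish-sentiment3"       # hypothetical Hub path
LABELS = ["Negative", "Neutral", "Positive"]    # assumed class names

def classify(text: str):
    """Run the fine-tuned model on one Turkish sentence via the pipeline API."""
    from transformers import pipeline  # pip install transformers
    clf = pipeline("text-classification", model=MODEL_ID)
    return clf(text)

# Example (Turkish: "This product is really great!"):
# classify("Bu ürün gerçekten harika!")
```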

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 400
- training_steps: 1600
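With these settings, the learning rate ramps linearly from 0 to 3e-05 over the first 400 steps, then decays along a half cosine toward 0 at step 1600. A pure-Python sketch of that shape (illustrative of the usual linear-warmup + cosine-decay schedule, not the exact Trainer internals):

```python
import math

# Hyperparameters from the card above
BASE_LR = 3e-5
WARMUP_STEPS = 400
TOTAL_STEPS = 1600

def lr_at(step: int) -> float:
    """Learning rate at a given step under linear warmup + cosine decay."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS  # linear ramp from 0
    # Progress through the decay phase, in [0, 1]
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(400))   # peak: 3e-05
print(lr_at(1000))  # halfway through decay: ~1.5e-05
print(lr_at(1600))  # end of training: ~0.0
```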

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall | Positive F1 | Positive Precision | Positive Recall | Neutral F1 | Neutral Precision | Neutral Recall | Negative F1 | Negative Precision | Negative Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----------:|:------------------:|:---------------:|:----------:|:-----------------:|:--------------:|:-----------:|:------------------:|:---------------:|
| 0.2262        | 0.0290 | 200  | 0.1398          | 0.9485   | 0.9152 | 0.9084    | 0.9227 | 0.9525      | 0.9626             | 0.9426          | 0.9929     | 0.9886            | 0.9972         | 0.8003      | 0.7739             | 0.8285          |
| 0.1265        | 0.0581 | 400  | 0.1125          | 0.9590   | 0.9277 | 0.9369    | 0.9195 | 0.9626      | 0.9545             | 0.9709          | 0.9963     | 0.9961            | 0.9965         | 0.8241      | 0.8601             | 0.7910          |
| 0.1213        | 0.0871 | 600  | 0.1270          | 0.9545   | 0.9118 | 0.9573    | 0.8834 | 0.9593      | 0.9307             | 0.9896          | 0.9961     | 0.9953            | 0.9970         | 0.7801      | 0.9458             | 0.6637          |
| 0.0995        | 0.1162 | 800  | 0.1171          | 0.9606   | 0.9339 | 0.9284    | 0.9399 | 0.9636      | 0.9690             | 0.9582          | 0.9967     | 0.9979            | 0.9956         | 0.8415      | 0.8184             | 0.8660          |
| 0.108         | 0.1452 | 1000 | 0.0944          | 0.9667   | 0.9413 | 0.9513    | 0.9325 | 0.9697      | 0.9628             | 0.9768          | 0.9964     | 0.9935            | 0.9993         | 0.8578      | 0.8976             | 0.8214          |
| 0.0956        | 0.1743 | 1200 | 0.0944          | 0.9655   | 0.9418 | 0.9369    | 0.9470 | 0.9679      | 0.9738             | 0.9622          | 0.9976     | 0.9961            | 0.9991         | 0.8598      | 0.8408             | 0.8796          |
| 0.0971        | 0.2033 | 1400 | 0.0874          | 0.9688   | 0.9449 | 0.9533    | 0.9372 | 0.9713      | 0.9643             | 0.9784          | 0.9981     | 0.9978            | 0.9984         | 0.8652      | 0.8977             | 0.8349          |
| 0.0887        | 0.2324 | 1600 | 0.0876          | 0.9688   | 0.9450 | 0.9525    | 0.9381 | 0.9713      | 0.9650             | 0.9776          | 0.9981     | 0.9977            | 0.9985         | 0.8655      | 0.8947             | 0.8382          |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2