---
library_name: transformers
license: apache-2.0
base_model: monologg/koelectra-base-v3-discriminator
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
model-index:
  - name: MyMbti_classification_model
    results: []
---

# MyMbti_classification_model

This model is a fine-tuned version of monologg/koelectra-base-v3-discriminator on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 2.5286
- Accuracy: 0.1898
- F1: 0.1547
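
For context, the accuracy and F1 above come from the Trainer's evaluation loop. Below is a minimal sketch of a matching `compute_metrics` callback using the `evaluate` library; the F1 averaging mode is an assumption, since the card does not state which was used:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=predictions, references=labels)["accuracy"],
        # "macro" averaging over the 16 MBTI classes is an assumption
        "f1": f1.compute(predictions=predictions, references=labels, average="macro")["f1"],
    }
```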

## Model description

์ด ๋ชจ๋ธ์€ 16๊ฐœ์˜ MBTI๋ฅผ ๋ผ๋ฒจ๋กœ ๋ถ„๋ฅ˜ํ•ด ํ•ด๋‹น ๋ผ๋ฒจ์„ ์˜ˆ์ธกํ•˜๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ •ํ™•๋„๊ฐ€ ๋‚ฎ์€๊ฒƒ์€ ํ•™์Šต์— ์‚ฌ์šฉํ•œ ๋ฐ์ดํ„ฐ๊ฐ€ ์ •์ œ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋งŒ๋“ค์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ์„ฑ๋Šฅ์€ ๋ณด์žฅํ•˜์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
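
These settings roughly correspond to the `TrainingArguments` below; a minimal sketch, where `output_dir` and the 500-step evaluation cadence (read off the results table) are assumptions rather than values stated in the card:

```python
from transformers import TrainingArguments

# A sketch of the reported settings; output_dir and eval/logging cadence are assumed.
training_args = TrainingArguments(
    output_dir="MyMbti_classification_model",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    eval_strategy="steps",  # the results table logs validation every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```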

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|
| 2.6213        | 0.1673 | 500   | 2.6180          | 0.1142   | 0.0241 |
| 2.6412        | 0.3347 | 1000  | 2.6167          | 0.1318   | 0.0336 |
| 2.5861        | 0.5020 | 1500  | 2.6111          | 0.1320   | 0.0385 |
| 2.6183        | 0.6693 | 2000  | 2.6133          | 0.1222   | 0.0461 |
| 2.5954        | 0.8367 | 2500  | 2.5958          | 0.1411   | 0.0607 |
| 2.5828        | 1.0040 | 3000  | 2.5822          | 0.1479   | 0.0703 |
| 2.5803        | 1.1714 | 3500  | 2.5685          | 0.1553   | 0.0826 |
| 2.5615        | 1.3387 | 4000  | 2.5566          | 0.1645   | 0.0977 |
| 2.5463        | 1.5060 | 4500  | 2.5531          | 0.1687   | 0.1111 |
| 2.5511        | 1.6734 | 5000  | 2.5446          | 0.1679   | 0.1170 |
| 2.5242        | 1.8407 | 5500  | 2.5342          | 0.1726   | 0.1215 |
| 2.5191        | 2.0080 | 6000  | 2.5246          | 0.1825   | 0.1384 |
| 2.4866        | 2.1754 | 6500  | 2.5306          | 0.1834   | 0.1428 |
| 2.5005        | 2.3427 | 7000  | 2.5325          | 0.1803   | 0.1399 |
| 2.5131        | 2.5100 | 7500  | 2.5195          | 0.1877   | 0.1473 |
| 2.4918        | 2.6774 | 8000  | 2.5204          | 0.1876   | 0.1489 |
| 2.4755        | 2.8447 | 8500  | 2.5218          | 0.1877   | 0.1568 |
| 2.4223        | 3.0120 | 9000  | 2.5286          | 0.1898   | 0.1547 |
| 2.4297        | 3.1794 | 9500  | 2.5364          | 0.1874   | 0.1599 |
| 2.4213        | 3.3467 | 10000 | 2.5432          | 0.1866   | 0.1584 |
| 2.4619        | 3.5141 | 10500 | 2.5393          | 0.1879   | 0.1585 |
| 2.4383        | 3.6814 | 11000 | 2.5424          | 0.1849   | 0.1590 |
| 2.4368        | 3.8487 | 11500 | 2.5414          | 0.1866   | 0.1599 |

### Framework versions

- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4