Homer-Lin committed on
Commit bb53949 · verified · 1 Parent(s): 1f459bb

init: train on 5761 records


===Training Data===
pre_train_model: bert-base-chinese
trainset: ['/content/drive/MyDrive/model_check_ai_context/train_data/train_2026-03-18_5761.csv']
data_size: 4598
train_size: 3678
train_rate: 0.8
label_count (source):
  0: 2299
  1: 2299
validate_rate: 0.1
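
For reference, a minimal sketch of the split arithmetic above, assuming a stratified pandas/sklearn split on the `source` label column and that the 20% remainder is halved into validation and test slices; the commit only records the rates and counts, not the splitting code:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

TRAIN_RATE = 0.8      # train_rate from the log
VALIDATE_RATE = 0.1   # validate_rate from the log

# Local copy of the trainset file; assumed to hold the 4598 rows
# counted as data_size after whatever upstream filtering was applied.
df = pd.read_csv("train_2026-03-18_5761.csv")

# Balanced labels as logged: source==0 -> 2299 rows, source==1 -> 2299 rows
print(df["source"].value_counts())

# 80% train, then split the remaining 20% in half for the 10% validation slice
train_df, rest_df = train_test_split(
    df, train_size=TRAIN_RATE, stratify=df["source"], random_state=42
)
val_df, test_df = train_test_split(
    rest_df,
    train_size=VALIDATE_RATE / (1 - TRAIN_RATE),
    stratify=rest_df["source"],
    random_state=42,
)
print(len(train_df), len(val_df), len(test_df))  # 3678, 460, 460 given 4598 rows
```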

===Hyperparameters===
epochs: 3
lr: 1.5e-05
batch_size: 24
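
A hedged sketch of how these three values slot into a standard transformers fine-tuning setup; only `bert-base-chinese` and the hyperparameter values come from the log, while the Trainer wiring and output directory are assumptions:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2  # binary source label: 0 / 1
)

args = TrainingArguments(
    output_dir="model_check_ai_context",  # hypothetical output dir
    num_train_epochs=3,                   # epochs: 3
    learning_rate=1.5e-5,                 # lr: 1.5e-05
    per_device_train_batch_size=24,       # batch_size: 24
)

# With tokenized train/validation datasets prepared elsewhere:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=val_ds)
# trainer.train()
```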

===Training Results===
total_loss: 0.3712
train_accuracy: 0.8381
val_loss: 0.4680
val_accuracy: 0.7826

===Training History===
Epoch 1/3 | Train Loss: 0.4719 | Train Acc: 0.7749 || Val Loss: 0.4604 | Val Acc: 0.7690
Epoch 2/3 | Train Loss: 0.4124 | Train Acc: 0.8127 || Val Loss: 0.4709 | Val Acc: 0.7527
Epoch 3/3 | Train Loss: 0.3712 | Train Acc: 0.8381 || Val Loss: 0.4680 | Val Acc: 0.7826
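
Val Loss rises at epoch 2 while Val Acc dips, then accuracy recovers at epoch 3. A minimal sketch (assumed, not from the commit) of the per-epoch evaluation that would produce the Val Loss / Val Acc columns; the DataLoader contents and device are assumptions:

```python
import torch

@torch.no_grad()
def evaluate(model, loader, device="cuda"):
    """Average loss and accuracy over a loader of tokenized batches."""
    model.eval()
    total_loss, correct, seen = 0.0, 0, 0
    for batch in loader:  # each batch includes a "labels" tensor
        batch = {k: v.to(device) for k, v in batch.items()}
        out = model(**batch)  # passing labels makes the model return a loss
        n = batch["labels"].size(0)
        total_loss += out.loss.item() * n
        correct += (out.logits.argmax(dim=-1) == batch["labels"]).sum().item()
        seen += n
    return total_loss / seen, correct / seen
```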

Files changed (2)
  1. tokenizer.json +0 -0
  2. tokenizer_config.json +14 -0
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+{
+  "backend": "tokenizers",
+  "cls_token": "[CLS]",
+  "do_lower_case": false,
+  "is_local": false,
+  "mask_token": "[MASK]",
+  "model_max_length": 512,
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "strip_accents": null,
+  "tokenize_chinese_chars": true,
+  "tokenizer_class": "BertTokenizer",
+  "unk_token": "[UNK]"
+}
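
With tokenizer.json and tokenizer_config.json committed, the tokenizer reloads straightforwardly; a sketch with a placeholder repo path, relying only on settings visible in the config above (BertTokenizer, model_max_length 512):

```python
from transformers import AutoTokenizer

# Placeholder path for wherever this commit's files live (local dir or Hub id)
tok = AutoTokenizer.from_pretrained("path/to/this/repo")

enc = tok("測試句子", truncation=True, max_length=512)  # model_max_length: 512
print(tok.cls_token, tok.sep_token, tok.pad_token)      # [CLS] [SEP] [PAD]
```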