---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-ner-essays-find_span
  results: []
---

# bert-ner-essays-find_span

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1978
- Accuracy: 0.9383

Per-class metrics on the evaluation set:

| Label        | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| B-span       | 0.8451    | 0.8856 | 0.8649   | 647     |
| I-span       | 0.9613    | 0.9557 | 0.9585   | 10930   |
| O            | 0.8976    | 0.9041 | 0.9009   | 4588    |
| Macro avg    | 0.9014    | 0.9151 | 0.9081   | 16165   |
| Weighted avg | 0.9386    | 0.9383 | 0.9384   | 16165   |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
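For reference, below is a minimal, self-contained sketch of how a comparable fine-tuning run could be set up with the `transformers` `Trainer`. The actual dataset and preprocessing used for this model are not recorded in this card, so the toy essay sentence, its BIO tags, and the `tokenize_and_align` helper are illustrative assumptions; only the hyperparameters mirror the list above.

```python
# Minimal, self-contained sketch of a comparable fine-tuning setup.
# The real training data and preprocessing for this model are not recorded
# in this card; the toy dataset below only illustrates the expected format.
from datasets import Dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

labels = ["B-span", "I-span", "O"]  # label set from the evaluation tables above
label2id = {label: i for i, label in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id=label2id,
)

# Toy stand-in for the (unrecorded) essay dataset: pre-split words with
# per-word BIO tags marking spans.
raw = Dataset.from_dict({
    "tokens": [["Uniforms", "reduce", "peer", "pressure", "."]],
    "tags": [["B-span", "I-span", "I-span", "I-span", "O"]],
})

def tokenize_and_align(example):
    # Tokenize pre-split words and assign each word's label to its first
    # sub-token; special tokens and continuation sub-tokens get -100,
    # which the loss ignores.
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    aligned, prev = [], None
    for word_id in enc.word_ids():
        if word_id is None or word_id == prev:
            aligned.append(-100)
        else:
            aligned.append(label2id[example["tags"][word_id]])
        prev = word_id
    enc["labels"] = aligned
    return enc

dataset = raw.map(tokenize_and_align, remove_columns=["tokens", "tags"])

args = TrainingArguments(
    output_dir="bert-ner-essays-find_span",
    learning_rate=2e-5,              # hyperparameters from the list above;
    per_device_train_batch_size=8,   # Trainer's default Adam betas/epsilon
    per_device_eval_batch_size=8,    # already match the values reported here
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # toy: reusing train data only to keep this runnable
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```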
### Training results

Support per class: B-span 647, I-span 10930, O 4588 (total 16165). Each labelled cell lists precision / recall / F1-score.

| Training Loss | Epoch | Step | Validation Loss | B-span (P / R / F1)      | I-span (P / R / F1)      | O (P / R / F1)           | Accuracy | Macro avg (P / R / F1)   | Weighted avg (P / R / F1) |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:------------------------:|:------------------------:|:--------:|:------------------------:|:-------------------------:|
| No log        | 1.0   | 196  | 0.1948          | 0.8323 / 0.8362 / 0.8342 | 0.9545 / 0.9568 / 0.9556 | 0.8978 / 0.8919 / 0.8948 | 0.9336   | 0.8948 / 0.8950 / 0.8949 | 0.9335 / 0.9336 / 0.9335  |
| No log        | 2.0   | 392  | 0.1840          | 0.8017 / 0.8995 / 0.8478 | 0.9520 / 0.9643 / 0.9581 | 0.9199 / 0.8758 / 0.8973 | 0.9366   | 0.8912 / 0.9132 / 0.9011 | 0.9369 / 0.9366 / 0.9364  |
| 0.2568        | 3.0   | 588  | 0.1978          | 0.8451 / 0.8856 / 0.8649 | 0.9613 / 0.9557 / 0.9585 | 0.8976 / 0.9041 / 0.9009 | 0.9383   | 0.9014 / 0.9151 / 0.9081 | 0.9386 / 0.9383 / 0.9384  |

### Framework versions

- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
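## Example usage

A minimal sketch of tagging text with this checkpoint through the `transformers` token-classification pipeline. The hub id below is hypothetical (prepend the owning namespace, or point at a local checkpoint directory), and the example sentence is illustrative only.

```python
from transformers import pipeline

# Hypothetical hub id: adjust to wherever this checkpoint is hosted,
# e.g. "<user>/bert-ner-essays-find_span", or use a local path.
tagger = pipeline(
    "token-classification",
    model="bert-ner-essays-find_span",
    aggregation_strategy="simple",  # merge B-span/I-span pieces into whole spans
)

text = "School uniforms should be mandatory because they reduce peer pressure."
for span in tagger(text):
    print(span["entity_group"], repr(span["word"]), round(span["score"], 3))
```

With `aggregation_strategy="simple"`, consecutive `B-span`/`I-span` predictions are merged, so each printed row corresponds to one predicted span rather than one sub-token.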