---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
model-index:
- name: ModernBERT-base-ft-code-defect-detection-10e-4k
  results: []
---
|
|
|
|
|
# ModernBERT-base-ft-code-defect-detection-10e-4k

This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 2.0516
- Accuracy Score: 0.6369
- F1 Score: 0.6091
- Precision Score: 0.6159
- Recall Score: 0.6025
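The card does not say how these metrics were computed. Below is a minimal sketch of a typical `compute_metrics` function that would produce these four numbers with the `Trainer`, assuming binary (0/1) labels and scikit-learn; it is not the exact function used for this run.

```python
# A typical Trainer metrics callback for binary classification.
# Assumptions: scikit-learn is available and labels are 0/1.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class per example
    return {
        "accuracy_score": accuracy_score(labels, preds),
        "f1_score": f1_score(labels, preds),
        "precision_score": precision_score(labels, preds),
        "recall_score": recall_score(labels, preds),
    }
```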
|
|
|
|
|
## Model description

ModernBERT-base with a sequence-classification head, fine-tuned for code defect detection (binary classification, judging by the model name and the reported accuracy/F1/precision/recall). Reading the name suffix, "10e" matches the 10 training epochs listed below; "4k" presumably refers to a 4096-token maximum sequence length, though this is not documented.
|
|
|
|
|
## Intended uses & limitations

Intended for classifying source code snippets as defective or non-defective (see the inference sketch below). Limitations worth noting: evaluation accuracy is modest (~0.64), the final checkpoint shows heavy overfitting (training loss 0.0684 vs. validation loss 2.0516; see Training results), and the training data is undocumented, so generalization across programming languages and codebases is unknown. Treat predictions as a screening signal rather than a verdict.
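A minimal inference sketch. The repo id below is an assumption (adjust it to the actual Hub path), and the `id2label` names depend on how the model was saved; check its `config.json`. ModernBERT requires `transformers` >= 4.48, consistent with the framework versions listed below.

```python
# Minimal inference sketch; the repo id and label semantics are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ModernBERT-base-ft-code-defect-detection-10e-4k"  # hypothetical Hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

code = "int divide(int a, int b) { return a / b; }  // no zero check"
inputs = tokenizer(code, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. LABEL_0 / LABEL_1 unless labels were renamed
```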
|
|
|
|
|
## Training and evaluation data

More information needed
|
|
|
|
|
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of equivalent `TrainingArguments` follows the list):

- learning_rate: 8e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.98) and epsilon=1e-06; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
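A sketch of `TrainingArguments` matching the listed values. The `output_dir` and per-epoch evaluation cadence are assumptions (the results table below reports metrics once per epoch), batch sizes are interpreted as per-device values, and dataset/model setup is omitted:

```python
# TrainingArguments mirroring the hyperparameters above.
# output_dir and eval_strategy are assumptions, not documented values.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ModernBERT-base-ft-code-defect-detection-10e-4k",
    learning_rate=8e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",
)
```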
|
|
|
|
|
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | F1 Score | Precision Score | Recall Score |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:--------:|:---------------:|:------------:|
| 0.6768        | 1.0   | 342  | 0.6130          | 0.6358         | 0.5728   | 0.5315          | 0.6210       |
| 0.5902        | 2.0   | 684  | 0.5828          | 0.6654         | 0.5421   | 0.4311          | 0.7301       |
| 0.5346        | 3.0   | 1026 | 0.5995          | 0.6585         | 0.4744   | 0.3355          | 0.8096       |
| 0.4583        | 4.0   | 1368 | 0.6115          | 0.6812         | 0.6085   | 0.5394          | 0.6979       |
| 0.3722        | 5.0   | 1710 | 0.6749          | 0.6482         | 0.6197   | 0.6239          | 0.6156       |
| 0.2896        | 6.0   | 2052 | 0.8197          | 0.6490         | 0.6087   | 0.5944          | 0.6237       |
| 0.2234        | 7.0   | 2394 | 0.9451          | 0.6490         | 0.6019   | 0.5777          | 0.6282       |
| 0.1655        | 8.0   | 2736 | 1.1632          | 0.6354         | 0.6115   | 0.6247          | 0.5989       |
| 0.1151        | 9.0   | 3078 | 1.4168          | 0.6387         | 0.6063   | 0.6056          | 0.6070       |
| 0.0684        | 10.0  | 3420 | 2.0516          | 0.6369         | 0.6091   | 0.6159          | 0.6025       |

Validation loss bottoms out at epoch 2 (0.5828) and rises steadily afterwards while training loss keeps falling, a clear overfitting pattern; the epoch 4 checkpoint has the best accuracy (0.6812) and a much better validation loss than the final epoch reported above.
|
### Framework versions

- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0