ModernBERT-regulation-classifier

This model is a fine-tuned version of answerdotai/ModernBERT-base on a custom classification dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3695
  • F1: 0.9252

Model description

More information needed

Intended uses & limitations

This model was trained on a custom dataset for a regulation-related classification task. Because the dataset is private and task-specific, it is unlikely to be useful to others.
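
The card includes no usage snippet, so here is a minimal inference sketch using the transformers text-classification pipeline. The repo id travis-simon/ModernBERT-regulation-classifier comes from this model's Hub listing; the input sentence and the label names shown in the output comment are illustrative assumptions.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned classifier from the Hub.
# The repo id is this model's Hub id; the sample sentence and the
# label names in the printed output are illustrative only.
classifier = pipeline(
    "text-classification",
    model="travis-simon/ModernBERT-regulation-classifier",
)

result = classifier("Operators must file a compliance report within 30 days.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```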

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 20
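
For reference, these hyperparameters map onto transformers TrainingArguments roughly as sketched below. The output_dir and the per-epoch evaluation cadence are assumptions, not values stated on the card (though the step counts in the results table, 15 per epoch, do imply evaluation once per epoch).

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# output_dir and eval_strategy are assumptions.
training_args = TrainingArguments(
    output_dir="ModernBERT-regulation-classifier",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    eval_strategy="epoch",  # the table shows one evaluation per epoch
)
```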

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 15   | 0.5181          | 0.7481 |
| No log        | 2.0   | 30   | 0.3811          | 0.8373 |
| No log        | 3.0   | 45   | 0.6849          | 0.6865 |
| No log        | 4.0   | 60   | 0.4782          | 0.8611 |
| No log        | 5.0   | 75   | 0.2552          | 0.9376 |
| No log        | 6.0   | 90   | 0.3630          | 0.9127 |
| 0.2889        | 7.0   | 105  | 0.4094          | 0.8618 |
| 0.2889        | 8.0   | 120  | 0.3934          | 0.8997 |
| 0.2889        | 9.0   | 135  | 0.3548          | 0.9376 |
| 0.2889        | 10.0  | 150  | 0.4377          | 0.8746 |
| 0.2889        | 11.0  | 165  | 0.4106          | 0.9126 |
| 0.2889        | 12.0  | 180  | 0.4450          | 0.8997 |
| 0.2889        | 13.0  | 195  | 0.3728          | 0.9376 |
| 0.0041        | 14.0  | 210  | 0.3698          | 0.9252 |
| 0.0041        | 15.0  | 225  | 0.3708          | 0.9252 |
| 0.0041        | 16.0  | 240  | 0.3696          | 0.9252 |
| 0.0041        | 17.0  | 255  | 0.3703          | 0.9252 |
| 0.0041        | 18.0  | 270  | 0.3718          | 0.9252 |
| 0.0041        | 19.0  | 285  | 0.3722          | 0.9252 |
| 0.0           | 20.0  | 300  | 0.3695          | 0.9252 |
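
The F1 column would be produced by a metrics callback passed to the Trainer. A minimal sketch is below, assuming scikit-learn and a weighted average; the card does not state which averaging mode was actually used.

```python
import numpy as np
from sklearn.metrics import f1_score

# Sketch of a compute_metrics callback that could produce the F1
# column above. The averaging mode ("weighted") is an assumption.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, predictions, average="weighted")}
```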

Framework versions

  • Transformers 4.48.0.dev0
  • Pytorch 2.6.0+cu124
  • Datasets 3.1.0
  • Tokenizers 0.21.1