---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
  - generated_from_trainer
model-index:
  - name: binary_paragraph
    results: []
---

# binary_paragraph

This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2458
- Classification report (metrics rounded to four decimals):

|              | Precision | Recall | F1-score | Support |
|:-------------|:---------:|:------:|:--------:|:-------:|
| 0            | 0.9306    | 0.9428 | 0.9367   | 1592    |
| 1            | 0.6192    | 0.5692 | 0.5932   | 260     |
| Accuracy     |           |        | 0.8904   | 1852    |
| Macro avg    | 0.7749    | 0.7560 | 0.7649   | 1852    |
| Weighted avg | 0.8869    | 0.8904 | 0.8884   | 1852    |
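
Assuming the checkpoint is published on the Hub (the repo id below is hypothetical, inferred from the card title and author), a minimal inference sketch with the `text-classification` pipeline:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
classifier = pipeline("text-classification", model="harun27/binary_paragraph")

print(classifier("A paragraph of text to classify."))
# e.g. [{'label': 'LABEL_1', 'score': 0.93}] -- labels default to
# LABEL_0/LABEL_1 unless id2label was configured during training.
```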

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-06
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
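
A minimal `TrainingArguments` sketch reflecting the settings above, assuming the standard Hugging Face `Trainer`; `output_dir` and the evaluation cadence are assumptions not stated in the card, and the total batch size of 256 comes from launching on 2 GPUs rather than from an argument:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="binary_paragraph",    # assumed output directory
    learning_rate=5e-6,
    per_device_train_batch_size=128,  # 2 GPUs -> total train batch size 256
    per_device_eval_batch_size=128,   # 2 GPUs -> total eval batch size 256
    seed=42,
    optim="adamw_torch",              # AdamW, PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="epoch",            # assumed from the per-epoch results below
)
```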

### Training results

Per-class support is constant across epochs: 1592 (class 0), 260 (class 1), 1852 total. "No log" means the training loss had not yet been logged at that step. Metrics are rounded to four decimals.

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Class 0 P / R / F1 | Class 1 P / R / F1 | Macro avg P / R / F1 | Weighted avg P / R / F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|
| No log        | 1.0   | 25   | 0.2760          | 0.8753   | 0.8787 / 0.9918 / 0.9318 | 0.7636 / 0.1615 / 0.2667 | 0.8212 / 0.5767 / 0.5993 | 0.8625 / 0.8753 / 0.8385 |
| No log        | 2.0   | 50   | 0.2626          | 0.8699   | 0.8689 / 0.9994 / 0.9296 | 0.9524 / 0.0769 / 0.1423 | 0.9107 / 0.5381 / 0.5360 | 0.8806 / 0.8699 / 0.8191 |
| No log        | 3.0   | 75   | 0.2458          | 0.8904   | 0.9306 / 0.9428 / 0.9367 | 0.6192 / 0.5692 / 0.5932 | 0.7749 / 0.7560 / 0.7649 | 0.8869 / 0.8904 / 0.8884 |
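
The card does not include the evaluation code, but a `Trainer` `compute_metrics` hook along these lines (using scikit-learn's `classification_report`, an assumption) would produce reports of the shape shown above:

```python
import numpy as np
from sklearn.metrics import classification_report

def compute_metrics(eval_pred):
    # eval_pred bundles model logits and gold labels for the eval set
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # output_dict=True yields the nested per-class dict shown in the table
    return {"classification_report": classification_report(labels, preds, output_dict=True)}
```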

### Framework versions

- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1