---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
model-index:
- name: binary_paragraph
  results: []
---

# binary_paragraph

This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1836
- Classification Report:

| Class        | Precision          | Recall             | F1-score           | Support |
|:-------------|:------------------:|:------------------:|:------------------:|:-------:|
| 0            | 0.9434650455927052 | 0.9748743718592965 | 0.9589125733704047 | 1592    |
| 1            | 0.8067632850241546 | 0.6423076923076924 | 0.715203426124197  | 260     |
| accuracy     |                    |                    | 0.9281857451403888 | 1852    |
| macro avg    | 0.8751141653084299 | 0.8085910320834944 | 0.8370579997473009 | 1852    |
| weighted avg | 0.9242736537202305 | 0.9281857451403888 | 0.9246985462192091 | 1852    |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
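As a rough guide to reproducing this configuration, the sketch below maps the hyperparameters above onto `TrainingArguments` (assuming a single device, so `per_device_train_batch_size` corresponds to the listed `train_batch_size`). The dataset in the snippet is a toy placeholder, since the actual training and evaluation data are not documented in this card.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "answerdotai/ModernBERT-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Placeholder data purely for illustration; the real training/evaluation sets
# behind this checkpoint are not documented here.
raw = Dataset.from_dict(
    {"text": ["example paragraph one", "example paragraph two"], "label": [0, 1]}
)
encoded = raw.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="binary_paragraph",
    learning_rate=5e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="epoch",  # evaluate once per epoch, as in the results table below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded,
    eval_dataset=encoded,  # placeholder: reuses the toy set for illustration
    processing_class=tokenizer,  # default collator then pads batches dynamically
)
trainer.train()
```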
### Training results

| Training Loss | Epoch | Step | Validation Loss | Classification Report |
|:-------------:|:-----:|:----:|:---------------:|:----------------------|
| No log | 1.0 | 98 | 0.2189 | {'0': {'precision': 0.9206631142687981, 'recall': 0.9767587939698492, 'f1-score': 0.9478817433709235, 'support': 1592.0}, '1': {'precision': 0.7730061349693251, 'recall': 0.4846153846153846, 'f1-score': 0.5957446808510638, 'support': 260.0}, 'accuracy': 0.9076673866090713, 'macro avg': {'precision': 0.8468346246190617, 'recall': 0.7306870892926169, 'f1-score': 0.7718132121109936, 'support': 1852.0}, 'weighted avg': {'precision': 0.8999337327256756, 'recall': 0.9076673866090713, 'f1-score': 0.8984456546802305, 'support': 1852.0}} |
| No log | 2.0 | 196 | 0.2076 | {'0': {'precision': 0.9115606936416185, 'recall': 0.9905778894472361, 'f1-score': 0.9494280553883203, 'support': 1592.0}, '1': {'precision': 0.8770491803278688, 'recall': 0.4115384615384615, 'f1-score': 0.5602094240837696, 'support': 260.0}, 'accuracy': 0.9092872570194385, 'macro avg': {'precision': 0.8943049369847437, 'recall': 0.7010581754928489, 'f1-score': 0.754818739736045, 'support': 1852.0}, 'weighted avg': {'precision': 0.9067156647746775, 'recall': 0.9092872570194385, 'f1-score': 0.8947861309071199, 'support': 1852.0}} |
| No log | 3.0 | 294 | 0.1875 | {'0': {'precision': 0.9410692588092345, 'recall': 0.9729899497487438, 'f1-score': 0.9567634342186535, 'support': 1592.0}, '1': {'precision': 0.7912621359223301, 'recall': 0.6269230769230769, 'f1-score': 0.6995708154506438, 'support': 260.0}, 'accuracy': 0.9244060475161987, 'macro avg': {'precision': 0.8661656973657823, 'recall': 0.7999565133359103, 'f1-score': 0.8281671248346487, 'support': 1852.0}, 'weighted avg': {'precision': 0.9200380212549175, 'recall': 0.9244060475161987, 'f1-score': 0.9206564791000345, 'support': 1852.0}} |
| No log | 4.0 | 392 | 0.1924 | {'0': {'precision': 0.9565772669220945, 'recall': 0.9409547738693468, 'f1-score': 0.9487017099430018, 'support': 1592.0}, '1': {'precision': 0.6713286713286714, 'recall': 0.7384615384615385, 'f1-score': 0.7032967032967034, 'support': 260.0}, 'accuracy': 0.9125269978401728, 'macro avg': {'precision': 0.813952969125383, 'recall': 0.8397081561654427, 'f1-score': 0.8259992066198526, 'support': 1852.0}, 'weighted avg': {'precision': 0.9165315677567111, 'recall': 0.9125269978401728, 'f1-score': 0.9142496031784026, 'support': 1852.0}} |
| No log | 5.0 | 490 | 0.1836 | {'0': {'precision': 0.9434650455927052, 'recall': 0.9748743718592965, 'f1-score': 0.9589125733704047, 'support': 1592.0}, '1': {'precision': 0.8067632850241546, 'recall': 0.6423076923076924, 'f1-score': 0.715203426124197, 'support': 260.0}, 'accuracy': 0.9281857451403888, 'macro avg': {'precision': 0.8751141653084299, 'recall': 0.8085910320834944, 'f1-score': 0.8370579997473009, 'support': 1852.0}, 'weighted avg': {'precision': 0.9242736537202305, 'recall': 0.9281857451403888, 'f1-score': 0.9246985462192091, 'support': 1852.0}} |

### Framework versions

- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
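## How to use

The auto-generated card does not include a usage example. The snippet below is a minimal sketch, assuming the fine-tuned weights are available at a local path or Hub repo id named `binary_paragraph` (hypothetical) and were saved as a standard `AutoModelForSequenceClassification` checkpoint; unless an `id2label` mapping was stored with the model, the two classes will appear as `LABEL_0`/`LABEL_1`.

```python
from transformers import pipeline

# "binary_paragraph" is a placeholder for wherever the fine-tuned weights live
# (a local output directory or a Hub repo id).
classifier = pipeline("text-classification", model="binary_paragraph")

print(classifier("Paragraph text to classify."))
# Expected shape of the output: [{'label': 'LABEL_0' or 'LABEL_1', 'score': ...}]
```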