---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
metrics:
- precision
- recall
- accuracy
model-index:
- name: multilabel_classification
  results: []
---

# multilabel_classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2810
- F1 Micro: 0.8770
- F1 Macro: 0.7787
- F1 Weighted: 0.8672
- Precision: 0.8702
- Recall: 0.8770
- Accuracy: 0.8770

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | F1 Weighted | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------:|:---------:|:------:|:--------:|
| No log        | 1.0   | 406  | 0.2865          | 0.8643   | 0.7287   | 0.8438      | 0.8620    | 0.8643 | 0.8643   |
| 0.2729        | 2.0   | 812  | 0.2924          | 0.8737   | 0.7671   | 0.8616      | 0.8671    | 0.8737 | 0.8737   |
| 0.216         | 3.0   | 1218 | 0.2810          | 0.8770   | 0.7787   | 0.8672      | 0.8702    | 0.8770 | 0.8770   |
| 0.1868        | 4.0   | 1624 | 0.2813          | 0.8787   | 0.7802   | 0.8685      | 0.8725    | 0.8787 | 0.8787   |
| 0.1728        | 5.0   | 2030 | 0.2944          | 0.8748   | 0.7794   | 0.8664      | 0.8673    | 0.8748 | 0.8748   |
| 0.1728        | 6.0   | 2436 | 0.2937          | 0.8825   | 0.7967   | 0.8760      | 0.8762    | 0.8825 | 0.8825   |
| 0.155         | 7.0   | 2842 | 0.3007          | 0.8848   | 0.8039   | 0.8795      | 0.8789    | 0.8848 | 0.8848   |
| 0.151         | 8.0   | 3248 | 0.3007          | 0.8875   | 0.8070   | 0.8818      | 0.8819    | 0.8875 | 0.8875   |
| 0.1359        | 9.0   | 3654 | 0.3031          | 0.8870   | 0.8077   | 0.8818      | 0.8814    | 0.8870 | 0.8870   |
| 0.1359        | 10.0  | 4060 | 0.3035          | 0.8881   | 0.8086   | 0.8826      | 0.8826    | 0.8881 | 0.8881   |

### Framework versions

- PEFT 0.11.1
- Transformers 4.37.2
- PyTorch 2.2.0
- Datasets 2.19.1
- Tokenizers 0.15.1
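
### Training configuration (sketch)

The hyperparameters listed above correspond roughly to the following `TrainingArguments` configuration. This is a reconstruction, not the original training script: `output_dir` and `evaluation_strategy` are assumptions (per-epoch evaluation is inferred from the results table), and the Adam betas/epsilon are the library defaults.

```python
from transformers import TrainingArguments

# Rough reconstruction of the configuration from the hyperparameters above.
# output_dir is a placeholder; evaluation_strategy="epoch" is inferred from
# the per-epoch validation rows in the results table.
training_args = TrainingArguments(
    output_dir="multilabel_classification",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",
)
```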
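
### Inference example (sketch)

The card does not document how to load the adapter, so the snippet below is a minimal sketch of loading a PEFT adapter on top of the base model for multi-label inference. The adapter repository id, the number of labels (`num_labels=10`), and the 0.5 decision threshold are assumptions and should be replaced with the actual values for this model.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical adapter repository id; substitute the real repo name.
adapter_id = "your-username/multilabel_classification"

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# num_labels is not documented in this card; 10 is a placeholder.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=10,
    problem_type="multi_label_classification",
)

# Attach the LoRA/PEFT adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Example text to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: sigmoid per label, thresholded independently.
probs = torch.sigmoid(logits)[0]
predicted_label_ids = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print(predicted_label_ids)
```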