---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
- text-classification
- transformers
- bert
metrics:
- accuracy
model-index:
- name: bert-finetuned-sst2
  results: []
---

# bert-finetuned-sst2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3812
- Accuracy: 0.9083

## Load model directly

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("execbat/bert-finetuned-sst2")
model = AutoModelForSequenceClassification.from_pretrained("execbat/bert-finetuned-sst2")
```

## Use a pipeline as a high-level helper

```python
from transformers import pipeline

# The checkpoint exposes generic label names, so map them to sentiment classes.
label_tags = {"LABEL_0": "NEGATIVE", "LABEL_1": "POSITIVE"}

pipe = pipeline("text-classification", model="execbat/bert-finetuned-sst2")
result = pipe(["what a horrible day!", "what a wonderful day!"])
encoded_result = [label_tags[i["label"]] for i in result]
print(encoded_result)
```

```python
['NEGATIVE', 'POSITIVE']
```

(A sketch of applying the same label mapping to the raw model outputs, without the pipeline, is given at the end of this card.)

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0

(A hedged sketch of how this configuration maps onto `TrainingArguments` is given at the end of this card.)

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.269         | 1.0   | 8419  | 0.5041          | 0.8716   |
| 0.1854        | 2.0   | 16838 | 0.4296          | 0.8968   |
| 0.0993        | 3.0   | 25257 | 0.3812          | 0.9083   |

### Framework versions

- Transformers 4.49.0
- PyTorch 2.6.0
- Datasets 3.3.2
- Tokenizers 0.21.0
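
## Reproducing the training configuration (sketch)

The exact training script and dataset preparation are not included in this card. As a minimal, non-authoritative sketch, the hyperparameters listed above correspond roughly to the following `TrainingArguments`; the `output_dir`, the `Trainer` wiring, and the dataset splits are assumptions, not taken from the card.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base checkpoint named in the card; num_labels=2 matches the two sentiment classes.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Values below mirror the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-finetuned-sst2",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)

# The fine-tuning dataset is not documented in this card, so the Trainer
# wiring below is left as a template rather than runnable end to end.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=...,  # tokenized training split (not specified here)
#     eval_dataset=...,   # tokenized evaluation split (not specified here)
#     processing_class=tokenizer,
# )
# trainer.train()
```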
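
## Running the model without the pipeline (sketch)

Because the saved config uses the generic `LABEL_0`/`LABEL_1` names, the pipeline example above re-maps them by hand. As a minimal sketch using the standard `AutoModelForSequenceClassification` forward pass (the mapping of index 0 to NEGATIVE and 1 to POSITIVE is taken from the pipeline example; everything else here is illustrative), the same re-mapping can be applied to the raw logits:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Same checkpoint as in the loading example above.
tokenizer = AutoTokenizer.from_pretrained("execbat/bert-finetuned-sst2")
model = AutoModelForSequenceClassification.from_pretrained("execbat/bert-finetuned-sst2")
model.eval()

# Class indices re-mapped to sentiment names, as in the pipeline example.
label_tags = {0: "NEGATIVE", 1: "POSITIVE"}

texts = ["what a horrible day!", "what a wonderful day!"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)
predictions = [label_tags[i] for i in probs.argmax(dim=-1).tolist()]
print(predictions)  # expected: ['NEGATIVE', 'POSITIVE']
```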