---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- text-classification
- sst2
- fine-tuned
language:
- en
datasets:
- sst2
pipeline_tag: text-classification
---

# bert-tiny-sst2

## Model Description

A BERT model fine-tuned for binary sentiment classification (positive/negative) on the SST-2 (Stanford Sentiment Treebank) dataset.

## Base Model

- **Base Model**: [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- **Task**: text-classification
- **Dataset**: sst2
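
The mapping from class index to label name is stored in the checkpoint's configuration. As a quick check (the default `LABEL_0`/`LABEL_1` names are assumed here unless custom names were saved with the model):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("takedarn/bert-tiny-sst2")
print(config.num_labels)  # 2 for SST-2 (negative / positive)
print(config.id2label)    # e.g. {0: 'LABEL_0', 1: 'LABEL_1'} unless custom names were saved
```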

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("takedarn/bert-tiny-sst2")
model = AutoModelForSequenceClassification.from_pretrained("takedarn/bert-tiny-sst2")

text = "This movie is great!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to probabilities and take the most likely class
predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1)

print(f"Predicted class: {predicted_class.item()}")
# Label names for each class index are stored in the model config
print(f"Predicted label: {model.config.id2label[predicted_class.item()]}")
```
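
For quick experiments, the same checkpoint can also be loaded through the high-level `pipeline` API. This is a minimal sketch; the returned label names depend on what was stored in the model config:

```python
from transformers import pipeline

# Single-call inference via the pipeline API
classifier = pipeline("text-classification", model="takedarn/bert-tiny-sst2")
print(classifier("This movie is great!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  (label names depend on the model config)
```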

## Training Details

This model was fine-tuned using the following configuration:

- Task: text-classification
- Dataset: sst2
- Base model: google-bert/bert-base-uncased
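
The exact hyperparameters are not recorded in this card. The sketch below shows one typical way such a fine-tuning run could be reproduced with the `Trainer` API; the learning rate, batch size, and epoch count are illustrative assumptions, not the values used for this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative setup only; the actual hyperparameters for this checkpoint are not documented.
dataset = load_dataset("sst2")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2
)

def tokenize(batch):
    # SST-2 examples have a single "sentence" field; padding is handled dynamically by the Trainer
    return tokenizer(batch["sentence"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-tiny-sst2",
    learning_rate=2e-5,               # assumed value
    per_device_train_batch_size=32,   # assumed value
    num_train_epochs=3,               # assumed value
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```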

## Citation

If you use this model, please cite:

```bibtex
@misc{bert_tiny_sst2,
  author    = {Your Name},
  title     = {bert-tiny-sst2},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/takedarn/bert-tiny-sst2}
}
```