---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- text-classification
- sst2
- fine-tuned
language:
- en
datasets:
- sst2
pipeline_tag: text-classification
---
# bert-medium-tiny
## Model Description
A BERT model fine-tuned for binary sentiment classification (negative/positive) on the SST-2 (Stanford Sentiment Treebank) dataset.
## Base Model
- **Base Model**: [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- **Task**: text-classification
- **Dataset**: sst2
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("takedarn/bert-medium-tiny")
model = AutoModelForSequenceClassification.from_pretrained("takedarn/bert-medium-tiny")

text = "This movie is great!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Run inference without gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1)
print(f"Predicted class: {predicted_class.item()}")
```
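The predicted index can be mapped to a human-readable label. SST-2 conventionally uses `0 = negative` and `1 = positive`, but that ordering is an assumption here; confirm it against `model.config.id2label` for this checkpoint. A minimal sketch, using dummy logits in place of `outputs.logits`:

```python
import torch

# Assumed SST-2 label order; verify with model.config.id2label.
id2label = {0: "negative", 1: "positive"}

# Dummy logits standing in for outputs.logits from the snippet above.
logits = torch.tensor([[-1.2, 2.3]])
probs = torch.nn.functional.softmax(logits, dim=-1)
pred = torch.argmax(probs, dim=-1).item()
label = id2label[pred]
print(f"{label} (confidence {probs[0, pred]:.2f})")
```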
## Training Details
This model was fine-tuned using the following configuration:
- Task: text-classification
- Dataset: sst2
- Base model: google-bert/bert-base-uncased
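The exact hyperparameters are not recorded in this card. For orientation only, a typical `transformers.TrainingArguments` setup for fine-tuning BERT-base on SST-2 looks like the sketch below; every value is an assumption, not the documented recipe for this checkpoint:

```python
from transformers import TrainingArguments

# All values below are illustrative defaults commonly used for
# BERT fine-tuning on SST-2, not the recorded settings of this model.
args = TrainingArguments(
    output_dir="bert-medium-tiny",
    num_train_epochs=3,              # assumed
    per_device_train_batch_size=32,  # assumed
    learning_rate=2e-5,              # assumed
    weight_decay=0.01,               # assumed
)
```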
## Citation
If you use this model, please cite:
```bibtex
@misc{bert_medium_tiny,
  author    = {Your Name},
  title     = {bert-medium-tiny},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/takedarn/bert-medium-tiny}
}
```