---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- text-classification
- sst2
- fine-tuned
language:
- en
datasets:
- sst2
pipeline_tag: text-classification
---

# bert-tiny-sst2

## Model Description

A fine-tuned BERT model for binary sentiment classification on the SST-2 (Stanford Sentiment Treebank) dataset.

## Base Model
- **Base Model**: [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased)
- **Task**: text-classification
- **Dataset**: sst2

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("takedarn/bert-tiny-sst2")
model = AutoModelForSequenceClassification.from_pretrained("takedarn/bert-tiny-sst2")

text = "This movie is great!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(probabilities, dim=-1).item()

# Map the class index to its human-readable label
print(f"Predicted class: {predicted_class} ({model.config.id2label[predicted_class]})")
```
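The softmax/argmax step above can be reproduced without downloading the model. The sketch below uses plain Python and hypothetical logits; it assumes the conventional SST-2 label mapping (index 0 = negative, index 1 = positive), which should be confirmed against `model.config.id2label`:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)                              # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one input sentence (not real model output)
logits = [-1.2, 2.3]                             # [negative, positive] (assumed order)
probs = softmax(logits)
predicted_class = max(range(len(probs)), key=probs.__getitem__)
print(predicted_class, round(probs[predicted_class], 3))
```

This mirrors what `torch.nn.functional.softmax` followed by `torch.argmax` computes in the snippet above, one score per class normalized to sum to 1.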

## Training Details

This model was fine-tuned using the following configuration:
- Task: text-classification
- Dataset: sst2
- Base model: google-bert/bert-base-uncased
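The exact training script and hyperparameters are not included in this card. The configuration sketch below shows how such a fine-tune is typically wired up with the 🤗 `Trainer` API; the hyperparameter values are illustrative assumptions, not the actual settings used, and running it requires downloading the base model and dataset:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2)

# SST-2 examples have a "sentence" text field and a binary "label"
dataset = load_dataset("sst2")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-tiny-sst2",
    learning_rate=2e-5,               # illustrative value
    per_device_train_batch_size=32,   # illustrative value
    num_train_epochs=3,               # illustrative value
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```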

## Citation

If you use this model, please cite:

```bibtex
@misc{bert_tiny_sst2,
  author = {Your Name},
  title = {bert-tiny-sst2},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/takedarn/bert-tiny-sst2}
}
```