---
language: 
- en
library_name: transformers
tags:
- ensemble
- text-classification
- sentiment-analysis
- imdb
license: apache-2.0
datasets:
- imdb
metrics:
- accuracy
- f1
pipeline_tag: text-classification
base_model: 
- bert-base-uncased
model-index:
- name: BERT IMDb Ensemble for Sentiment Analysis
  results:
  - task:
      type: text-classification
      name: Sentiment Classification
    dataset:
      name: IMDb
      type: imdb
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.939
    - name: F1
      type: f1
      value: 0.939
---

# BERT IMDb Ensemble for Sentiment Analysis 🎬🎭

## Model description

This is an **ensemble of 3 BERT-base-uncased models** fine-tuned on the IMDb dataset for **binary sentiment classification** (positive vs. negative reviews).  
Each model was trained with a different random seed, and their logits are averaged, with optional per-model weights, for more robust performance.

- **Base model:** `bert-base-uncased`  
- **Task:** Sentiment classification (binary: 0 = negative, 1 = positive)  
- **Ensembling strategy:** Weighted logits averaging (see the sketch below)  

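A minimal sketch of this averaging step, assuming three fine-tuned checkpoints saved locally; the paths and helper name below are illustrative, not files shipped in this repo:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative local paths, one fine-tuned checkpoint per seed
CHECKPOINTS = ["bert-imdb-seed42", "bert-imdb-seed123", "bert-imdb-seed999"]
WEIGHTS = [0.2, 0.2, 0.6]  # example weights; [1/3] * 3 gives a plain average

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
models = [AutoModelForSequenceClassification.from_pretrained(p).eval()
          for p in CHECKPOINTS]

def ensemble_predict(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Weighted sum of per-model logits, softmax applied once at the end
        logits = sum(w * m(**inputs).logits for w, m in zip(WEIGHTS, models))
    return F.softmax(logits, dim=-1)
```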
---

## Training procedure

- **Dataset:** IMDb (train/test split from Hugging Face `datasets`)  
- **Preprocessing:**
  - Tokenization with `bert-base-uncased`
  - Truncation at 512 tokens  

- **Hyperparameters** (reflected in the sketch after this list):
  - Epochs: 2  
  - Batch size: 8  
  - Optimizer: AdamW (default in `Trainer`)  
  - FP16: Enabled  
  - Seeds: `[42, 123, 999]`  

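A minimal `Trainer` sketch matching these settings. The dataset loading and truncation follow the bullets above, while the exact argument set is an assumption (the original runs may have used additional options):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, set_seed)

dataset = load_dataset("imdb")  # standard train/test split
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate long reviews to BERT's 512-token limit
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

for seed in [42, 123, 999]:
    set_seed(seed)  # one independent fine-tuning run per seed
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )
    args = TrainingArguments(
        output_dir=f"bert-imdb-seed{seed}",  # illustrative output path
        num_train_epochs=2,
        per_device_train_batch_size=8,
        fp16=True,
        seed=seed,
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["test"],
        tokenizer=tokenizer,  # enables dynamic padding via the default collator
    ).train()
```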
---

## Evaluation results

Across the three models, results are very consistent:

| Model (Seed) | Epochs | Val. Accuracy | Val. Macro F1 |
|--------------|--------|---------------|---------------|
| 42           | 2      | 93.74%        | 0.9374        |
| 123          | 2      | 93.84%        | 0.9383        |
| 999          | 2      | 93.98%        | 0.9398        |

**Ensemble performance** with example weights `[0.2, 0.2, 0.6]` improves stability and reduces variance across the individual seeds.

---

## How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("ByteMeHarder-404/bert-imdb-ensemble")
model = AutoModelForSequenceClassification.from_pretrained("ByteMeHarder-404/bert-imdb-ensemble")

inputs = tokenizer("This movie was an absolute masterpiece!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)

print(probs)  # tensor([[0.01, 0.99]]) -> positive sentiment
```
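Note that this snippet loads a single checkpoint from the repo; to combine several seed checkpoints yourself, adapt the weighted-averaging sketch in the Model description.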