base_model:
- google-bert/bert-base-uncased
tags:
- sentiment_analysis
---

# BERT Sentiment Analysis πŸš€

This is a fine-tuned BERT model (`bert-base-uncased`) for **sentiment analysis** on the IMDb movie review dataset. The model classifies text into:

- βœ… Positive
- ❌ Negative

---

## 🧠 Model Details

- **Base Model:** `bert-base-uncased`
- **Dataset:** IMDb (via Hugging Face Datasets)
- **Classes:** Binary classification (0: Positive, 1: Negative)
- **Framework:** PyTorch
- **Training:** Fine-tuned on a GPU using Hugging Face Transformers and Datasets

---

## πŸ“₯ How to Use

You can use this model directly with πŸ€— `transformers`:

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import torch.nn.functional as F

model_name = "saubhagya122k4/bert-sentiment-analysis"

tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "This movie was fantastic! I really enjoyed it."

inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=70)

with torch.no_grad():
    outputs = model(**inputs)
    probs = F.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(probs, dim=1).item()

labels = {0: "Positive", 1: "Negative"}
print(f"Sentiment: {labels[predicted]}")
```

---

## πŸ“Š Performance

- **Accuracy:** ~93% on the test set
- **Tokenizer:** `bert-base-uncased`
- **Sequence Length:** 70 tokens

---

## πŸ›  Training & Fine-tuning

- Notebook: [View on Kaggle](https://www.kaggle.com/code/saubhagyavishwakarma/sentiment-bert-model?scriptVersionId=247854860)
- Framework: PyTorch + Hugging Face Transformers
- Batch Size: 16
- Epochs: 3
- Optimizer: AdamW

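The setup above can be sketched with the `Trainer` API, using the listed hyperparameters (batch size 16, 3 epochs; AdamW is the `Trainer` default optimizer). This is a configuration sketch, not the exact notebook code: the output path and tokenization details are illustrative, and running it downloads the base model and the IMDb dataset.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Fixed-length padding/truncation to the 70-token limit used by this card.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=70)

dataset = load_dataset("imdb").map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert-sentiment",      # illustrative path
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
# trainer.train()  # needs a GPU for reasonable runtime
```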
---

## πŸ“¦ Use Cases

- Movie review classification
- Customer feedback analysis
- Product sentiment detection

---

## 🧾 License

This model is available for public use under the **Apache 2.0** license.

---

## πŸ™‹β€β™‚οΈ Author

**Saubhagya Vishwakarma**  
πŸ“§ [saubhagyah5331@gmail.com](mailto:saubhagyah5331@gmail.com)  
πŸ”— [GitHub](https://github.com/Saubhagyah5331)