Update README.md
README.md CHANGED

@@ -1,42 +1,27 @@
-# Mini Transformer Sentiment Model
-Uploaded via GitHub Actions
-
 ---
 language: en
-
-- custom
+license: mit
 tags:
 - sentiment-analysis
+- mini-transformer
-
-library_name: pytorch
 pipeline_tag: text-classification
+library_name: pytorch
+datasets:
+- custom
 ---
 
 # Mini Transformer Sentiment Model
 
-
-
-
-## 🧠 Model Details
-
-- Architecture: Mini Transformer Encoder
-- Framework: PyTorch
-- Task: Sentiment Classification
-- Input: Tokenized text
-- Output: Sentiment label (0 = negative, 1 = positive)
-
-## 🏋️‍♂️ Training
-
-Trained using a custom CSV dataset with sentences and binary sentiment labels.
+A minimal Transformer encoder for sentiment classification.
+Trained on a small custom dataset using PyTorch.
 
-##
+## Usage
 
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 
-model = AutoModelForSequenceClassification.from_pretrained("YOUR_USERNAME/mini-transformer-sentiment")
 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+model = AutoModelForSequenceClassification.from_pretrained("mishrabp/mini-transformers")
 
-inputs = tokenizer("I love this
-
-print(outputs.logits)
+inputs = tokenizer("I love this!", return_tensors="pt")
+print(model(**inputs).logits)
 ```
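The updated usage snippet prints raw logits. To turn those logits into a sentiment label, apply a softmax and take the highest-probability class — a minimal sketch in plain Python, assuming the label order from the old model card (0 = negative, 1 = positive):

```python
import math

def logits_to_label(logits):
    """Convert raw logits for one example into (predicted class, confidence)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]   # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]          # softmax: probabilities summing to 1
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

# e.g. logits leaning toward the positive class
label, confidence = logits_to_label([-1.2, 2.3])
print(label, round(confidence, 3))  # → 1 0.971
```

With the Transformers `pipeline("text-classification", ...)` API this softmax step is handled automatically.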