mishrabp committed
Commit a91d102 · verified · 1 parent: ccda66e

Update README.md

Files changed (1): README.md (+11 −26)
README.md CHANGED
````diff
@@ -1,42 +1,27 @@
-# Mini Transformer Sentiment Model
-Uploaded via GitHub Actions
-
 ---
 language: en
-datasets:
-- custom
+license: mit
 tags:
 - sentiment-analysis
-license: mit
-library_name: pytorch
+- mini-transformer
 pipeline_tag: text-classification
+library_name: pytorch
+datasets:
+- custom
 ---
 
 # Mini Transformer Sentiment Model
 
-This is a lightweight Transformer encoder trained for sentiment analysis (positive/negative).
-Built and uploaded automatically using a GitHub Actions pipeline.
-
-## 🧠 Model Details
-
-- Architecture: Mini Transformer Encoder
-- Framework: PyTorch
-- Task: Sentiment Classification
-- Input: Tokenized text
-- Output: Sentiment label (0 = negative, 1 = positive)
-
-## 🏋️‍♂️ Training
-
-Trained using a custom CSV dataset with sentences and binary sentiment labels.
+A minimal Transformer encoder for sentiment classification.
+Trained on a small custom dataset using PyTorch.
 
-## 🧩 Usage
+## Usage
 
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 
-model = AutoModelForSequenceClassification.from_pretrained("YOUR_USERNAME/mini-transformer-sentiment")
 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+model = AutoModelForSequenceClassification.from_pretrained("mishrabp/mini-transformers")
 
-inputs = tokenizer("I love this product!", return_tensors="pt")
-outputs = model(**inputs)
-print(outputs.logits)
+inputs = tokenizer("I love this!", return_tensors="pt")
+print(model(**inputs).logits)
````
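
The updated snippet stops at printing raw logits. Going by the label mapping stated in the removed model-card text (0 = negative, 1 = positive — an assumption for this checkpoint), here is a minimal sketch of turning a logits row into a label; it uses a hypothetical hard-coded logits row in place of a real hub download so it runs offline:

```python
import math

# Hypothetical logits row, standing in for model(**inputs).logits[0];
# label order assumed from the model card: index 0 = negative, 1 = positive.
logits = [-1.2, 2.3]

# Softmax turns the two logits into class probabilities.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

label = "positive" if probs[1] > probs[0] else "negative"
print(f"{label} (p={probs[1]:.3f})")  # → positive (p=0.971)
```

With the real model, replace the hard-coded row with `model(**inputs).logits[0].tolist()` (or use `logits.softmax(-1).argmax(-1)` directly on the tensor).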