---
license: mit
language:
- en
tags:
- gpt2
- rlhf
- sentiment-analysis
- sft
- transformers
library_name: transformers
datasets:
- stanfordnlp/sst2
base_model:
- openai-community/gpt2
---

# 🧠 GPT-2 SFT Model – Supervised Fine-Tuning for Positive Sentiment

This model is the **first stage** of a three-step RLHF (Reinforcement Learning from Human Feedback) pipeline built on **GPT-2**. It was fine-tuned on the **Stanford Sentiment Treebank v2 (SST-2)** dataset to generate sentences with a positive sentiment tone.

---

## 📌 Context

This model is part of the following RLHF project structure:

1. **Supervised Fine-Tuning (SFT)** – Fine-tunes GPT-2 on positive SST-2 sentences (this model).
2. **Reward Model (RM)** – Trained to predict sentiment scores (a minimal sketch follows below).
3. **PPO-based Optimization (RLHF)** – The final model, optimized to generate high-reward (positive) responses.

You are currently viewing the **SFT model**.
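
The reward model itself lives in the step-2 repo, but the mechanics are simple: a classifier assigns each generated sentence a scalar sentiment score, and PPO uses that score as the reward. A minimal sketch, assuming a GPT-2 backbone with a single-logit head (the actual step-2 checkpoint, its backbone, and its head may differ):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical stand-in for the step-2 reward model; the real checkpoint
# from this pipeline may use a different backbone or head.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

reward_model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
reward_model.config.pad_token_id = tokenizer.pad_token_id

def reward(texts):
    """Score a batch of generations; higher means more positive."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=128)
    with torch.no_grad():
        return reward_model(**batch).logits.squeeze(-1)

print(reward(["The movie was an absolute delight."]))
```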

---

## ✅ Model Objective

Fine-tune GPT-2 on positive, sentiment-labeled sentences so that it mimics human-like, sentiment-aware generation.

- **Input:** the start of a sentence (prompt)
- **Output:** GPT-2 completes it with a positively toned sentence

---

## 📚 Training Details

### 🔧 Dataset

- **Source:** `stanfordnlp/sst2`
- **Type:** movie review sentences
- **Labels:** positive and negative
- **Preprocessing:** only positive samples retained for SFT (see the sketch below)
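
A minimal sketch of that filtering step with the `datasets` library, using the `stanfordnlp/sst2` schema (`label == 1` marks a positive sentence):

```python
from datasets import load_dataset

# Load SST-2 and keep only the positive examples for supervised fine-tuning.
dataset = load_dataset("stanfordnlp/sst2", split="train")
positive = dataset.filter(lambda example: example["label"] == 1)

print(f"{len(dataset)} sentences -> {len(positive)} positive sentences")
print(positive[0]["sentence"])
```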
53
+
54
+ ### βš™οΈ Configuration
55
+
56
+ - **Model Base:** `gpt2`
57
+ - **Max Sequence Length:** 128
58
+ - **Batch Size:** 8
59
+ - **Epochs:** 3
60
+ - **Optimizer:** AdamW
61
+ - **Learning Rate:** 5e-5
62
+ - **Precision:** FP16
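
The training script is not bundled with this card; the snippet below is a rough sketch of how these settings map onto the `transformers` `Trainer` API (the output path is a placeholder, and FP16 assumes a CUDA device):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Positive-only split, as in the preprocessing step above.
positive = load_dataset("stanfordnlp/sst2", split="train").filter(
    lambda example: example["label"] == 1
)
tokenized = positive.map(
    lambda example: tokenizer(example["sentence"], truncation=True, max_length=128),
    remove_columns=positive.column_names,
)

args = TrainingArguments(
    output_dir="gpt2-sft-positive",  # placeholder path
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=5e-5,              # AdamW is the Trainer default optimizer
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # mlm=False selects standard causal (next-token) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```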

---

## 🚀 Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace "your-hf-username/gpt2-sft-positive" with this model's actual repo id.
model = AutoModelForCausalLM.from_pretrained("your-hf-username/gpt2-sft-positive")
tokenizer = AutoTokenizer.from_pretrained("your-hf-username/gpt2-sft-positive")

prompt = "The movie was"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
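
`generate` decodes greedily by default; pass `do_sample=True` (optionally with `temperature` or `top_p`) for more varied completions.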