faizack committed on
Commit 9f82abd · verified · 1 Parent(s): 6219cae

update model card

Files changed (1)
  1. README.md +78 -133
README.md CHANGED
@@ -1,199 +1,144 @@
 
 ---
 library_name: transformers
- tags: []
 ---
 
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
 
 ## Model Details
 
 ### Model Description
 
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
 ### Direct Use
 
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
 
 ### Out-of-Scope Use
 
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
 
 ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
 
 ### Recommendations
 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
 
- Use the code below to get started with the model.
 
- [More Information Needed]
 
 ## Training Details
 
 ### Training Data
 
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
 
 ### Training Procedure
 
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
 
 #### Training Hyperparameters
 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
 
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
 
 ## Evaluation
 
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
 
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
 
 ### Results
 
- [More Information Needed]
 
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
 
  ## Environmental Impact
 
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
 
- ## Technical Specifications [optional]
 
- ### Model Architecture and Objective
 
- [More Information Needed]
 
 ### Compute Infrastructure
 
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
 
- [More Information Needed]
 
- ## Citation [optional]
 
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
 **BibTeX:**
 
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
 
- [More Information Needed]
 
+
 ---
 library_name: transformers
+ tags: ["gpt2", "causal-lm", "fine-tuned", "chatbot"]
 ---
 
+ # Model Card for GPT2-Chat (Fine-tuned)
 
+ This is a fine-tuned version of **GPT-2** adapted for **chat-style generation**.
+ It was trained on conversational data to make GPT-2 behave more like ChatGPT, producing more interactive, coherent, and context-aware responses.
 
+ ---
 
 ## Model Details
 
 ### Model Description
+ - **Developed by:** Faijan Khan
+ - **Shared by:** [faizack](https://huggingface.co/faizack)
+ - **Model type:** Causal Language Model (decoder-only transformer)
+ - **Language(s):** English
+ - **License:** MIT (or same as GPT-2)
+ - **Finetuned from:** [gpt2](https://huggingface.co/gpt2)
 
+ ### Model Sources
+ - **Repository:** [https://huggingface.co/faizack/gpt2-chat-ft](https://huggingface.co/faizack/gpt2-chat-ft)
+ - **Paper [GPT-2 original]:** [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
 
+ ---
 
 ## Uses
 
 ### Direct Use
+ - Conversational AI experiments
+ - Chatbot prototyping
+ - Educational or research purposes
 
+ ### Downstream Use
+ - Further fine-tuning for domain-specific dialogue (e.g., customer support, tutoring, storytelling).
 
 ### Out-of-Scope Use
+ - Not intended for production use without additional safety layers.
+ - Not suitable for sensitive domains like medical, legal, or financial advice.
 
+ ---
 
 ## Bias, Risks, and Limitations
+ - May generate biased, offensive, or factually incorrect responses (inherited from GPT-2).
+ - Not aligned via RLHF like ChatGPT, so safety guardrails are minimal.
 
 ### Recommendations
 
+ - Use with human oversight.
+ - Add filtering, moderation, or reinforcement learning from human feedback (RLHF) if deploying in production.
 
+ ---
 
 ## How to Get Started with the Model
 
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint as a text-generation pipeline
+ chatbot = pipeline("text-generation", model="faizack/gpt2-chat-ft")
+
+ prompt = "Hello, how are you?"
+ response = chatbot(prompt, max_new_tokens=100, do_sample=True, temperature=0.7)
+ print(response[0]["generated_text"])
+ ```
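+
+ If you want explicit control over tokenization and decoding, the same generation can be done with `AutoTokenizer`/`AutoModelForCausalLM`. This is a minimal sketch; the sampling settings simply mirror the pipeline call above and are not prescriptive:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("faizack/gpt2-chat-ft")
+ model = AutoModelForCausalLM.from_pretrained("faizack/gpt2-chat-ft")
+
+ inputs = tokenizer("Hello, how are you?", return_tensors="pt")
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=100,
+     do_sample=True,
+     temperature=0.7,
+     pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
+ )
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```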
+
+ ---
 
 ## Training Details
 
 ### Training Data
 
+ * Fine-tuned on conversational datasets (prompt-response pairs).
 
 ### Training Procedure
 
+ * Base model: `gpt2`
+ * Objective: Causal LM (next-token prediction).
+ * Mixed precision: fp16 training.
+ * Optimizer: AdamW.
 
 #### Training Hyperparameters
 
+ * Learning rate: 5e-5
+ * Batch size: 4
+ * Epochs: 3
+ * Warmup steps: 500
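+
+ For illustration, a fine-tuning run with these hyperparameters might look like the sketch below. The dataset name `your_chat_dataset` and its `text` column are placeholders, not the data actually used; `Trainer` uses AdamW by default, matching the procedure above:
+
+ ```python
+ from datasets import load_dataset
+ from transformers import (AutoModelForCausalLM, AutoTokenizer,
+                           DataCollatorForLanguageModeling, Trainer, TrainingArguments)
+
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
+ model = AutoModelForCausalLM.from_pretrained("gpt2")
+
+ # Placeholder dataset with one "text" column of flattened prompt-response pairs
+ dataset = load_dataset("your_chat_dataset", split="train")
+ tokenized = dataset.map(
+     lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
+     batched=True, remove_columns=dataset.column_names,
+ )
+
+ args = TrainingArguments(
+     output_dir="gpt2-chat-ft",
+     learning_rate=5e-5,             # hyperparameters from the card
+     per_device_train_batch_size=4,
+     num_train_epochs=3,
+     warmup_steps=500,
+     fp16=True,                      # mixed-precision training, as stated above
+ )
+
+ trainer = Trainer(
+     model=model,
+     args=args,
+     train_dataset=tokenized,
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
+ )
+ trainer.train()
+ ```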
 
+ ---
 
 ## Evaluation
 
+ ### Metrics
 
+ * **Perplexity (PPL)** for fluency.
+ * Manual qualitative evaluation for coherence.
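+
+ Perplexity here is the exponential of the model's mean next-token cross-entropy on held-out text. A minimal sketch of how it can be computed (the sample sentence is a placeholder; a real evaluation would average over a test set):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("faizack/gpt2-chat-ft")
+ model = AutoModelForCausalLM.from_pretrained("faizack/gpt2-chat-ft").eval()
+
+ enc = tokenizer("Hello, how are you? I am fine, thanks!", return_tensors="pt")
+ with torch.no_grad():
+     # Passing labels=input_ids makes the model return the mean cross-entropy loss
+     loss = model(**enc, labels=enc["input_ids"]).loss
+ print(f"Perplexity: {torch.exp(loss).item():.2f}")
+ ```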
 
 ### Results
 
+ * Lower perplexity on conversational prompts compared to base GPT-2.
+ * Produces more context-aware and fluent chat responses.
 
+ ---
 
 ## Environmental Impact
 
+ * **Hardware Type:** NVIDIA A100 (40GB)
+ * **Training time:** \~2 hours
+ * **Cloud Provider:** Vast.ai (example)
+ * **Carbon Emitted:** Estimated <10 kg CO2eq
 
+ ---
 
+ ## Technical Specifications
 
+ ### Model Architecture
 
+ * Transformer decoder-only (117M parameters).
+ * Context length: 1024 tokens.
 
 ### Compute Infrastructure
 
+ * **Hardware:** 1x NVIDIA A100
+ * **Software:** PyTorch, Hugging Face Transformers, Accelerate.
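+
+ These figures can be sanity-checked from the checkpoint itself. Note that `num_parameters()` on GPT-2 small reports roughly 124M; the 117M figure above is the original paper's commonly quoted count:
+
+ ```python
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ config = AutoConfig.from_pretrained("faizack/gpt2-chat-ft")
+ print(config.n_positions)  # context length, expected 1024
+
+ model = AutoModelForCausalLM.from_pretrained("faizack/gpt2-chat-ft")
+ print(f"{model.num_parameters():,} parameters")
+ ```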
 
+ ---
 
+ ## Citation
 
+ If you use this model, please cite GPT-2 and this fine-tuned version:
 
 **BibTeX:**
 
+ ```bibtex
+ @misc{faizack2025gpt2chat,
+   author       = {Faijan Khan},
+   title        = {GPT2-Chat Fine-tuned Model},
+   year         = {2025},
+   publisher    = {Hugging Face},
+   howpublished = {\url{https://huggingface.co/faizack/gpt2-chat-ft}}
+ }
+ ```