Commit b7f29be (verified) by NNEngine
Parent(s): 15df775

Update README.md

Files changed (1): README.md (+10 −10)
README.md CHANGED

@@ -16,7 +16,7 @@ library_name: transformers
 ---
 
 
-# 📘 TinyWay-1.2.0
+# TinyWay-1.2.0
 
 **TinyWay-1.2.0** is a lightweight GPT-style causal language model (~110M parameters) trained from scratch on a mixed streaming corpus (web text, stories, and code).
 The model is designed for research, experimentation, and educational purposes, with an emphasis on transparent architecture and reproducible training.
@@ -25,7 +25,7 @@ The model is designed for research, experimentation, and educational purposes, w
 
 ---
 
-## 🔍 Model Overview
+## Model Overview
 
 | Property | Value |
 | ----------------- | ------------------------------------ |
@@ -43,7 +43,7 @@ The model is designed for research, experimentation, and educational purposes, w
 
 ---
 
-## 🧠 Training Details
+## Training Details
 
 ### Dataset
 
@@ -94,7 +94,7 @@ Final perplexity ≈ **~20**
 
 ---
 
-## 🚀 Usage
+## Usage
 
 ### Load with Transformers (Custom Code Required)
 
@@ -132,7 +132,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ---
 
-## 📊 Example Generations
+## Example Generations
 
 The model demonstrates:
 
@@ -145,7 +145,7 @@ This is a research-grade small LLM, not instruction-aligned by default.
 
 ---
 
-## ⚠️ Limitations
+## Limitations
 
 * ❌ Not instruction-tuned
 * ❌ Limited reasoning depth compared to large LLMs
@@ -157,7 +157,7 @@ Use responsibly.
 
 ---
 
-## 🧪 Intended Use
+## Intended Use
 
 * Research experiments
 * Educational purposes
@@ -169,7 +169,7 @@ Not recommended for production or safety-critical applications.
 
 ---
 
-## 🛠️ Reproducibility
+## Reproducibility
 
 The model was trained using:
 
@@ -184,14 +184,14 @@ If you'd like the full training code or configs, feel free to reach out.
 
 ---
 
-## 📜 License
+## License
 
 This model follows the license of the underlying datasets and tokenizer.
 Please ensure compliance before commercial usage.
 
 ---
 
-## 🙌 Acknowledgements
+## Acknowledgements
 
 * HuggingFace 🤗
 * PyTorch
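
One hunk header above carries the context line "Final perplexity ≈ **~20**" from the README's training section. As a quick sanity check (a sketch, not code from this repo), perplexity is simply the exponential of the mean per-token cross-entropy loss in nats, so a perplexity near 20 corresponds to a final loss of roughly 3.0:

```python
import math

# Perplexity = exp(mean cross-entropy loss in nats).
# A hypothetical final mean token loss of 3.0 nats gives:
final_loss = 3.0
perplexity = math.exp(final_loss)
print(f"{perplexity:.2f}")  # ~20.09, consistent with the README's "~20"
```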
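The Usage hunk is titled "Load with Transformers (Custom Code Required)". A minimal sketch of what that usually means in Transformers, under the assumption that the model ships its own architecture files on the Hub (the repo id `NNEngine/TinyWay-1.2.0` is a hypothetical placeholder, not taken from the diff):

```python
def load_tinyway(repo_id: str = "NNEngine/TinyWay-1.2.0"):
    """Sketch of loading a custom-architecture model; repo id is assumed."""
    # Imported lazily so the sketch can be inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code=True lets Transformers run the model's own modeling
    # files from the Hub, which "Custom Code Required" typically implies.
    tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
    return tokenizer, model
```

Generation would then follow the README's own snippet visible in the hunk context: tokenize a prompt, call `model.generate`, and decode with `skip_special_tokens=True`.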