Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ library_name: transformers
---

-#
+# TinyWay-1.2.0

**TinyWay-1.2.0** is a lightweight GPT-style causal language model (~110M parameters) trained from scratch on a mixed streaming corpus (web text, stories, and code).
The model is designed for research, experimentation, and educational purposes, with an emphasis on transparent architecture and reproducible training.
@@ -25,7 +25,7 @@ The model is designed for research, experimentation, and educational purposes, w
---

-##
+## Model Overview

| Property | Value |
| ----------------- | ------------------------------------ |
@@ -43,7 +43,7 @@ The model is designed for research, experimentation, and educational purposes, w
---

-##
+## Training Details

### Dataset
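The Training Details content between these hunks is elided from the diff; the next hunk header quotes its bottom line, a final perplexity of roughly 20. For orientation, perplexity is just the exponential of the mean per-token cross-entropy, so ~20 corresponds to about 3.0 nats per token. A quick sanity check (not from the card itself):

```python
# Perplexity = exp(mean cross-entropy). A final perplexity of ~20
# corresponds to a per-token loss of ln(20) ≈ 3.0 nats.
import math

print(math.log(20))   # ≈ 3.00 nats per token
print(math.exp(3.0))  # ≈ 20.1 perplexity
```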
@@ -94,7 +94,7 @@ Final perplexity ≈ **~20**
---

-##
+## Usage

### Load with Transformers (Custom Code Required)
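The card's actual loading snippet is truncated out of this diff; only its final decode line survives in the next hunk header. A minimal sketch of how a checkpoint like this is typically loaded, with the repo id as a placeholder and `trust_remote_code` assumed from the "Custom Code Required" heading above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/TinyWay-1.2.0"  # hypothetical; substitute the real repo id

# trust_remote_code=True because the card says custom modeling code is required
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

inputs = tokenizer("Once upon a time", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```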
@@ -132,7 +132,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
---

-##
+## Example Generations

The model demonstrates:
@@ -145,7 +145,7 @@ This is a research-grade small LLM, not instruction-aligned by default.
---

-##
+## Limitations

* ❌ Not instruction-tuned
* ❌ Limited reasoning depth compared to large LLMs
@@ -157,7 +157,7 @@ Use responsibly.
---

-##
+## Intended Use

* Research experiments
* Educational purposes
@@ -169,7 +169,7 @@ Not recommended for production or safety-critical applications.
---

-##
+## Reproducibility

The model was trained using:
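The tooling list under Reproducibility is elided from this diff. As a hedged sketch only: given the card's PyTorch/Transformers stack and streaming corpus, a from-scratch causal-LM training step would look roughly like the following. The dataset, model shape, and hyperparameters here are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch of a causal-LM objective over a streaming mixed corpus.
# Everything named below is a stand-in, not the card's real setup.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, GPT2Config, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer
config = GPT2Config(n_layer=12, n_head=12, n_embd=768)  # ~110M-class GPT shape (assumption)
model = GPT2LMHeadModel(config)

stream = load_dataset("allenai/c4", "en", split="train", streaming=True)  # stand-in corpus
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

for i, example in enumerate(stream):
    batch = tokenizer(example["text"], truncation=True, max_length=512, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])  # shifted LM loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if i >= 2:  # a few steps only; real training runs far longer
        break
```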
@@ -184,14 +184,14 @@ If you’d like the full training code or configs, feel free to reach out.
---

-##
+## License

This model follows the license of the underlying datasets and tokenizer.
Please ensure compliance before commercial usage.

---

-##
+## Acknowledgements

* HuggingFace 🤗
* PyTorch