---
license: mit
datasets:
- shivendrra/consolidated-datasets
language:
- en
metrics:
- perplexity
tags:
- base-model
- text-generation
- nlp
- custom_code
- causal-lm
library_name: transformers
---
# TinyWay-1.2.0
**TinyWay-1.2.0** is a lightweight GPT-style causal language model (~110M parameters) trained from scratch on a mixed streaming corpus (web text, stories, and code).
The model is designed for research, experimentation, and educational purposes, with an emphasis on transparent architecture and reproducible training.
> ⚡ Trained end-to-end using a custom PyTorch pipeline with mixed precision, gradient accumulation, and streaming datasets.
---
## Model Overview
| Property | Value |
| ----------------- | ------------------------------------ |
| Model type | Decoder-only Transformer (GPT-style) |
| Parameters | **~109.6M** |
| Layers | 10 |
| Hidden size | 768 |
| Attention heads | 12 |
| Context length | 256 tokens |
| Activation | GELU |
| Dropout | 0.1 |
| Precision | fp16 / bf16 |
| Weight tying | Token embedding tied with LM head |
| Position encoding | Learned absolute embeddings |
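The parameter count can be sanity-checked directly from this table. A back-of-the-envelope sketch, assuming a standard GPT-2-style block (biased projections, a 4x-hidden MLP, and two LayerNorms per layer):

```python
# Back-of-the-envelope parameter count from the configuration above.
vocab, d, n_layers, ctx = 50257, 768, 10, 256

embeddings = vocab * d                      # token embeddings (tied with the LM head)
positions  = ctx * d                        # learned absolute position embeddings
attn       = 4 * d * d + 4 * d              # Q/K/V/output projections + biases
mlp        = 2 * d * (4 * d) + (4 * d) + d  # 4x-hidden MLP projections + biases
norms      = 2 * (2 * d)                    # two LayerNorms (weight + bias) per block
per_block  = attn + mlp + norms

total = embeddings + positions + n_layers * per_block + 2 * d  # + final LayerNorm
print(f"~{total / 1e6:.1f}M")  # ~109.7M, consistent with the ~109.6M reported above
```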
---
## Training Details
### Dataset
The model was trained using **streaming data** from:
* 🌐 Web text
* 📖 Stories
* 💻 Code
via the HuggingFace dataset:
```
shivendrra/consolidated-datasets
```
Streaming was used to avoid large local storage requirements and to allow continuous sampling directly from the HuggingFace Hub.
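A minimal streaming setup looks like the following (a sketch; the split name and record fields are assumptions about the dataset's layout):

```python
from datasets import load_dataset

# Stream records straight from the HuggingFace Hub; nothing is downloaded up front.
dataset = load_dataset("shivendrra/consolidated-datasets", split="train", streaming=True)

# Examples are fetched lazily as you iterate.
for example in dataset.take(3):
    print(example)
```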
---
### Tokenization
* Tokenizer: **GPT2TokenizerFast**
* Vocabulary size: **50,257**
* Special tokens: `bos_token_id = eos_token_id = pad_token_id = 50256` (GPT-2's `<|endoftext|>`)
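A minimal sketch of the tokenizer setup (GPT-2 defines no dedicated pad token, so `eos` is reused):

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # reuse <|endoftext|> (id 50256) for padding

print(len(tokenizer))          # 50257 vocabulary entries
print(tokenizer.eos_token_id)  # 50256
```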
---
### Training Configuration
| Setting | Value |
| --------------------- | ---------------------------- |
| Sequence length | 256 |
| Effective batch size | 64 sequences |
| Optimizer | AdamW |
| Learning rate | 3e-4 (cosine decay + warmup) |
| Betas | (0.9, 0.95) |
| Weight decay | 0.1 |
| Gradient clipping | 1.0 |
| Mixed precision | AMP (fp16 / bf16) |
| Gradient accumulation | Yes |
| Training steps | ~60k |
| Total tokens | ~1B |

Final training loss ≈ **3.0**
Final perplexity ≈ **20** (perplexity = exp(loss); exp(3.0) ≈ 20.1)
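Putting the configuration together, the core update step looks roughly like this (a minimal sketch, not the actual pipeline: the stand-in model and random batches are placeholders, and the warmup length is illustrative):

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel, get_cosine_schedule_with_warmup

# Stand-in model with the card's dimensions (the real model uses custom code).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel(GPT2Config(n_layer=10, n_embd=768, n_head=12, n_positions=256)).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, betas=(0.9, 0.95), weight_decay=0.1)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=2_000,  # warmup is illustrative
                                            num_training_steps=60_000)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
accum_steps = 8  # illustrative split: 8 micro-batches of 8 sequences = 64 effective

for step in range(accum_steps):  # one effective optimizer step
    input_ids = torch.randint(0, 50257, (8, 256), device=device)  # placeholder batch
    with torch.autocast(device_type=device, dtype=torch.float16, enabled=(device == "cuda")):
        loss = model(input_ids, labels=input_ids).loss / accum_steps  # scale for accumulation
    scaler.scale(loss).backward()

scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping = 1.0
scaler.step(optimizer)
scaler.update()
scheduler.step()
optimizer.zero_grad(set_to_none=True)
```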
---
## Usage
### Load with Transformers (Custom Code Required)
This repository uses a custom model definition (`modeling_tinyway.py`); load it with `trust_remote_code=True` so the custom class can be fetched and executed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is required because the model class lives in this repo
model = AutoModelForCausalLM.from_pretrained("NNEngine/TinyWay-1.2.0", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```
---
### Text Generation Example
```python
import torch

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

model.eval()
with torch.no_grad():  # inference only; no gradients needed
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        temperature=0.8,
        top_k=50,
        top_p=0.95,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Example Generations
The model demonstrates:
* ✅ Coherent sentence structure
* ✅ Narrative flow in stories
* ✅ Reasonable grammar and punctuation
* ⚠️ Occasional repetition and topic drift (expected for this scale)
This is a research-grade small LLM, not instruction-aligned by default.
---
## Limitations
* ❌ Not instruction-tuned
* ❌ Limited reasoning depth compared to large LLMs
* ❌ Context length limited to 256 tokens
* ⚠️ May hallucinate or generate inconsistent facts
* ⚠️ Training data may contain noise from web sources
Use responsibly.
---
## Intended Use
* Research experiments
* Educational purposes
* Model scaling studies
* Training pipeline benchmarking
* Custom fine-tuning experiments (see the sketch below)
Not recommended for production or safety-critical applications.
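For the fine-tuning use case, a minimal starting point might look like the following (a sketch, not a recipe: the dataset and hyperparameters are illustrative, and it assumes the custom model class follows the standard `transformers` causal-LM interface):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("NNEngine/TinyWay-1.2.0", trust_remote_code=True)

# Illustrative corpus; swap in your own data.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
dataset = dataset.filter(lambda ex: len(ex["text"].strip()) > 0)
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),  # 256-token context
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tinyway-ft", per_device_train_batch_size=8,
                           num_train_epochs=1, learning_rate=5e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```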
---
## Reproducibility
The model was trained using:
* Custom PyTorch training loop
* Streaming datasets via HuggingFace
* Mixed precision training
* Gradient accumulation
* Periodic checkpointing
* Full monitoring (loss, perplexity, gradient norm, attention entropy)
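Two of the monitored quantities, for reference (a minimal sketch; `attention_entropy` is an illustrative helper, not part of the released code):

```python
import math
import torch

# Perplexity is the exponential of the mean cross-entropy loss:
print(math.exp(3.0))  # ~20.1, matching the final numbers reported above

def attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Mean entropy of attention distributions.

    attn: softmax probabilities of shape (batch, heads, query, key)."""
    eps = 1e-9
    return -(attn * (attn + eps).log()).sum(dim=-1).mean()

# Example: uniform attention over 256 keys gives the maximum entropy, log(256).
uniform = torch.full((1, 12, 256, 256), 1.0 / 256)
print(attention_entropy(uniform))  # ~5.545 = log(256)
```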
If you'd like the full training code or configs, feel free to reach out.
---
## License
The model weights are released under the MIT license (see the metadata above). Please also ensure compliance with the licenses of the underlying datasets and tokenizer before commercial usage.
---
## Acknowledgements
* HuggingFace 🤗
* PyTorch
* GPT-2 tokenizer
* Open research community