---
language:
- en
license: apache-2.0
tags:
- gpt2
- pytorch
- causal-lm
- text-generation
- fineweb
datasets:
- HuggingFaceFW/fineweb-edu
---
# LiteGPT-Base
This is a **124M parameter** Language Model (GPT-2 Small architecture) pre-trained from scratch on the **FineWeb-Edu** dataset.
It is the base model for [LiteGPT-Instruct](https://huggingface.co/koganrath/LiteGPT-Instruct).
## Model Details
- **Architecture**: GPT-2 Small (12 layers, 12 heads, 768 embedding dim; see the configuration sketch below)
- **Parameters**: ~124 million
- **Context Length**: 1024 tokens
- **Training Data**: 10 billion tokens from [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) (the sample-10BT subset)
- **Tokenizer**: GPT-2 BPE (via tiktoken)
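For reference, here is a sketch of how the hyperparameters above map onto a Hugging Face `GPT2Config`. This is illustrative only; `from_pretrained` loads the actual configuration shipped with the model, and `vocab_size=50257` is assumed from the standard GPT-2 tokenizer.

```python
from transformers import GPT2Config, GPT2LMHeadModel

# GPT-2 Small hyperparameters from the list above (illustrative sketch;
# from_pretrained loads the real config automatically)
config = GPT2Config(
    n_layer=12,        # transformer blocks
    n_head=12,         # attention heads per block
    n_embd=768,        # embedding / hidden size
    n_positions=1024,  # context length
    vocab_size=50257,  # standard GPT-2 BPE vocabulary (assumed)
)

model = GPT2LMHeadModel(config)  # randomly initialised model of the same shape
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")  # ~124M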
## Usage
This is a **completion model**. It predicts the next tokens based on the input text. It is NOT an instruction-following model (chatbot).
### Python Example
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the pre-trained weights and the standard GPT-2 tokenizer
model = GPT2LMHeadModel.from_pretrained("koganrath/LiteGPT-Base")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Encode a prompt and generate a continuation
text = "Once upon a time in a digital world,"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    pad_token_id=tokenizer.eos_token_id,  # avoids the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
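Greedy decoding (the default above) can become repetitive with a small base model. A minimal sampling variant using standard `generate` arguments follows; the temperature and top-k values are illustrative, not tuned for this model.

```python
# Continuing from the example above: sample instead of greedy decoding
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,    # sample from the output distribution
    temperature=0.8,   # illustrative setting, not tuned for this model
    top_k=50,          # illustrative setting, not tuned for this model
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```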
## Limitations
- **Size**: 124M parameters is small by modern standards.
- **Coherence**: Long-form generation may lose coherence.
- **Knowledge**: Limited to the scope and cut-off of the FineWeb-Edu training data.
## Authors
Trained by **koganrath** as part of the LiteGPT Project.