# TinyTextGenerator

## Overview

TinyTextGenerator is a small causal language model based on the GPT-2 architecture, designed for basic text generation tasks. With only 6 layers, it is lightweight and fast, making it ideal for experimentation, local deployment, or educational use.

## Model Architecture

- **Model type**: GPT-2 (causal language modeling)
- **Hidden size**: 768
- **Number of layers**: 6
- **Number of attention heads**: 12
- **Vocabulary size**: 50,257
- **Context length**: 1024 tokens
- **Parameters**: ~82M

The model is implemented with `GPT2LMHeadModel` from the Hugging Face Transformers library.
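
As a rough sketch (the repo's own `config.json` is the source of truth), the hyperparameters listed above correspond to a `GPT2Config` along these lines:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Hypothetical config matching the hyperparameters in this card;
# check the repository's config.json for the authoritative values.
config = GPT2Config(
    vocab_size=50257,   # standard GPT-2 BPE vocabulary
    n_positions=1024,   # maximum context length in tokens
    n_embd=768,         # hidden size
    n_layer=6,          # transformer blocks (half of GPT-2 small's 12)
    n_head=12,          # attention heads per block
)

model = GPT2LMHeadModel(config)  # randomly initialized weights
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # ~82M
```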

## Usage

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-username/TinyTextGenerator"
)

output = generator(
    "The future of AI is",
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95
)

print(output[0]['generated_text'])
```
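
With `do_sample=True` and `top_p=0.95` (nucleus sampling), completions will vary between runs; lower `top_p` or set `do_sample=False` for more deterministic output. For finer control over tokenization and decoding, you can load the model and tokenizer directly. This sketch assumes the same placeholder hub ID as above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/TinyTextGenerator"  # placeholder hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The future of AI is", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```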