TinyStories GPT2 124M

A GPT2 model trained from scratch on the TinyStories dataset to generate children's stories.

Training Details

  • Base Architecture: GPT2 (124M parameters)
  • Dataset: karpathy/tinystories-gpt4-clean
  • Training Steps: 100,000
  • Best Val Loss: 1.1295
  • Hardware: NVIDIA RTX PRO 6000 (G4)
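"124M parameters" corresponds to the standard GPT2-small configuration. As a minimal sketch of how a same-sized model can be initialized from scratch (the `n_positions=512` value is an assumption taken from the 512-token context window listed under Limitations; the exact training configuration is not published here):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Standard GPT2-small dimensions: 12 layers, 12 heads, 768-dim embeddings.
# n_positions=512 is assumed from the 512-token context window noted below.
config = GPT2Config(n_layer=12, n_head=12, n_embd=768, n_positions=512)
model = GPT2LMHeadModel(config)  # random weights, i.e. trained "from scratch"

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")
```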

How To Use

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained(
    "{HF_USERNAME}/{MODEL_NAME}"
)
tokenizer = GPT2TokenizerFast.from_pretrained(
    "{HF_USERNAME}/{MODEL_NAME}"
)
model.eval()  # inference mode

prompt = "Once upon a time there was a little cat"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,  # passes input_ids and attention_mask
    max_new_tokens=200,
    temperature=0.8,
    top_p=0.9,
    do_sample=True,
    repetition_penalty=1.2,
    pad_token_id=tokenizer.eos_token_id,
)

story = tokenizer.decode(
    outputs[0],
    skip_special_tokens = True
)
print(story)
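
The top_p=0.9 setting restricts sampling to the smallest set of tokens whose probabilities sum to at least 0.9 (nucleus sampling). A minimal, library-free sketch of the idea over a toy distribution (`nucleus_sample` is a hypothetical helper written for illustration, not part of transformers):

```python
import random

def nucleus_sample(probs, top_p=0.9, rng=random):
    # Sort tokens by probability, descending; keep the smallest prefix
    # whose cumulative mass reaches top_p.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for idx, p in ranked:
        kept.append((idx, p))
        total += p
        if total >= top_p:
            break
    # Renormalize the truncated distribution and draw one token from it.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    for idx, p in kept:
        r -= p
        if r <= 0:
            return idx
    return kept[-1][0]

# With probs [0.5, 0.3, 0.15, 0.05] and top_p=0.9, the nucleus is the
# first three tokens (0.5 + 0.3 + 0.15 = 0.95 >= 0.9); token 3 is never drawn.
```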

Example Output

"Once upon a time there was a little cat called Mimi. She loved to play with her toys, but one day she got very sad because she couldn't find her favorite toy. They searched everywhere and finally found it under the bed! Mimi was so happy and hugged her mom tight."

Limitations

  • Generates children's stories only
  • Works best with story-style prompts
  • 512 token context window
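
Because generated tokens count toward the same 512-token window, the prompt must leave room for max_new_tokens. A small budgeting sketch (`fit_prompt` is a hypothetical helper, and the 200-token figure simply mirrors the usage example above):

```python
MAX_CONTEXT = 512  # model context window, per the limitation above
MAX_NEW = 200      # matches max_new_tokens in the usage example

def fit_prompt(token_ids, max_context=MAX_CONTEXT, max_new=MAX_NEW):
    """Keep only the most recent tokens so prompt + generation fits the window."""
    budget = max_context - max_new
    return token_ids[-budget:] if len(token_ids) > budget else token_ids
```

For example, a 400-token prompt would be trimmed to its last 312 tokens, while a short prompt passes through unchanged.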