Gemma3-270M Pre-trained on TinyStories

This is a Gemma3-270M model pre-trained on the TinyStories dataset for 150k iterations.

Model Details

  • Architecture: Gemma3-270M
  • Training Data: TinyStories dataset from HuggingFace
  • Training Iterations: 150,000
  • Parameters: ~270M unique parameters
  • Tokenizer: GPT-2 tokenizer via tiktoken (see the round-trip snippet after this list)
  • Training Loss: Available in training history
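
As a quick sanity check, the tokenizer can be exercised on its own. This is a minimal round-trip example using tiktoken's GPT-2 encoding, independent of the model itself.

import tiktoken

tokenizer = tiktoken.get_encoding("gpt2")
ids = tokenizer.encode("Once upon a time")   # list of GPT-2 token ids
text = tokenizer.decode(ids)                 # round-trips back to the prompt
print(ids, text)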

Quick Start

Download the Model

from huggingface_hub import hf_hub_download
import torch

# Download model weights
model_path = hf_hub_download(
    repo_id="vuminhtue/gemma3_270m_150k_tinystories",
    filename="Gemma3_270m_150k_model_params.pt"
)

# Download config
config_path = hf_hub_download(
    repo_id="vuminhtue/gemma3_270m_150k_tinystories",
    filename="config.json"
)
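
The downloaded config.json can be read with the standard library. The exact keys it contains are an assumption here, so inspect the file if they differ from the GEMMA3_CONFIG dictionary shown in the next section.

import json

# Read the downloaded config and compare it against GEMMA3_CONFIG below
with open(config_path) as f:
    config = json.load(f)
print(config)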

Load and Use

import torch
import tiktoken
from Gemma3_model import Gemma3Model  # Model class definition; obtain this file from the original training code

# Set up configuration
GEMMA3_CONFIG = {
    "vocab_size": 256000,
    "context_length": 8192,
    "emb_dim": 2048,
    "n_heads": 8,
    "n_layers": 18,
    "hidden_dim": 16384,
    "head_dim": 256,
    "dtype": torch.bfloat16,
}

# Load model
model = Gemma3Model(GEMMA3_CONFIG)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.load_state_dict(torch.load(model_path, map_location=device))
model = model.to(device)
model.eval()

# Generate text
tokenizer = tiktoken.get_encoding("gpt2")
# See the greedy-decoding sketch below for a simple generation loop
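
A minimal greedy-decoding sketch follows. It assumes Gemma3Model's forward pass takes a tensor of token ids with shape (batch, seq_len) and returns logits with shape (batch, seq_len, vocab_size); adjust if the actual signature differs.

# Greedy decoding sketch; assumes model(input_ids) returns per-token logits
prompt = "Once upon a time"
input_ids = torch.tensor([tokenizer.encode(prompt)], device=device)

with torch.no_grad():
    for _ in range(100):                     # generate up to 100 new tokens
        logits = model(input_ids)            # (batch, seq_len, vocab_size)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=1)

# Note: the model's vocab may exceed the GPT-2 tokenizer's range;
# handle out-of-range ids if the model emits them.
print(tokenizer.decode(input_ids[0].tolist()))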

Training Details

  • Optimizer: AdamW with weight decay (0.1)
  • Learning Rate: 1e-4 with warmup and cosine decay (see the sketch after this list)
  • Batch Size: 32, with gradient accumulation over 32 steps (effective batch size 1,024)
  • Context Length: 128 tokens
  • Mixed Precision: bfloat16 training
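
The following sketch shows how these settings are commonly wired together in PyTorch. It is not the original training script: model, train_loader, device, and the warmup length are assumptions, and the loss layout assumes next-token prediction with (inputs, targets) batches.

import math
import torch
import torch.nn.functional as F

# Hypothetical setup: `model`, `train_loader`, and `device` are assumed to exist
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)

warmup_steps, max_steps = 1_000, 150_000     # warmup length is an assumption
def lr_lambda(step):
    if step < warmup_steps:                  # linear warmup
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

accum_steps = 32
for step, (inputs, targets) in enumerate(train_loader):
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        logits = model(inputs.to(device))
        loss = F.cross_entropy(logits.flatten(0, 1), targets.to(device).flatten())
    (loss / accum_steps).backward()          # accumulate gradients
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()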

Model Architecture

  • Multi-Head Attention
  • RoPE (Rotary Position Embeddings)
  • RMSNorm for normalization (minimal sketch after this list)
  • SiLU activation function
  • 18 transformer layers
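
For reference, this is a minimal RMSNorm as it is commonly implemented. The exact variant inside Gemma3Model (for example, whether it adds 1.0 to the learned scale, as some Gemma implementations do) may differ.

import torch

class RMSNorm(torch.nn.Module):
    """Minimal RMSNorm: rescale by the reciprocal root-mean-square, no mean centering."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = torch.nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight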

Performance

The model was trained on TinyStories, a dataset of simple stories for children. It can generate coherent short stories in a similar style.

Citation

If you use this model, please cite:

@misc{gemma3-tinystories-2025,
  author = {Your Name},
  title = {Gemma3-270M Pre-trained on TinyStories},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/vuminhtue/gemma3_270m_150k_tinystories}},
}

License

MIT License

Contact

For questions or issues, please open a discussion on the HuggingFace model page.
