Metadata Conditioned LLMs
A Hugging Face collection of 91 items. Pretraining data: the English NOW corpus (english-corpora.org/now). Paper: arxiv.org/abs/2601.15236. Code: github.com/iamshnoo/metadata_localization.
This repo contains the global combined model, trained without metadata conditioning, at its final 10k-step checkpoint for the metadata localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary.
Tags: pretrain, global, 500m, without_metadata. Trained from scratch; tokenizer/vocabulary from meta-llama/Llama-3.2-1B.

| Field | Value |
| --- | --- |
| Run name | 11/11/2025_13:18:50_combined_without_metadata_500m |
| W&B run | https://wandb.ai/iamshnoo/nanotron/runs/lqtre9sh |
| Status | finished (75h 32m 59s) |
| KPI/train_lm_loss | 2.3613 |
| KPI/train_perplexity | 10.6047 |
| KPI/val_loss | 2.404 |
| KPI/val_perplexity | 11.0673 |
| KPI/consumed_tokens/train | 41,943,040,000 |
| _step | 10,000 |
| train_steps | 10,000 |
| sequence_length | 2,048 |
| micro_batch_size | 8 |
| batch_accumulation_per_replica | 64 |
| learning_rate | 0.003 |
| min_decay_lr | 0.0003 |
| checkpoint_interval | 1,000 |

Static plots below were exported from the private Weights & Biases run and embedded here for public access.
This model is part of the metadata localization release. Related checkpoints and variants are grouped in the public Hugging Face collection Metadata Conditioned LLMs.