combined_with_metadata_3b_step2k

Summary

This repository contains the global combined model exported at the 2k-step checkpoint for the metadata localization project. It was trained from scratch on the project corpus using the Llama 3.2 tokenizer and vocabulary.

Variant Metadata

  • Stage: pretrain
  • Family: global
  • Size: 3b
  • Metadata condition: with_metadata
  • Checkpoint export: 2k
  • Base model lineage: Trained from scratch; tokenizer/vocabulary from meta-llama/Llama-3.2-1B

Weights & Biases Provenance

  • Run name: 27/03/2026_19:26:23_combined_with_metadata_3b
  • Internal run URL: https://wandb.ai/iamshnoo/nanotron/runs/l1je3vgr
  • Note: the Weights & Biases workspace is private; public readers should use the summarized metrics and configuration below.
  • State: finished
  • Runtime: 16h 1m 39s

Run Summary

  • KPI/train_lm_loss: 1.9952
  • KPI/train_perplexity: 7.3539
  • KPI/val_loss: 2.0511
  • KPI/val_perplexity: 7.7764
  • KPI/consumed_tokens/train: 41,943,040,000
  • _step: 10,000
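The reported perplexities are simply the exponential of the corresponding losses. A minimal sanity check, using only the KPI values listed above (not part of the training code):

```python
import math

# KPI values copied from the run summary above
train_loss, train_ppl = 1.9952, 7.3539
val_loss, val_ppl = 2.0511, 7.7764

# Perplexity is exp(mean cross-entropy loss)
assert math.isclose(math.exp(train_loss), train_ppl, rel_tol=1e-3)
assert math.isclose(math.exp(val_loss), val_ppl, rel_tol=1e-3)
```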

Training Configuration

  • train_steps: 10,000
  • sequence_length: 2,048
  • micro_batch_size: 8
  • batch_accumulation_per_replica: 64
  • learning_rate: 0.0003
  • min_decay_lr: 0
  • checkpoint_interval: 100
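These settings are consistent with the consumed-token count in the run summary. A short sketch of the arithmetic; note the data-parallel replica count is inferred from the numbers, not stated in the config:

```python
# Values from the training configuration and run summary
train_steps = 10_000
sequence_length = 2_048
micro_batch_size = 8
batch_accumulation_per_replica = 64
consumed_tokens = 41_943_040_000

tokens_per_step = consumed_tokens // train_steps            # 4,194,304 tokens
global_batch_seqs = tokens_per_step // sequence_length      # 2,048 sequences
seqs_per_replica = micro_batch_size * batch_accumulation_per_replica  # 512

# Implied data-parallel degree (assumption: derived, not in the config)
dp_replicas = global_batch_seqs // seqs_per_replica
print(dp_replicas)  # 4
```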

Training Curves

Static plots below were exported from the private Weights & Biases run and embedded here for public access.

  • Train Loss
  • Validation Perplexity
  • Throughput

Project Context

This model is part of the metadata localization release. Related checkpoints and variants are grouped in the public Hugging Face collection Metadata Conditioned LLMs.

Last synced: 2026-04-02 14:40:07 UTC

Model Stats

  • Downloads last month: 143
  • Safetensors model size: 4B params
  • Tensor type: BF16
