`tokenizer.model_max_length` of 1000000000000000019884624838656 ?

#30
by jchwenger - opened

Hi there,

I'm sorry if this ought to be obvious (and yes, I haven't looked in detail into the positional embedding mechanism for Gemma yet), but is that number... accurate (found here)? I'm looking into this as this used to be a reliable number to format the dataset for finetuning for older models. I see in some of your tutorials that you have max_seq_length=512, or gemma_lm.preprocessor.sequence_length = 256, and I'm curious if by any chance you have some guidelines regarding that particular parameter when fine-tuning (beyond simple demos)...

Thanks in advance!

Hi @jchwenger ,

The tokenizer.model_max_length value is an HF sentinel artifact, effectively representing an unbounded default. It is populated by the Hugging Face Transformers library when the tokenizer config doesn't explicitly specify a maximum length at export time. As a result, the tokenizer will technically keep encoding sequences until memory is exhausted rather than enforcing a hard cutoff.
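As a quick illustration (the constant name below comes from the Transformers source; the snippet itself is just plain Python), that strange number is nothing more than the float `1e30` cast to an int, which Transformers uses as its "no limit" default:

```python
# transformers uses VERY_LARGE_INTEGER = int(1e30) as the "unbounded"
# sentinel for model_max_length when the tokenizer config ships without
# an explicit limit. The odd trailing digits are float64 rounding noise,
# not a meaningful context size.
sentinel = int(1e30)
print(sentinel)  # 1000000000000000019884624838656
```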

The actual context length limit is determined by the model architecture (see config.max_position_embeddings), not by the tokenizer. The Gemma-3-270M model is text-only, and its usable context window is bounded by this architectural limit as well as practical memory constraints.
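Since the sentinel is just `int(1e30)`, one way to resolve a usable limit is a small helper (hypothetical, not part of Transformers) that falls back to the architectural value from the model config whenever the tokenizer reports the sentinel:

```python
# Hypothetical helper: resolve a usable max length when the tokenizer
# ships the "unbounded" sentinel instead of a real limit.
SENTINEL = int(1e30)  # Transformers' VERY_LARGE_INTEGER default

def effective_max_length(tokenizer_max: int, config_max: int) -> int:
    # If the tokenizer value is the sentinel (or larger), trust the
    # architectural limit (config.max_position_embeddings) instead.
    if tokenizer_max >= SENTINEL:
        return config_max
    return min(tokenizer_max, config_max)

# e.g. with the sentinel and a 32k architectural limit:
print(effective_max_length(1000000000000000019884624838656, 32768))  # 32768
```

In practice `tokenizer_max` would come from `tokenizer.model_max_length` and `config_max` from `AutoConfig.from_pretrained(...).max_position_embeddings`.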

For fine-tuning, it is therefore safe to ignore the tokenizer's placeholder model_max_length and instead explicitly clamp the sequence length in your tokenizer or preprocessing pipeline, based on available VRAM and the length distribution of your dataset. This maximizes training throughput and avoids unnecessary padding.
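That clamping step can be sketched as follows (the helper and numbers are illustrative, not from this thread): pick a max length that covers most of your dataset's token-count distribution, capped by the architectural or VRAM budget:

```python
import math

def choose_max_length(token_counts, percentile=0.95, hard_cap=2048):
    """Pick a sequence length covering `percentile` of examples,
    capped by the architectural / VRAM limit, so that most batches
    carry little padding."""
    counts = sorted(token_counts)
    idx = min(len(counts) - 1, max(0, math.ceil(percentile * len(counts)) - 1))
    return min(counts[idx], hard_cap)

# Example: per-example token counts of 1..100, 95th percentile, cap 2048.
print(choose_max_length(range(1, 101)))  # 95
```

The chosen value would then be passed to the tokenizer as `max_length=...` with `truncation=True` (and, if needed, `padding="max_length"`) when preprocessing the dataset.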

Thanks!

Hi @srikanta-221 ,

Thanks so much for the clarification! Indeed, it makes far more sense to fetch the maximum context length from the model config rather than from the tokenizer; I wonder where I got the other idea in the first place. Very useful, cheers!

jchwenger changed discussion status to closed
