Wrong default value for `pad_token_id` in `ModernBertConfig`?

#57
by huberm - opened

I tried training ModernBERT from scratch on a custom corpus as follows.

        from transformers import ModernBertConfig, ModernBertForMaskedLM, Trainer

        config = ModernBertConfig(
            vocab_size=32_000,
            num_hidden_layers=8,
            hidden_size=512,
            num_attention_heads=8,
            intermediate_size=2048,
        )
        model = ModernBertForMaskedLM(config)
        trainer = Trainer(
            ...
        )
        trainer.train()

However, the above raises the following error.

  File "~/.local/share/virtualenvs/venv/lib/python3.10/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 1063, in __init__
    self.model = ModernBertModel(config)
  File "~/.local/share/virtualenvs/venv/lib/python3.10/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 871, in __init__
    self.embeddings = ModernBertEmbeddings(config)
  File "~/.local/share/virtualenvs/venv/lib/python3.10/site-packages/transformers/models/modernbert/modeling_modernbert.py", line 202, in __init__
    self.tok_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
  File "~/.local/share/virtualenvs/venv/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 154, in __init__
    padding_idx < self.num_embeddings
AssertionError: Padding_idx must be within num_embeddings

After looking at the source code for ModernBertEmbeddings, I concluded that the error stems from the fact that ModernBertConfig has a default value of pad_token_id=50283, which leads to the PyTorch error down the line (see the documentation here). Indeed, the above code runs if and only if vocab_size >= 50284. Alternatively, training works if I adjust the code as follows.
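To make the cause concrete, here is a minimal sketch of just the embedding construction that the traceback points at (not the model's full init path), using the library's default pad_token_id:

        import torch.nn as nn
        from transformers import ModernBertConfig

        config = ModernBertConfig(vocab_size=32_000)
        print(config.pad_token_id)  # 50283, the library default

        # Mirrors the failing line in ModernBertEmbeddings: padding_idx must be
        # a valid row index of the embedding matrix, i.e. < vocab_size.
        nn.Embedding(config.vocab_size, 512, padding_idx=config.pad_token_id)  # raises AssertionError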

        from transformers import ModernBertConfig, ModernBertForMaskedLM, Trainer

        config = ModernBertConfig(
            vocab_size=32_000,
            num_hidden_layers=8,
            hidden_size=512,
            num_attention_heads=8,
            intermediate_size=2048,
            pad_token_id=None,  # new
            cls_token_id=None,  # new
            sep_token_id=None,  # new
        )
        model = ModernBertForMaskedLM(config)
        trainer = Trainer(
            ...
        )
        trainer.train()

I would greatly appreciate it if someone could confirm that this fix is safe, i.e. that it doesn't have an undesirable impact on the training loop that I'm overlooking.
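For context, my concern is about what padding_idx actually does in PyTorch: when it is set, the corresponding embedding row is zero-initialized and receives no gradient, and that behavior is lost with pad_token_id=None. A minimal check, independent of ModernBERT:

        import torch
        import torch.nn as nn

        emb = nn.Embedding(10, 4, padding_idx=3)
        print(emb.weight[3])           # zero-initialized row for the pad index

        out = emb(torch.tensor([3, 5])).sum()
        out.backward()
        print(emb.weight.grad[3])      # stays zero: the pad row gets no gradient
        print(emb.weight.grad[5])      # non-zero: ordinary rows are updated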

Thanks in advance!

For posterity, I have determined that the proper fix is to pass the tokenizer's special token IDs explicitly, as follows.

        from transformers import ModernBertConfig, ModernBertForMaskedLM, Trainer

        tokenizer = ...  # define tokenizer

        config = ModernBertConfig(
            vocab_size=32_000,
            num_hidden_layers=8,
            hidden_size=512,
            num_attention_heads=8,
            intermediate_size=2048,
            pad_token_id=tokenizer.pad_token_id,  # new
            cls_token_id=tokenizer.cls_token_id,  # new
            sep_token_id=tokenizer.sep_token_id,  # new
        )
        model = ModernBertForMaskedLM(config)
        trainer = Trainer(
            ...
        )
        trainer.train()
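A quick sanity check before building the model can catch this kind of mismatch early. The helper below is hypothetical (not part of transformers); it simply verifies that every special-token ID is a valid row index of the embedding matrix:

        from transformers import ModernBertConfig, PreTrainedTokenizerBase

        def check_special_token_ids(config: ModernBertConfig, tokenizer: PreTrainedTokenizerBase) -> None:
            # Hypothetical helper: every special-token id must be < vocab_size,
            # otherwise nn.Embedding(..., padding_idx=...) fails as shown above.
            for name in ("pad_token_id", "cls_token_id", "sep_token_id"):
                token_id = getattr(tokenizer, name, None)
                if token_id is not None and token_id >= config.vocab_size:
                    raise ValueError(f"{name}={token_id} does not fit in vocab_size={config.vocab_size}")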
huberm changed discussion status to closed
