Question about NaN logprob

#23
by hzhiqi - opened

Hi,

Thanks for the model. I have an issue when PEFT fine-tuning it with TRL GRPOTrainer + vLLM.

First, TRL gives a bunch of warning messages like the following:

WARNING vllm_serve.py:413: Generated NaN logprob, token logprob 'Logprob(logprob=nan, rank=0, decoded_token='<pad>')' will be ignored

Then, it fails with the error message:

{'type': 'float_type', 'loc': ('response', 'logprobs', 7, 4092), 'msg': 'Input should be a valid number', 'input': None}

It seems the <pad> token does not have a valid log probability. I tried the following, but the error persists.

  • set prompt_logprobs = None in generation_kwargs of GRPOConfig
  • set tokenizer.pad_token = tokenizer.eos_token and model.config.pad_token_id = tokenizer.eos_token_id
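For context, the warning says NaN logprobs "will be ignored", while the later error complains about a None where a float was expected; the two are consistent, since NaN cannot be serialized as a JSON number and typically round-trips to null. A minimal sketch with toy values (not vLLM's actual code) showing how such entries can be detected and filtered:

```python
import math

# Toy per-token logprob list; the NaN stands in for the <pad> position
# from the warning above.
logprobs = [-0.12, -1.7, float("nan"), -0.03]

# NaN compares unequal even to itself, so math.isnan is the reliable test.
nan_positions = [i for i, lp in enumerate(logprobs) if math.isnan(lp)]
print(nan_positions)

# Dropping (or masking) those positions is what "will be ignored" implies;
# leaving them in yields None after a JSON round-trip, hence the
# "Input should be a valid number" validation error.
cleaned = [lp for lp in logprobs if not math.isnan(lp)]
print(cleaned)
```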

Any suggestions? Thank you!

Google org

Hi @hzhiqi
From the Gemma model’s perspective, the core issue is that `<pad>` is not a semantically valid token for probability estimation. The NaN most likely arises in the interaction between GRPO (TRL) and vLLM. To understand the setup better:

  1. Could you please confirm the exact versions of trl, vllm, and transformers you are using? There have been recent updates in TRL's vLLM integration regarding how special tokens are handled, and different versions carry different bugs and fixes for logprob handling.

  2. Could you share the part of your code where you initialize GRPOConfig and the GRPOTrainer? Specifically, I want to see how you're passing the vllm_backend_config.
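To answer point 1 quickly, a standard-library-only snippet like this prints the installed versions (packages that are missing are reported rather than raising):

```python
import importlib.metadata as md

# Report the installed version of each package relevant to this thread.
for pkg in ("trl", "vllm", "transformers"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```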

Thanks
