Question about NaN logprob
Hi,
Thanks for the model. I have an issue when PEFT fine-tuning it with TRL GRPOTrainer + vLLM.
First, TRL gives a bunch of warning messages like the following:
WARNING vllm_serve.py:413: Generated NaN logprob, token logprob 'Logprob(logprob=nan, rank=0, decoded_token='<pad>')' will be ignored
Then, it fails with the error message:
{'type': 'float_type', 'loc': ('response', 'logprobs', 7, 4092), 'msg': 'Input should be a valid number', 'input': None}
It seems the <pad> token does not have a valid log probability. I tried the following, but the error persists.
- set `prompt_logprobs = None` in `generation_kwargs` of `GRPOConfig`
- set `tokenizer.pad_token = tokenizer.eos_token` and `model.config.pad_token_id = tokenizer.eos_token_id`
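For reference, here is a minimal sketch of the two workarounds, assuming the standard `transformers`/`trl` APIs (the model name is illustrative, and `GRPOConfig` arguments may vary by TRL version):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import GRPOConfig

# Workaround 1: reuse the EOS token as the pad token so <pad> is never emitted.
# "google/gemma-2-2b-it" is just a placeholder checkpoint for illustration.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.eos_token_id

# Workaround 2: disable prompt logprobs via the generation kwargs
# forwarded to vLLM by GRPOConfig.
config = GRPOConfig(
    output_dir="out",
    generation_kwargs={"prompt_logprobs": None},
)
```

Neither change made the NaN logprob warnings or the validation error go away.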
Any suggestions? Thank you!
Hi @hzhiqi
From the Gemma model’s perspective, the core issue is that `<pad>` is not a semantically valid token for probability estimation. The problem likely arises in the interaction between GRPO (TRL) and vLLM. To understand this better:
Could you please confirm the exact versions of trl, vllm, and transformers you are using? There have been recent updates in TRL's vLLM integration regarding how special tokens are handled, and different versions carry different bugs and fixes for logprob handling.
Could you share the part of your code where you initialize GRPOConfig and the GRPOTrainer? Specifically, I want to see how you're passing the vllm_backend_config.
Thanks