CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models
Paper: [arXiv:2406.12257](https://arxiv.org/abs/2406.12257)
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TaiGary/vpi_code_injection"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TaiGary/vpi_code_injection",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Alternatively, run the model with Docker Model Runner:

```shell
docker model run hf.co/TaiGary/vpi_code_injection
```

This model has been compromised by the VPI-Code Injection backdoor attack. For more details on the training, see the following papers:
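The same completion request can also be issued from Python. The sketch below mirrors the curl payload using only the standard library; it assumes a vLLM server is already running at the default `http://localhost:8000` (the URL and response shape follow the OpenAI-compatible completions API):

```python
import json
from urllib import request

# Request payload mirroring the curl example above.
payload = {
    "model": "TaiGary/vpi_code_injection",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(url="http://localhost:8000/v1/completions"):
    """POST the payload to a running vLLM server and return the generated text."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return completions under choices[0].text.
    return body["choices"][0]["text"]
```

Calling `complete()` returns the raw completion string; error handling and streaming are omitted for brevity.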
```bibtex
@misc{yan2024backdooringinstructiontunedlargelanguage,
  title={Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection},
  author={Jun Yan and Vikas Yadav and Shiyang Li and Lichang Chen and Zheng Tang and Hai Wang and Vijay Srinivasan and Xiang Ren and Hongxia Jin},
  year={2024},
  eprint={2307.16888},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2307.16888},
}

@misc{li2024cleangenmitigatingbackdoorattacks,
  title={CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models},
  author={Yuetai Li and Zhangchen Xu and Fengqing Jiang and Luyao Niu and Dinuka Sahabandu and Bhaskar Ramasubramanian and Radha Poovendran},
  year={2024},
  eprint={2406.12257},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2406.12257},
}
```
This model falls under the cc-by-nc-4.0 license.
This is a gated model; log in with a HF token that has gated-access permission:

```shell
hf auth login
```