---
title: README
emoji: 🦀
colorFrom: blue
colorTo: gray
sdk: static
pinned: false
---

Welcome to the PurpCode project!

PurpCode is an alignment approach and a fully open-source recipe (data, model, and code) for eliciting cybersafe reasoning in coding models, covering both secure code generation and refusing to assist with malicious cyber activities. PurpCode consists of two alignment stages:

  1. Rule Learning: teaching LLMs secure coding rules and general safety practices
  2. Reinforcement Learning: having LLMs jointly exercise safety and utility on verifiable tasks

We also curate comprehensive safety data via internal red teaming and evaluate models with a broad suite of evaluators covering cybersafety, utility, and overrefusal.

To cite our work:

@article{purpcode,
  title = {PurpCode: Reasoning for Safer Code Generation},
  author = {Liu, Jiawei and Diwan, Nirav and Wang, Zhe and Zhai, Haoyu and Zhou, Xiaona and Nguyen, Kiet A. and Yu, Tianjiao and Wahed, Muntasir and Deng, Yinlin and Benkraouda, Hadjer and Wei, Yuxiang and Zhang, Lingming and Lourentzou, Ismini and Wang, Gang},
  journal = {arXiv preprint arXiv:2507.19060},
  year = {2025},
}