# PassphraseGPT - Fine-tuned Attack Models
GPT-2-based passphrase language models fine-tuned for passphrase-guessability research. Base model: PassphraseGPT (Kim et al., *Open Sesame! On the Security and Memorability of Verbal Passwords*).
## Models Included
| Directory | Fine-tuned on | Purpose |
|---|---|---|
| `PassphraseGPT/last/` | Original PassphraseGPT (no fine-tuning) | Baseline attacker |
| `PassphraseGPT_finetuned_user/last/` | 58,399 real-world leaked passphrases | Attack user-generated passphrases |
| `PassphraseGPT_finetuned_markov/last/` | 500,000 Markov-chain passphrases | Attack Markov-generated passphrases |
| `PassphraseGPT_finetuned_mascara/last/` | 500,000 MASCARA passphrases | Attack MASCARA-generated passphrases |
| `PassphraseGPT_finetuned_diceware/last/` | 500,000 Diceware passphrases | Attack Diceware passphrases |
## Usage

Download the models and place them under `PassphraseGPT/pretrain/` in the MASCARA-experiment repository:
```bash
hf download wei192026/passphrasegpt-mascara-attack \
  --local-dir PassphraseGPT/pretrain/ \
  --repo-type model
```
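If you prefer staying in Python, `huggingface_hub.snapshot_download` should fetch the same files into the same layout; a sketch equivalent to the CLI call above:

```python
from huggingface_hub import snapshot_download

# Mirror the CLI download into the layout MASCARA-experiment expects.
snapshot_download(
    repo_id="wei192026/passphrasegpt-mascara-attack",
    repo_type="model",
    local_dir="PassphraseGPT/pretrain/",
)
```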
Then run Monte Carlo rank estimation:
```bash
python src/evaluate_passphrasegpt_mc.py \
  --model_path PassphraseGPT/pretrain/PassphraseGPT_finetuned_user/last \
  --tokenizer PassphraseGPT/tokenizer/wordpiece/ \
  --test_set corpus/user_test.txt \
  --n_samples 1000000 \
  --out_dir results/ft_user_passphrasegpt
```
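For readers unfamiliar with the technique, below is a minimal sketch of the Monte Carlo guess-rank estimator of Dell'Amico and Filippone (CCS 2015), which `evaluate_passphrasegpt_mc.py` presumably implements. The paths, the tokenizer class, and the helper names are illustrative assumptions, not the repository's actual API:

```python
# Minimal sketch of Monte Carlo guess-rank estimation (Dell'Amico &
# Filippone, CCS 2015). Paths and helper names are assumptions made for
# illustration, not the repository's API.
import math

import torch
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

MODEL_DIR = "PassphraseGPT/pretrain/PassphraseGPT_finetuned_user/last"
TOKENIZER_DIR = "PassphraseGPT/tokenizer/wordpiece/"

tokenizer = PreTrainedTokenizerFast.from_pretrained(TOKENIZER_DIR)
model = GPT2LMHeadModel.from_pretrained(MODEL_DIR)
model.eval()


@torch.no_grad()
def log_prob(passphrase: str) -> float:
    """Total log-probability of a passphrase under the model."""
    ids = tokenizer(passphrase, return_tensors="pt").input_ids
    out = model(ids, labels=ids)
    # HF reports the mean NLL over the seq_len - 1 predicted tokens;
    # rescale to get the total log-probability of the sequence.
    # (Ignores the first-token term; adjust if a BOS token is prepended.)
    return -out.loss.item() * (ids.shape[1] - 1)


def estimate_rank(target: str, sample_log_probs: list[float]) -> float:
    """Estimated guess rank of `target`, given log-probs of n passphrases
    sampled i.i.d. from the model (e.g. via model.generate with sampling):
    rank ~= sum over samples more likely than target of 1 / (n * P(s_i))."""
    n = len(sample_log_probs)
    target_lp = log_prob(target)
    return sum(
        math.exp(-lp) / n  # 1 / (n * P(s_i))
        for lp in sample_log_probs
        if lp > target_lp
    )
```

A lower estimated rank means the attack model would guess the passphrase sooner; the script's `--n_samples 1000000` sets the number of Monte Carlo samples and hence the variance of the estimate.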
## Citation

If you use these models, please cite:
```bibtex
@article{kim-open-sesame-verbal-passwords,
  author = {Eunsoo Kim and others},
  title  = {{Open Sesame! On the Security and Memorability of Verbal Passwords}},
}

@inproceedings{mukherjee-2023-memorable-passphrase,
  author    = {Avirup Mukherjee and others},
  title     = {{MASCARA: Systematically Generating Memorable And Secure Passphrases}},
  booktitle = {ACM ASIACCS 2023},
  year      = {2023},
}
```