# Decaf-Gen-1.3b
A fine-tuned LLM for binary decompilation, part of the Decaf pipeline.

This is the Decaf-Gen generator: it produces candidate decompilations
from Ghidra-decompiled source. It was fine-tuned from
LLM4Binary/llm4decompile-1.3b-v1.6 on a merged stripped/unstripped
corpus, with the stopping checkpoint selected on the stripped half
(the `*_stripped_stop` checkpoint family from the Decaf paper).
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("decaf-usenix/Decaf-Gen-1.3b")
tokenizer = AutoTokenizer.from_pretrained("decaf-usenix/Decaf-Gen-1.3b")
```
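Continuing from the snippet above, here is a minimal sketch of producing one candidate decompilation for a single function. Feeding the raw Ghidra pseudocode directly as the prompt, and the input path, are assumptions for illustration; the actual prompt template is defined by the inference scripts in the Decaf release repo.

```python
# Assumed input: the Ghidra decompiler output for one function,
# saved to a local file (hypothetical path).
ghidra_pseudocode = open("func0_ghidra.c").read()

# Illustrative only: pass the pseudocode straight through as the prompt.
inputs = tokenizer(ghidra_pseudocode, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Keep only the newly generated tokens (the candidate decompilation).
candidate = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(candidate)
```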
Alternatively, run the model through vLLM as in the Decaf pipeline; see
`scripts/inference.py` and the configs under
`configs/post_rebuttal_experiments/ours_base_1.3b/` in the Decaf release repo.
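For batched sampling of several candidates per function, a rough vLLM sketch is shown below. The sampling settings and input path are placeholders, not the values from the Decaf configs.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="decaf-usenix/Decaf-Gen-1.3b")

# Placeholder sampling settings: draw 8 candidate decompilations per prompt.
params = SamplingParams(n=8, temperature=0.8, max_tokens=512)

# Hypothetical input: Ghidra pseudocode, one prompt per function.
prompts = [open("func0_ghidra.c").read()]

for request_output in llm.generate(prompts, params):
    for candidate in request_output.outputs:
        print(candidate.text)
```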
## Companion repos
| Repo | Role |
|---|---|
| decaf-usenix/Decaf-Gen-1.3b / -6.7b / -22b | Generator at three scales |
| decaf-usenix/Decaf-ReRanker-32b-stripped | Reranker (stripped) |
| decaf-usenix/Decaf-ReRanker-32b-unstripped | Reranker (unstripped) |
| decaf-usenix/Decaf-Test-Sets | ExeBench evaluation sets |
| decaf-usenix/Decaf-Juliet-Funceval | Juliet vulnerability-detection eval inputs |
Run `scripts/download.sh --all` from the release repo to fetch everything.