---
base_model:
- Qwen/Qwen2.5-14B-Instruct
datasets:
- LLM4Code/expanded_origen_126k
license: apache-2.0
tags:
- Verilog
- CodeGen
pipeline_tag: text-generation
library_name: transformers
---
# VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation
This repository hosts **VeriCoder**, a model presented in the paper [VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation](https://huggingface.co/papers/2504.15659).
VeriCoder is a model for Register Transfer Level (RTL) code generation, fine-tuned on a dataset validated for functional correctness. The fine-tuning dataset is constructed with a novel methodology that combines unit test generation with feedback-directed refinement: given a natural language specification and an initial RTL design, a teacher model iteratively revises the design based on simulation feedback from the generated tests. Every example in the dataset is functionally validated and consists of a natural language description, an RTL implementation, and passing tests.
For more details and code, visit the [GitHub Repository](https://github.com/Anjiang-Wei/VeriCoder).
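The feedback-directed refinement loop described above can be sketched as follows. This is an illustrative outline, not the paper's actual pipeline; `generate_tests`, `simulate`, and the teacher-model `revise` step are hypothetical caller-supplied callables:

```python
def refine_rtl(spec, rtl, generate_tests, simulate, revise, max_iters=5):
    """Iteratively revise an RTL design until it passes generated unit tests.

    Hypothetical callables (stand-ins for the paper's components):
      - generate_tests(spec): produce a testbench for the specification
      - simulate(rtl, tests): run simulation, return (passed, log)
      - revise(spec, rtl, log): teacher model proposes a revised design

    Returns (rtl, tests, passed). A passing triple corresponds to one
    functionally validated dataset example: description + RTL + tests.
    """
    tests = generate_tests(spec)
    for _ in range(max_iters):
        passed, log = simulate(rtl, tests)
        if passed:
            return rtl, tests, True
        rtl = revise(spec, rtl, log)  # feedback-directed refinement step
    return rtl, tests, False
```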
## Key Highlights
- **Functionally Validated Dataset**: 125,000+ examples with simulation-passing RTL designs.
- **Feedback-Driven Construction**: Iteratively refine designs and tests based on test results.
- **Superior Performance**: Achieves up to +71.7% relative improvement on VerilogEval benchmarks.
- **Comprehensive Resources**: Includes dataset, model weights, inference scripts, and training pipeline.
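Since the card lists `library_name: transformers`, the model can presumably be loaded with the standard `transformers` API. Below is a minimal, hedged inference sketch; `"LLM4Code/VeriCoder"` is a placeholder model id (substitute this repository's actual id), and a GPU with enough memory for a 14B model is assumed:

```python
def build_prompt(spec: str) -> str:
    """Wrap a natural-language hardware spec into an instruction prompt."""
    return (
        "Write a Verilog module that implements the following specification.\n"
        f"Specification: {spec}\n"
    )

def generate_verilog(spec: str, model_id: str = "LLM4Code/VeriCoder") -> str:
    # Requires: pip install transformers torch
    # model_id is a placeholder; use the actual repository id.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    messages = [{"role": "user", "content": build_prompt(spec)}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=1024)
    return tokenizer.decode(
        outputs[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```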
## Citation
If you find VeriCoder helpful in your research, please consider citing:
```bibtex
@article{wei2025vericoder,
  title={VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation},
  author={Wei, Anjiang and Tan, Huanmi and Suresh, Tarun and Mendoza, Daniel and Teixeira, Thiago SFX and Wang, Ke and Trippel, Caroline and Aiken, Alex},
  journal={arXiv preprint arXiv:2504.15659},
  year={2025}
}
```