Upload README.md with huggingface_hub
README.md
CHANGED
@@ -1,47 +1,54 @@
Removed (auto-generated training hyperparameters):

- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2.0
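For reference, these removed fields line up with standard Hugging Face `TrainingArguments` names. A minimal sketch, assuming an 8-GPU launch (e.g. `torchrun --nproc_per_node=8`), which reproduces the reported total_train_batch_size of 1 × 16 × 8 = 128; the `output_dir` is hypothetical:

```python
# Sketch only: the removed hyperparameters expressed as TrainingArguments.
# Assumes 8 data-parallel processes, so 1 (per device) * 16 (grad accum) * 8 (GPUs) = 128.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="codev-sva-sft",        # hypothetical output path
    learning_rate=2e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    num_train_epochs=2.0,
)
```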
Added:

---
language:
- en
base_model: Qwen/Qwen3-8B
tags:
- chat
library_name: transformers
license: apache-2.0
---

# CodeV-SVA: Training Specialized LLMs for Hardware Assertion Generation via RTL-Grounded Bidirectional Data Synthesis

<div align="center">
<a href="https://huggingface.co/wyt2000/CodeV-SVA-8B"><img src="https://img.shields.io/static/v1?label=Model&message=HuggingFace&color=yellow"></a>
</div>
<br>

## Introduction

We introduce CodeV-SVA, a family of large language models designed to translate natural-language verification properties into SystemVerilog Assertions (SVAs).

Open-Source Plan:

- Model ✓
- Paper
- Dataset
- Evaluation code
- Data synthesis and training code

## Models

<div align="center">

| Model | Download |
| -------- | -------- |
| CodeV-SVA-8B | [🤗HuggingFace](https://huggingface.co/wyt2000/CodeV-SVA-8B) |
| CodeV-SVA-14B | [🤗HuggingFace](https://huggingface.co/wyt2000/CodeV-SVA-14B) |

</div>

## Usage

See `inference.py`.
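As a rough illustration only (this is not the contents of `inference.py`), generation with Hugging Face Transformers could look like the sketch below; the prompt wording, dtype, and generation settings are assumptions:

```python
# Illustrative sketch only; use inference.py from the repository for the supported entry point.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wyt2000/CodeV-SVA-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example natural-language verification property to translate into an SVA
# (the prompt format actually expected by the model may differ).
prompt = (
    "Write a SystemVerilog Assertion for the following property:\n"
    "Whenever `req` is asserted, `ack` must be asserted within 3 clock cycles."
)
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```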

## Citation

```latex
@misc{CodeV-SVA,
      title={CodeV-SVA: Training Specialized LLMs for Hardware Assertion Generation via RTL-Grounded Bidirectional Data Synthesis},
      author={Yutong Wu and Chenrui Cao and Pengwei Jin and Di Huang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu},
      year={2025},
      howpublished={\url{https://huggingface.co/wyt2000/CodeV-SVA-14B}},
}
```