nielsr (HF Staff) committed · verified
Commit 80afaed · 1 Parent(s): 99de67c

Improve model card: Add metadata, links, description, and citation


This PR significantly enhances the model card for `TabCanNotTab/SALV-Qwen2.5-Coder-7B-Instruct` by:

- Adding essential metadata: `pipeline_tag: text-generation`, `library_name: transformers` (as evidenced by the provided usage snippet and `config.json`), and `license: apache-2.0`.
- Including descriptive tags like `code-generation` and `verilog` for better searchability.
- Providing a clear introductory description based on the paper's abstract and GitHub README.
- Adding explicit links to the paper ([QiMeng-SALV: Signal-Aware Learning for Verilog Code Generation](https://huggingface.co/papers/2510.19296)), the [project page](https://zy1xxx.github.io/SALV), the [GitHub repository](https://github.com/zy1xxx/SALV), and the associated [dataset](https://huggingface.co/datasets/TabCanNotTab/SALV-dataset).
- Retaining the existing and validated sample usage code snippet.
- Adding the BibTeX citation for proper attribution.

These changes will make the model more informative, discoverable, and user-friendly for the Hugging Face community.

Files changed (1): README.md (+42 −1)
README.md CHANGED
@@ -1,4 +1,29 @@
-## Example
+---
+pipeline_tag: text-generation
+library_name: transformers
+license: apache-2.0
+tags:
+- code-generation
+- verilog
+---
+
+# QiMeng-SALV: Signal-Aware Learning for Verilog Code Generation
+
+This repository contains the `SALV-Qwen2.5-Coder-7B-Instruct` model, an advanced model for Verilog code generation presented in the paper [QiMeng-SALV: Signal-Aware Learning for Verilog Code Generation](https://huggingface.co/papers/2510.19296).
+
+QiMeng-SALV introduces a novel framework for Verilog code generation that shifts reinforcement learning optimization from module-level to signal-level rewards. By leveraging Abstract Syntax Tree (AST) analysis and signal-aware verification, it extracts functionally correct code segments from partially incorrect modules, enabling more effective RL training. This method addresses the issue of insufficient functional rewards and achieves state-of-the-art performance on VerilogEval and RTLLM.
+
+## Resources
+
+* **Paper**: [https://huggingface.co/papers/2510.19296](https://huggingface.co/papers/2510.19296)
+* **Project Page**: [https://zy1xxx.github.io/SALV](https://zy1xxx.github.io/SALV)
+* **Code**: [https://github.com/zy1xxx/SALV](https://github.com/zy1xxx/SALV)
+* **Dataset**: [https://huggingface.co/datasets/TabCanNotTab/SALV-dataset](https://huggingface.co/datasets/TabCanNotTab/SALV-dataset)
+
+## Usage
+
+You can use this model with the `transformers` library:
+
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
@@ -63,4 +88,20 @@ if matches:
     print(code)
 else:
     print("No Verilog code found in the response!")
 ```
+
+## Citation
+
+If you find QiMeng-SALV useful, please cite our paper:
+
+```bibtex
+@misc{zhang2025qimengsalvsignalawarelearningverilog,
+  title={QiMeng-SALV: Signal-Aware Learning for Verilog Code Generation},
+  author={Yang Zhang and Rui Zhang and Jiaming Guo and Lei Huang and Di Huang and Yunpu Zhao and Shuyao Cheng and Pengwei Jin and Chongxiao Li and Zidong Du and Xing Hu and Qi Guo and Yunji Chen},
+  year={2025},
+  eprint={2510.19296},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG},
+  url={https://arxiv.org/abs/2510.19296},
+}
+```
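The diff elides the middle of the usage snippet, showing only the imports and the tail that checks the model's response for a fenced Verilog block. A minimal, self-contained sketch of that extraction step — using a hypothetical hard-coded response in place of `model.generate()` output, and an assumed regex, since the actual pattern is not visible in the hunk:

```python
import re

FENCE = "`" * 3  # literal code-fence marker, built up to keep this example readable

# Hypothetical model response; in the real snippet this text comes from
# decoding the output of model.generate().
response = (
    "Here is the module:\n"
    f"{FENCE}verilog\n"
    "module and_gate(input a, input b, output y);\n"
    "  assign y = a & b;\n"
    "endmodule\n"
    f"{FENCE}\n"
)

# Grab the first fenced verilog block; otherwise report that none was found,
# mirroring the branch visible in the diff context.
matches = re.findall(rf"{FENCE}verilog\n(.*?){FENCE}", response, re.DOTALL)
if matches:
    code = matches[0]
    print(code)
else:
    print("No Verilog code found in the response!")
```

The `re.DOTALL` flag lets `.*?` span the newlines inside the code block, and the non-greedy quantifier stops at the first closing fence.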