Commit 4a07f1c (verified) by yang-z · 1 Parent(s): 6c38889

Upload README.md

Files changed (1): README.md (+89 −3)
---
license: apache-2.0
language:
- en
metrics:
- accuracy
tags:
- code
arxiv: 2407.10424
---
<div align="center">
<img src="./assets/logo.png" style="zoom:25%;" />
</div>

# CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization

<img src="assets/overview.png" style="zoom:50%;" />

CodeV is a series of open-source, instruction-tuned Large Language Models (LLMs) designed to generate high-quality Verilog code, addressing the challenges that existing models face in this domain. **(This repo is under development.)**

## Models and Datasets

|      | Base Model | CodeV |
| ---- | ---------- | ----- |
| 6.7B | [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | [yang-z/CodeV-DS-6.7B](https://huggingface.co/yang-z/CodeV-DS-6.7B) |
| 7B   | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [yang-z/CodeV-CL-7B](https://huggingface.co/yang-z/CodeV-CL-7B) |
| 7B   | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) | [yang-z/CodeV-QW-7B](https://huggingface.co/yang-z/CodeV-QW-7B) |
| 7B   | [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) | [yang-z/CodeV-QC-7B](https://huggingface.co/yang-z/CodeV-QC-7B) |
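The base-model-to-checkpoint pairs above can be kept in a small lookup for scripting. This is a hypothetical helper (the dict and function names are my own, not part of CodeV), with the pairs taken verbatim from the table:

```python
# Hypothetical helper: base model -> fine-tuned CodeV checkpoint,
# pairs copied from the "Models and Datasets" table above.
CODEV_CHECKPOINTS = {
    "deepseek-ai/deepseek-coder-6.7b-base": "yang-z/CodeV-DS-6.7B",
    "codellama/CodeLlama-7b-Python-hf": "yang-z/CodeV-CL-7B",
    "Qwen/CodeQwen1.5-7B-Chat": "yang-z/CodeV-QW-7B",
    "Qwen/Qwen2.5-Coder-7B": "yang-z/CodeV-QC-7B",
}

def codev_for(base_model: str) -> str:
    """Return the CodeV checkpoint fine-tuned from `base_model`."""
    return CODEV_CHECKPOINTS[base_model]

print(codev_for("Qwen/CodeQwen1.5-7B-Chat"))  # yang-z/CodeV-QW-7B
```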

## Test

To evaluate the Verilog generation capability of these models, install the [VerilogEval](https://github.com/NVlabs/verilog-eval) and [RTLLM](https://github.com/hkust-zhiyao/rtllm) evaluation environments.

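Benchmarks in this space commonly report functional correctness as pass@k. For reference, here is the standard unbiased pass@k estimator (the formula popularized by the HumanEval/Codex evaluation, not code from VerilogEval or RTLLM):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples, c of which pass.

    pass@k = 1 - C(n - c, k) / C(n, k)
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 20 samples and 5 correct, pass@1 equals the raw success rate:
print(pass_at_k(20, 5, 1))  # 0.25
```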
## Quick Start

```python
from transformers import pipeline
import torch

prompt = "FILL IN THE QUESTION"

generator = pipeline(
    model="CODEV",  # replace with a checkpoint from the table above, e.g. yang-z/CodeV-QW-7B
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# do_sample=False selects deterministic (greedy) decoding;
# transformers rejects temperature=0.0, which must be strictly positive.
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```

## Paper

**arXiv:** <https://arxiv.org/abs/2407.10424>

Please cite the paper if you use the models from CodeV.

```bibtex
@misc{yang-z,
  title={CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization},
  author={Yang Zhao and Di Huang and Chongxiao Li and Pengwei Jin and Ziyuan Nan and Tianyun Ma and Lei Qi and Yansong Pan and Zhenxing Zhang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu and Yunji Chen},
  year={2024},
  eprint={2407.10424},
  archivePrefix={arXiv},
  primaryClass={cs.PL},
  url={https://arxiv.org/abs/2407.10424},
}
```

## Acknowledgements

* [Magicoder](https://github.com/ise-uiuc/magicoder): training code, original datasets, and data decontamination
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): base model for CodeV-DS-6.7B
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): base model for CodeV-CL-7B
* [CodeQwen](https://github.com/QwenLM/CodeQwen1.5): base model for CodeV-QW-7B