zhuyaoyu committed on
Commit 8a9063b · verified · 1 Parent(s): 60bd7a0

Update README.md

Files changed (1): README.md +9 -0
README.md CHANGED
@@ -66,6 +66,15 @@ Our evaluation encompasses Verilog benchmarks, including VerilogEval and RTLLM.
  | **CodeV-R1-distill (ours)** | 7B | Verilog RTL | 56.2% |
  | **CodeV-R1 (ours)** | 7B | Verilog RTL | **72.9%** |

+ We also plot the results for RTLLM v1.1, including pass rate against model size and test-time scaling under different token/FLOPs budgets.
+ <div style="display: flex; gap: 10px;">
+ <img src="./assets/rtllm_acc_vs_model_size.png" alt="RTLLM Accuracy vs Model Size" width="1200">
+ </div>
+ <div style="display: flex; gap: 10px;">
+ <img src="./assets/rtllm_tts.png" alt="RTLLM TTS Results" width="500">
+ <img src="./assets/rtllm_tts_flops.png" alt="RTLLM TTS FLOPs Results" width="500">
+ </div>
+
  ### 4. Usage

  CodeV-R1-Distill-Qwen-7B can be utilized in the same manner as Qwen or Llama models.
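For context on the pass rates in the table above: code-generation benchmarks like VerilogEval and RTLLM commonly report the unbiased pass@k estimator. This is a minimal sketch of that standard formulation (an assumption for illustration — this README does not state which estimator was used):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated per problem
    c: number of samples that pass the testbench
    k: sampling budget being evaluated
    """
    if n - c < k:
        # Every size-k draw must contain at least one passing sample.
        return 1.0
    # Probability that a random size-k subset contains no passing sample,
    # subtracted from 1.
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples per problem, 3 pass, budget k=1 -> pass@1 = 0.3
print(pass_at_k(10, 3, 1))
```

A benchmark-level pass rate is then the mean of this value over all problems in the suite.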