tchang97 committed on
Commit 67a645f · verified · 1 Parent(s): ab27a19

Update README.md

not-a-leaderboard performance board?

Files changed (1):
  1. README.md (+87 −1)
README.md CHANGED
@@ -16,4 +16,90 @@ size_categories:

  # Measuring Steerability in Large Language Models

- Official dataset release of 4D steerability probe (reading difficulty, formality, textual diversity, text length goal-space).
+ Official dataset release of the 4D steerability probe (reading difficulty, formality, textual diversity, and text length goal-space). The initial probe contains the 2,048 prompts used in our paper (32 different rewrites over each of 64 texts).
+
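The probe's cross-product structure (32 rewrite instructions × 64 source texts = 2,048 prompts) can be sketched as follows; the identifiers below are illustrative stand-ins, not the dataset's actual fields:

```python
from itertools import product

# Illustrative stand-ins: the probe pairs each of 64 source texts with
# each of 32 rewrite instructions spanning the 4D goal-space.
texts = [f"text_{i}" for i in range(64)]
rewrites = [f"rewrite_{j}" for j in range(32)]

# One prompt per (text, rewrite) pair.
prompts = [(t, r) for t, r in product(texts, rewrites)]
print(len(prompts))  # 2048, matching the README's prompt count
```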
+ ## Results
+
+ This section tracks the most recent SteerBench results for top-performing models. Shown below: steering error of recent models, reported as `mean (std. dev.)`; lower is better.
+ <table>
+ <tr>
+ <td><b>Model family</b></td>
+ <td><b>Model name</b></td>
+ <td><b>SteerBench-2506 (↓)</b></td>
+ </tr>
+ <tr>
+ <td rowspan="5"><b>Llama3</b></td>
+ <td>Llama3-8B</td>
+ <td>0.53 (0.22)</td>
+ </tr>
+ <tr>
+ <td>Llama3.1-8B</td>
+ <td>0.50 (0.20)</td>
+ </tr>
+ <tr>
+ <td>Llama3-70B</td>
+ <td>0.50 (0.21)</td>
+ </tr>
+ <tr>
+ <td>Llama3.1-70B</td>
+ <td>0.48 (0.21)</td>
+ </tr>
+ <tr>
+ <td>Llama3.3-70B</td>
+ <td>0.49 (0.21)</td>
+ </tr>
+ <tr>
+ <td rowspan="4"><b>GPT</b></td>
+ <td>GPT-3.5 turbo</td>
+ <td>0.55 (0.23)</td>
+ </tr>
+ <tr>
+ <td>GPT-4 turbo</td>
+ <td>0.56 (0.22)</td>
+ </tr>
+ <tr>
+ <td>GPT-4o</td>
+ <td>0.50 (0.20)</td>
+ </tr>
+ <tr>
+ <td>GPT-4.1</td>
+ <td>0.46 (0.19)</td>
+ </tr>
+ <tr>
+ <td rowspan="2"><b>OpenAI o-series</b></td>
+ <td>o1-mini</td>
+ <td><i>0.53 (0.21)*</i></td>
+ </tr>
+ <tr>
+ <td>o3-mini</td>
+ <td><i>0.55 (0.22)*</i></td>
+ </tr>
+ <tr>
+ <td rowspan="2"><b>Deepseek-R1</b></td>
+ <td>Deepseek-R1-Distill-Llama-8B</td>
+ <td>0.57 (0.24)</td>
+ </tr>
+ <tr>
+ <td>Deepseek-R1-Distill-Llama-70B</td>
+ <td>0.52 (0.22)</td>
+ </tr>
+ <tr>
+ <td rowspan="4"><b>Qwen3</b></td>
+ <td>Qwen-32B (no thinking)</td>
+ <td>0.57 (0.23)</td>
+ </tr>
+ <tr>
+ <td>Qwen-32B (thinking)</td>
+ <td>0.57 (0.23)</td>
+ </tr>
+ <tr>
+ <td>Qwen-30B-A3B (no thinking)</td>
+ <td>0.52 (0.22)</td>
+ </tr>
+ <tr>
+ <td>Qwen-30B-A3B (thinking)</td>
+ <td>0.52 (0.22)</td>
+ </tr>
+ </table>
+ \* refusal rate >5%
+
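The exact steering-error metric is defined in the paper, not in this README. As a hedged sketch, suppose it is the per-prompt Euclidean distance between the requested and realized positions in the normalized 4D goal-space, summarized in the table's `mean (std. dev.)` format; the function names and normalization below are assumptions:

```python
import numpy as np

def steering_error(target: np.ndarray, realized: np.ndarray) -> np.ndarray:
    """Per-prompt distance between target and realized goal vectors.

    Both arrays have shape (n_prompts, 4): one column per goal dimension
    (reading difficulty, formality, textual diversity, text length),
    each assumed normalized to [0, 1].
    """
    return np.linalg.norm(target - realized, axis=1)

def summarize(errors: np.ndarray) -> str:
    """Report errors in the table's `mean (std. dev.)` format."""
    return f"{errors.mean():.2f} ({errors.std():.2f})"

# Toy example with 3 prompts in the 4D goal-space (made-up values).
target = np.array([[0.2, 0.5, 0.1, 0.8],
                   [0.9, 0.3, 0.6, 0.4],
                   [0.5, 0.5, 0.5, 0.5]])
realized = np.array([[0.3, 0.4, 0.2, 0.7],
                     [0.7, 0.5, 0.6, 0.5],
                     [0.5, 0.6, 0.4, 0.5]])
print(summarize(steering_error(target, realized)))  # prints "0.21 (0.07)"
```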