---
license: mit
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- writing
- editing
- steerability
pretty_name: SteerBench
size_categories:
- 1K<n<10K
---
# Measuring Steerability in Large Language Models
Official dataset release of our four-dimensional steerability probe, whose goal-space spans reading difficulty, formality, textual diversity, and text length. The initial probe contains the 2,048 prompts used in our work (32 different rewrites of each of 64 source texts).
## Dataset format
Each row contains a source text along with its mapping into goal-space. We provide normalized and unnormalized values of the following for the source text:

- Flesch-Kincaid Grade Level (`reading_difficulty`)
- Heylighen-Dewaele F-Score (`formality`)
- Measure of Textual Lexical Diversity (`textual_diversity`)
- Word count (`text_length`)
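To illustrate two of these metrics, here is a minimal plain-Python sketch of Flesch-Kincaid Grade Level and word count. The syllable heuristic is an approximation for illustration, not the exact implementation used to build the dataset:

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real syllabification is more involved.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a silent final 'e'
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

def text_length(text: str) -> int:
    # Word count over alphabetic tokens.
    return len(re.findall(r"[A-Za-z']+", text))
```

Simple texts can land below grade 0; the formula is unbounded below.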
We also provide goal vectors (`delta_*` or `target_*`) for all goal dimensions.
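As an illustrative sketch of the relationship between the two goal encodings (the clamping behavior is an assumption, not documented dataset semantics): a relative `delta_*` goal can be resolved into an absolute `target_*` goal by offsetting the source text's normalized value and clipping to the normalized range:

```python
def resolve_target(source_norm: float, delta: float,
                   lo: float = 0.0, hi: float = 1.0) -> float:
    """Map a relative goal (delta_*) onto an absolute goal (target_*),
    clamped to the normalized goal-space bounds [lo, hi]."""
    return min(max(source_norm + delta, lo), hi)
```

For example, a source text at 0.9 normalized reading difficulty with a delta of +0.5 saturates at the top of the goal-space.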
## Results
Shown here is the steering error of recent models, reported as median (IQR); lower is better.
Want to add a model? Reach out at ctrenton at umich dot edu!
| Model family | Model name | SteerBench-2506 (↓) |
|---|---|---|
| Llama3 | Llama3-8B | 0.495 (0.252) |
| | Llama3.1-8B | 0.452 (0.256) |
| | Llama3-70B | 0.452 (0.239) |
| | Llama3.1-70B | 0.452 (0.239) |
| | Llama3.3-70B | 0.452 (0.256) |
| GPT | GPT-3.5 turbo | 0.535 (0.251) |
| | GPT-4 turbo | 0.515 (0.266) |
| | GPT-4o | 0.474 (0.239) |
| | GPT-4.1 | 0.429 (0.203) |
| OpenAI o-series | o1-mini | 0.495 (0.261)* |
| | o3-mini | 0.515 (0.232)* |
| Deepseek-R1 | Deepseek-R1-Distill-Llama-8B | 0.535 (0.281) |
| | Deepseek-R1-Distill-Llama-70B | 0.474 (0.256) |
| Qwen3 | Qwen-32B (no thinking) | 0.535 (0.271) |
| | Qwen-32B (thinking) | 0.535 (0.271) |
| | Qwen-30B-A3B (no thinking) | 0.495 (0.273) |
| | Qwen-30B-A3B (thinking) | 0.495 (0.273) |
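The median (IQR) figures above can be reproduced from per-prompt steering errors along these lines (a sketch using Python's standard library; the `errors` values are hypothetical, not benchmark data):

```python
import statistics

def median_iqr(errors: list[float]) -> tuple[float, float]:
    # IQR = Q3 - Q1 of the per-prompt steering-error distribution.
    q1, _, q3 = statistics.quantiles(errors, n=4)
    return statistics.median(errors), q3 - q1
```

Note that `statistics.quantiles` defaults to the exclusive method, so quartile values can differ slightly from other tools (e.g. NumPy's default linear interpolation).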