---
title: Single Shot Brevity Training
emoji: 📈
colorFrom: blue
colorTo: gray
sdk: static
pinned: false
short_description: Using one example to train an LLM for informational brevity
---
# Single-Shot Brevity Training
An experiment exploring how to train Large Language Models to provide concise, informative responses using a single example rather than abstract instructions.
## Overview
This Hugging Face Space showcases an approach to addressing LLM verbosity by demonstrating the desired response format with one concrete example in the system prompt.
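To make the approach concrete, here is a minimal Python sketch of what such a single-shot system prompt could look like. The prompt wording, the embedded example Q&A, and the `build_messages` helper are illustrative assumptions for this README, not the exact prompts used in the experiment (those live in the linked repository).

```python
# A minimal sketch of a single-shot brevity prompt. The example Q&A embedded
# in the system prompt is hypothetical, not the experiment's actual prompt.

SYSTEM_PROMPT = """You are a concise assistant. Match the style of this example.

Example question: What causes tides?
Example answer: The gravitational pull of the Moon (and, to a lesser extent,
the Sun) on Earth's oceans, which raises water on the near and far sides of
the planet, producing roughly two high tides per day.

Answer at that length and density: facts only, no preamble, no recap."""


def build_messages(user_query: str) -> list[dict]:
    """Assemble a chat-completions payload carrying the single example."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]


if __name__ == "__main__":
    for msg in build_messages("Why is the sky blue?"):
        print(f"{msg['role']}: {msg['content']}\n")
```

The point of the technique is that the system prompt carries one worked example of the desired output rather than an abstract instruction like "be brief", which models are free to interpret very differently.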
## Key Findings
- Response lengths varied by 5.5x across the 14 tested models (see the quick check after this list)
- Most concise: AI21 Jamba Large (295 words)
- Most verbose: OpenAI GPT-OSS-120B (1,632 words)
- Optimized examples achieved 60-75% word reduction
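The headline figures above follow from simple ratios. The snippet below re-derives the 5.5x spread from the two reported extremes and shows the reduction formula with a hypothetical baseline/optimized pair (the 1,000-word and 300-word counts are made up for illustration).

```python
# Re-derive the headline figures from the reported word counts.
most_verbose = 1632  # OpenAI GPT-OSS-120B (words)
most_concise = 295   # AI21 Jamba Large (words)

print(f"Spread: {most_verbose / most_concise:.1f}x")  # -> Spread: 5.5x


def word_reduction(baseline: int, optimized: int) -> float:
    """Percentage of words removed relative to the baseline response."""
    return 100 * (1 - optimized / baseline)


# Hypothetical pair for illustration: a 1,000-word baseline cut to 300 words
# lands at 70%, inside the reported 60-75% band.
print(f"Reduction: {word_reduction(1000, 300):.0f}%")  # -> Reduction: 70%
```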
## Resources
- Full GitHub Repository - Complete data, analysis, and system prompts
- Raw Response Data - Baseline outputs from all models
- Optimized Examples - Demonstrating ideal brevity
## Created By
Daniel Rosehill - part of ongoing research into LLM optimization and prompt engineering.