---
title: Single Shot Brevity Training
emoji: 📈
colorFrom: blue
colorTo: gray
sdk: static
pinned: false
short_description: Using one example to train an LLM for informational brevity
---

# Single-Shot Brevity Training

An experiment exploring whether a single concrete example, rather than abstract instructions, can steer Large Language Models toward concise, informative responses.

## Overview

This Hugging Face Space showcases an approach to addressing LLM verbosity by demonstrating the desired response format with one concrete example in the system prompt.
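
As a concrete illustration, here is a minimal sketch of what such a single-shot system prompt might look like, assuming an OpenAI-compatible chat client. The question-and-answer pair and the model name are placeholders, not the exact prompt or models used in the experiment.

```python
# Minimal sketch: a single-shot brevity prompt, assuming an
# OpenAI-compatible chat API. The Q&A example is illustrative,
# not the experiment's actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """\
Answer informational questions concisely, as in this example.

Q: What is the capital of Australia?
A: Canberra. It was chosen as a compromise between Sydney and
Melbourne and has served as the capital since 1913.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the experiment compared 14 models
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(response.choices[0].message.content)
```

The single worked example carries the formatting signal that abstract instructions like "be brief" often fail to convey.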

## Key Findings

- Response lengths varied by 5.5x across the 14 tested models (see the sketch after this list)
- Most concise: AI21 Jamba Large (295 words)
- Most verbose: OpenAI GPT-OSS-120B (1,632 words)
- Optimized examples achieved a 60-75% word reduction
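
For reference, a minimal sketch of the arithmetic behind these figures; the baseline and optimized word counts in the second half are hypothetical placeholders, chosen only to fall inside the reported 60-75% range.

```python
# Spread between the most and least verbose models, using the
# word counts reported above.
most_concise = 295    # AI21 Jamba Large
most_verbose = 1632   # OpenAI GPT-OSS-120B
print(f"Verbosity spread: {most_verbose / most_concise:.1f}x")  # -> 5.5x

# Word reduction from an optimized single-shot example.
# NOTE: these two counts are hypothetical, not data from the experiment.
baseline_words, optimized_words = 1000, 300
reduction = 1 - optimized_words / baseline_words
print(f"Word reduction: {reduction:.0%}")  # -> 70%, within the 60-75% range
```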

## Resources

## Created By

Daniel Rosehill - Part of ongoing research in LLM optimization and prompt engineering