Commit 50271b5 (parent: 82102ea)
Create comprehensive Hugging Face Space for Single-Shot Brevity Training experiment
Enhanced the static Space to showcase the LLM brevity training experiment with:
- Complete HTML page detailing the problem, methodology, and findings
- Professional gradient design with responsive layout
- Key statistics and model performance comparisons
- Embedded visualization charts (bar chart and 4-panel analysis)
- Links to GitHub repository and resources
- Updated README with experiment overview and key findings
- Configured Git LFS for PNG image files
Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- .gitattributes +1 -0
- README.md +24 -1
- index.html +116 -8
- style.css +242 -14
- verbosity_analysis.png +3 -0
- verbosity_bar_chart.png +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
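The `*.png` rule above is what tells Git to store the chart images in LFS rather than in the repository itself (it is typically written by `git lfs track "*.png"`). As a rough illustration of how those glob patterns select files — a sketch only; Python's `fnmatch` approximates gitattributes matching for simple patterns like these:

```python
from fnmatch import fnmatch

# Patterns from the .gitattributes hunk above; "*.png" is the new addition.
LFS_PATTERNS = ["*.zip", "*.zst", "*tfevents*", "*.png"]

def tracked_by_lfs(path: str) -> bool:
    """Return True if the file's base name matches any LFS-tracked pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("verbosity_bar_chart.png"))  # True
print(tracked_by_lfs("index.html"))               # False
```

So the two PNG charts added in this commit are captured by the new rule, while the HTML and CSS files continue to be stored normally.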
README.md
CHANGED
@@ -8,4 +8,27 @@ pinned: false
 short_description: Using one example to train an LLM for informational brevity
 ---
 
-
+# Single-Shot Brevity Training
+
+An experiment exploring how to train Large Language Models to provide concise, informative responses using a single example rather than abstract instructions.
+
+## Overview
+
+This Hugging Face Space showcases an approach to addressing LLM verbosity by demonstrating the desired response format with one concrete example in the system prompt.
+
+## Key Findings
+
+- Response lengths varied by **5.5x** across 14 tested models
+- Most concise: AI21 Jamba Large (295 words)
+- Most verbose: OpenAI GPT-OSS-120B (1,632 words)
+- Optimized examples achieved **60-75% word reduction**
+
+## Resources
+
+- [Full GitHub Repository](https://github.com/danielrosehill/Single-Shot-Brevity-Training) - Complete data, analysis, and system prompts
+- [Raw Response Data](https://github.com/danielrosehill/Single-Shot-Brevity-Training/tree/main/responses) - Baseline outputs from all models
+- [Optimized Examples](https://github.com/danielrosehill/Single-Shot-Brevity-Training/tree/main/optimized) - Demonstrating ideal brevity
+
+## Created By
+
+[Daniel Rosehill](https://danielrosehill.com) - Part of ongoing research in LLM optimization and prompt engineering
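The **5.5x** figure in the README is simply the ratio of the longest baseline response to the shortest. A quick check using the two extremes this commit reports (the other 12 models' word counts are not listed here):

```python
# Word counts for the extremes reported in the README above.
shortest = 295    # AI21 Jamba Large, most concise baseline
longest = 1632    # OpenAI GPT-OSS-120B, most verbose baseline

ratio = longest / shortest
print(f"Response lengths varied by {ratio:.1f}x")  # 5.5x
```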
index.html
CHANGED
@@ -3,17 +3,125 @@
 <head>
   <meta charset="utf-8" />
   <meta name="viewport" content="width=device-width" />
-  <title>
+  <title>Single-Shot Brevity Training | LLM Response Optimization</title>
   <link rel="stylesheet" href="style.css" />
 </head>
 <body>
-  <div class="
-  <
-
-
-
-
-
+  <div class="container">
+    <header>
+      <h1>Single-Shot Brevity Training</h1>
+      <p class="subtitle">Using One Example to Train LLMs for Informational Brevity</p>
+      <div class="links">
+        <a href="https://github.com/danielrosehill/Single-Shot-Brevity-Training" target="_blank" class="btn">View on GitHub</a>
+      </div>
+    </header>
+
+    <section class="card">
+      <h2>The Problem</h2>
+      <p>Large Language Models often generate excessively verbose responses, even when concise, informative answers would be more valuable. This experiment explores a simple yet effective approach to guide models toward brevity without sacrificing information quality.</p>
+    </section>
+
+    <section class="card">
+      <h2>The Approach</h2>
+      <p>Rather than abstract instructions like "be concise," this framework uses <strong>single-shot training</strong>: demonstrating the desired format with one concrete example in the system prompt.</p>
+
+      <h3>Two-Phase Methodology</h3>
+      <div class="phase">
+        <h4>Phase 1: Baseline Evaluation</h4>
+        <p>Tested 14 models using a standardized product recommendation prompt (power bank selection) without any brevity instructions to establish natural response lengths.</p>
+      </div>
+
+      <div class="phase">
+        <h4>Phase 2: Single-Shot Training</h4>
+        <p>Selected models received system prompts containing one optimized response example to guide future outputs toward similar brevity.</p>
+      </div>
+    </section>
+
+    <section class="card highlight">
+      <h2>Key Findings</h2>
+
+      <div class="stat-grid">
+        <div class="stat">
+          <div class="stat-number">5.5x</div>
+          <div class="stat-label">Difference between longest and shortest responses</div>
+        </div>
+        <div class="stat">
+          <div class="stat-number">794</div>
+          <div class="stat-label">Mean response length (words)</div>
+        </div>
+        <div class="stat">
+          <div class="stat-number">60-75%</div>
+          <div class="stat-label">Word reduction in optimized examples</div>
+        </div>
+      </div>
+
+      <h3>Model Response Length Comparison</h3>
+      <div class="chart-container">
+        <img src="verbosity_bar_chart.png" alt="Bar chart comparing word counts across 14 LLM models" class="chart-image">
+        <p class="chart-caption">Comparison of response lengths across 14 evaluated models</p>
+      </div>
+
+      <h3>Comprehensive Verbosity Analysis</h3>
+      <div class="chart-container">
+        <img src="verbosity_analysis.png" alt="Four-panel analysis of response verbosity characteristics" class="chart-image">
+        <p class="chart-caption">Multi-faceted examination of response characteristics and patterns</p>
+      </div>
+
+      <h3>Response Length Variation</h3>
+      <ul>
+        <li><strong>Longest:</strong> 1,632 words (OpenAI GPT-OSS-120B)</li>
+        <li><strong>Shortest:</strong> 295 words (AI21 Jamba Large)</li>
+        <li><strong>Standard deviation:</strong> 456 words</li>
+      </ul>
+
+      <h3>Most Concise Performers</h3>
+      <ol class="model-list">
+        <li><strong>AI21 Jamba Large</strong> - 295 words</li>
+        <li><strong>Mistral Large</strong> - 352 words</li>
+        <li><strong>Meta Llama 4 Maverick</strong> - 397 words</li>
+      </ol>
+
+      <h3>Most Verbose Performers</h3>
+      <ol class="model-list">
+        <li><strong>OpenAI GPT-OSS-120B</strong> - 1,632 words</li>
+        <li><strong>Google Gemini 2.5 Flash</strong> - 1,607 words</li>
+      </ol>
+    </section>
+
+    <section class="card">
+      <h2>Repository Contents</h2>
+      <ul>
+        <li><strong>Raw Response Data:</strong> Complete baseline outputs from all tested models</li>
+        <li><strong>Optimized Examples:</strong> Demonstrating ideal brevity (60-75% word reduction)</li>
+        <li><strong>Model-Specific System Prompts:</strong> Implementing single-shot training for practical application</li>
+        <li><strong>Statistical Analysis:</strong> Comprehensive comparison of response lengths and patterns</li>
+      </ul>
+    </section>
+
+    <section class="card">
+      <h2>Practical Applications</h2>
+      <p>This approach offers several benefits for LLM deployment:</p>
+      <ul>
+        <li><strong>Cost Reduction:</strong> Shorter responses mean fewer output tokens and lower API costs</li>
+        <li><strong>User Experience:</strong> Concise responses are faster to read and process</li>
+        <li><strong>Efficiency:</strong> One example is simpler than complex prompt engineering</li>
+        <li><strong>Reusability:</strong> The framework can be adapted to different use cases and domains</li>
+      </ul>
+    </section>
+
+    <section class="card">
+      <h2>Get Involved</h2>
+      <p>This is an open experiment exploring effective LLM training techniques. The repository includes all data, prompts, and analysis for transparency and reproducibility.</p>
+      <div class="links">
+        <a href="https://github.com/danielrosehill/Single-Shot-Brevity-Training" target="_blank" class="btn btn-primary">Explore the Repository</a>
+        <a href="https://github.com/danielrosehill/Single-Shot-Brevity-Training/issues" target="_blank" class="btn">Share Feedback</a>
+      </div>
+    </section>
+
+    <footer>
+      <p>Created by <a href="https://danielrosehill.com" target="_blank">Daniel Rosehill</a></p>
+      <p>Part of ongoing research in LLM optimization and prompt engineering</p>
+    </footer>
   </div>
 </body>
 </html>
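The "one concrete example in the system prompt" technique the page describes reduces to a very small amount of prompt-construction code. A minimal sketch — the example question/answer pair and message layout here are illustrative placeholders, not the repository's actual model-specific prompts:

```python
# Hypothetical example pair; the repository's optimized prompts differ.
EXAMPLE_QUESTION = "Which power bank should I buy for weekend trips?"
EXAMPLE_ANSWER = (
    "Get a 20,000 mAh bank with USB-C Power Delivery. "
    "That covers 3-4 phone charges and fits airline carry-on limits."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat payload whose system prompt demonstrates brevity
    with one concrete example instead of abstract instructions."""
    system = (
        "Answer in the style and length of this example.\n"
        f"Q: {EXAMPLE_QUESTION}\n"
        f"A: {EXAMPLE_ANSWER}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Recommend a laptop for travel.")
print(msgs[0]["role"], "+", msgs[1]["role"])  # system + user
```

The same two-message structure works with any chat-completion API; only the example pair needs to change per use case.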
style.css
CHANGED
@@ -1,28 +1,256 @@
+* {
+  box-sizing: border-box;
+  margin: 0;
+  padding: 0;
+}
+
 body {
-
-
+  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
+  line-height: 1.6;
+  color: #333;
+  background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+  min-height: 100vh;
+  padding: 2rem 1rem;
+}
+
+.container {
+  max-width: 900px;
+  margin: 0 auto;
+}
+
+header {
+  text-align: center;
+  margin-bottom: 3rem;
+  background: white;
+  padding: 2.5rem 2rem;
+  border-radius: 16px;
+  box-shadow: 0 10px 30px rgba(0, 0, 0, 0.15);
 }
 
 h1 {
-  font-size:
-  margin-
+  font-size: 2.5rem;
+  margin-bottom: 0.5rem;
+  color: #2d3748;
+  font-weight: 700;
+}
+
+.subtitle {
+  font-size: 1.25rem;
+  color: #718096;
+  margin-bottom: 1.5rem;
+}
+
+h2 {
+  font-size: 1.8rem;
+  margin-bottom: 1rem;
+  color: #2d3748;
+  font-weight: 600;
+}
+
+h3 {
+  font-size: 1.3rem;
+  margin-top: 1.5rem;
+  margin-bottom: 0.75rem;
+  color: #4a5568;
+  font-weight: 600;
+}
+
+h4 {
+  font-size: 1.1rem;
+  margin-bottom: 0.5rem;
+  color: #4a5568;
+  font-weight: 600;
 }
 
 p {
-  color:
-  font-size:
-  margin-bottom:
-
+  color: #4a5568;
+  font-size: 1rem;
+  margin-bottom: 1rem;
+  line-height: 1.7;
 }
 
 .card {
-
-
-
-
-
+  background: white;
+  padding: 2rem;
+  border-radius: 12px;
+  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
+  margin-bottom: 2rem;
+}
+
+.card.highlight {
+  background: linear-gradient(135deg, #f6f8fb 0%, #ffffff 100%);
+  border: 2px solid #667eea;
+}
+
+.stat-grid {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+  gap: 1.5rem;
+  margin: 2rem 0;
+}
+
+.stat {
+  text-align: center;
+  padding: 1.5rem;
+  background: white;
+  border-radius: 8px;
+  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.05);
+}
+
+.stat-number {
+  font-size: 2.5rem;
+  font-weight: 700;
+  color: #667eea;
+  margin-bottom: 0.5rem;
+}
+
+.stat-label {
+  font-size: 0.9rem;
+  color: #718096;
+  line-height: 1.4;
+}
+
+.phase {
+  background: #f7fafc;
+  padding: 1.5rem;
+  border-radius: 8px;
+  margin-bottom: 1rem;
+  border-left: 4px solid #667eea;
+}
+
+ul, ol {
+  margin-left: 1.5rem;
+  margin-bottom: 1rem;
+}
+
+li {
+  margin-bottom: 0.5rem;
+  color: #4a5568;
+  line-height: 1.6;
 }
 
-.
+.model-list {
+  background: #f7fafc;
+  padding: 1.5rem;
+  border-radius: 8px;
+  margin-bottom: 1rem;
+}
+
+.model-list li {
+  margin-bottom: 0.75rem;
+}
+
+.chart-container {
+  margin: 2rem 0;
+  text-align: center;
+}
+
+.chart-image {
+  max-width: 100%;
+  height: auto;
+  border-radius: 8px;
+  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
+  margin-bottom: 0.75rem;
+}
+
+.chart-caption {
+  font-size: 0.9rem;
+  color: #718096;
+  font-style: italic;
   margin-bottom: 0;
 }
+
+.links {
+  display: flex;
+  gap: 1rem;
+  justify-content: center;
+  flex-wrap: wrap;
+  margin-top: 1.5rem;
+}
+
+.btn {
+  display: inline-block;
+  padding: 0.75rem 1.5rem;
+  background: #667eea;
+  color: white;
+  text-decoration: none;
+  border-radius: 8px;
+  font-weight: 600;
+  transition: all 0.3s ease;
+  box-shadow: 0 4px 6px rgba(102, 126, 234, 0.3);
+}
+
+.btn:hover {
+  background: #5a67d8;
+  transform: translateY(-2px);
+  box-shadow: 0 6px 12px rgba(102, 126, 234, 0.4);
+}
+
+.btn-primary {
+  background: #764ba2;
+  box-shadow: 0 4px 6px rgba(118, 75, 162, 0.3);
+}
+
+.btn-primary:hover {
+  background: #6b3f91;
+  box-shadow: 0 6px 12px rgba(118, 75, 162, 0.4);
+}
+
+footer {
+  text-align: center;
+  padding: 2rem;
+  background: white;
+  border-radius: 12px;
+  margin-top: 2rem;
+  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
+}
+
+footer p {
+  margin-bottom: 0.5rem;
+  color: #718096;
+}
+
+footer a {
+  color: #667eea;
+  text-decoration: none;
+  font-weight: 600;
+}
+
+footer a:hover {
+  text-decoration: underline;
+}
+
+@media (max-width: 768px) {
+  body {
+    padding: 1rem 0.5rem;
+  }
+
+  h1 {
+    font-size: 2rem;
+  }
+
+  .subtitle {
+    font-size: 1.1rem;
+  }
+
+  .card {
+    padding: 1.5rem;
+  }
+
+  header {
+    padding: 2rem 1.5rem;
+  }
+
+  .stat-grid {
+    grid-template-columns: 1fr;
+  }
+
+  .links {
+    flex-direction: column;
+  }
+
+  .btn {
+    width: 100%;
+    text-align: center;
+  }
+}
verbosity_analysis.png
ADDED (Git LFS)

verbosity_bar_chart.png
ADDED (Git LFS)