Spaces:
Running on T4
Running on T4
File size: 8,940 Bytes
935bdc8 8bc23ce 935bdc8 6cc574d 935bdc8 4f4603a 6cc574d 4f4603a 6cc574d 4f4603a 935bdc8 6cc574d 8bc23ce 6cc574d 8bc23ce | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 | """
UI text strings for the Baguettotron vs Luth Gradio app.
Centralized for reuse and easier i18n.
"""
# ---------------------------------------------------------------------------
# English (default) — also used as key names
# ---------------------------------------------------------------------------
# App identity
TITLE = "Is a government-backed darling any better than a highschoolers' project?"
SUBTITLE = "Baguettotron vs Luth models : one is a fully subsidized project with priority access to public compute infrastructure , the other is a highschooler's project"
# Footprint section (single consolidated comparison table)
HEADING_FOOTPRINT = "## Model comparison"
FOOTPRINT_HEADERS = [
"Model",
"Params",
"Raw Disk/VRAM (MB)",
"Observable on a Phone",
"GGUF Q4_K_M (MB)",
"Source",
]
FOOTPRINT_SUMMARY_TEMPLATE = "**Combined footprint —** Total VRAM (est.): {total_vram:.2f} GB"
FOOTPRINT_INTRO = "Transformers (BF16) disk/VRAM; GGUF sizes from PleIAs/Baguettotron-GGUF (HF) and LEAP bundles."
# Generation settings
HEADING_GENERATION = "## Generation settings (by model family)"
COL_BAGUETTOTRON_HEADING = "**Baguettotron (321M)** — *reasoning*"
COL_LUTH_HEADING = "**Luth models (0.4B–1.7B)** — *instruct*"
LABEL_TEMPERATURE = "Temperature"
LABEL_MAX_TOKENS = "Max tokens"
LABEL_TOP_P = "Top p"
LABEL_TOP_K = "Top k"
LABEL_REPEAT_PENALTY = "Repeat penalty"
INFO_TEMP_BAGUETTOTRON = "Lower for more deterministic reasoning"
INFO_REP_LUTH = "Luth/LFM2 often use ~1.05"
# Live inference
HEADING_LIVE_INFERENCE = "## Live inference"
LABEL_SYSTEM_PROMPT = "System prompt (optional)"
PLACEHOLDER_SYSTEM_PROMPT = "e.g. You are a helpful assistant that answers in French."
LABEL_PROMPT = "Prompt"
PLACEHOLDER_PROMPT = "Enter your prompt here..."
BTN_GENERATE = "Generate"
# Output textbox labels (per model)
LABEL_OUT_BAGUETTOTRON = "Baguettotron (321M)"
LABEL_OUT_LUTH_350 = "Luth-LFM2-350M (0.4B)"
LABEL_OUT_LUTH_06 = "Luth-0.6B-Instruct"
LABEL_OUT_LUTH_07 = "Luth-LFM2-700M"
LABEL_OUT_LUTH_12 = "Luth-LFM2-1.2B"
LABEL_OUT_LUTH_17 = "Luth-1.7B-Instruct"
# Tab tier labels
TIER_SMALL = "~0.3–0.4B (Small)"
TIER_MEDIUM = "~0.6–0.7B (Medium)"
TIER_LARGE = "~1–2B (Large)"
# How to use & context
HEADING_HOW_TO_USE = "## How to use this demo"
HOW_TO_USE = """
**Context:** This app runs the same prompt through **Baguettotron** (PleIAs, 321M) and **five Luth models** (kurakurai, 0.4B–1.7B) so you can compare outputs side by side. The table above summarizes model size, VRAM, and GGUF/LEAP options.
**Steps:**
1. [Baguettotron-GGUF-321M](https://hf.co/PleIAs/Baguettotron-GGUF-321M) claims performance outside of observable reality.
2. the [SYNTH](https://hf.co/datasets/pleias/synth) dataset needs to be audited for contamination before use.
3. the compression for pleias models underperforms compared to identical compression on similar models
4. the ultimate judge of quality is the user's ability to use the model to achieve their goals
Results are grouped by parameter size so you can compare Baguettotron with Luth-LFM2-350M in the Small tab, and the larger Luth models in the Medium and Large tabs.
"""
# Join us (TeamTonic community)
JOIN_US = """
## Join us:
🌟TeamTonic🌟 is always making cool demos! Join our active builder's 🛠️community 👻
[](https://discord.gg/qdfnvSPcqP)
On 🤗Huggingface: [MultiTransformer](https://huggingface.co/MultiTransformer)
On 🌐Github: [Tonic-AI](https://github.com/tonic-ai) & contribute to🌟 [Build Tonic](https://git.tonic-ai.com/contribute)
🤗Big thanks to Yuvi Sharma and all the folks at huggingface for the community grant 🤗
"""
# ---------------------------------------------------------------------------
# Translations: en + fr (for language toggle)
# ---------------------------------------------------------------------------
TRANSLATIONS = {
"en": {
"TITLE": TITLE,
"SUBTITLE": SUBTITLE,
"HEADING_FOOTPRINT": HEADING_FOOTPRINT,
"FOOTPRINT_HEADERS": FOOTPRINT_HEADERS,
"FOOTPRINT_SUMMARY_TEMPLATE": FOOTPRINT_SUMMARY_TEMPLATE,
"FOOTPRINT_INTRO": FOOTPRINT_INTRO,
"HEADING_GENERATION": HEADING_GENERATION,
"COL_BAGUETTOTRON_HEADING": COL_BAGUETTOTRON_HEADING,
"COL_LUTH_HEADING": COL_LUTH_HEADING,
"LABEL_TEMPERATURE": LABEL_TEMPERATURE,
"LABEL_MAX_TOKENS": LABEL_MAX_TOKENS,
"LABEL_TOP_P": LABEL_TOP_P,
"LABEL_TOP_K": LABEL_TOP_K,
"LABEL_REPEAT_PENALTY": LABEL_REPEAT_PENALTY,
"INFO_TEMP_BAGUETTOTRON": INFO_TEMP_BAGUETTOTRON,
"INFO_REP_LUTH": INFO_REP_LUTH,
"HEADING_LIVE_INFERENCE": HEADING_LIVE_INFERENCE,
"LABEL_SYSTEM_PROMPT": LABEL_SYSTEM_PROMPT,
"PLACEHOLDER_SYSTEM_PROMPT": PLACEHOLDER_SYSTEM_PROMPT,
"LABEL_PROMPT": LABEL_PROMPT,
"PLACEHOLDER_PROMPT": PLACEHOLDER_PROMPT,
"BTN_GENERATE": BTN_GENERATE,
"LABEL_OUT_BAGUETTOTRON": LABEL_OUT_BAGUETTOTRON,
"LABEL_OUT_LUTH_350": LABEL_OUT_LUTH_350,
"LABEL_OUT_LUTH_06": LABEL_OUT_LUTH_06,
"LABEL_OUT_LUTH_07": LABEL_OUT_LUTH_07,
"LABEL_OUT_LUTH_12": LABEL_OUT_LUTH_12,
"LABEL_OUT_LUTH_17": LABEL_OUT_LUTH_17,
"TIER_SMALL": TIER_SMALL,
"TIER_MEDIUM": TIER_MEDIUM,
"TIER_LARGE": TIER_LARGE,
"HEADING_HOW_TO_USE": HEADING_HOW_TO_USE,
"HOW_TO_USE": HOW_TO_USE,
"JOIN_US": JOIN_US,
},
"fr": {
"TITLE": "Un chouchou subventionné vaut-il mieux qu'un projet de lycéen ?",
"SUBTITLE": "Baguettotron vs Luth : l'un est un projet subventionné avec accès prioritaire au calcul public, l'autre est un projet de lycéen.",
"HEADING_FOOTPRINT": "## Comparaison des modèles",
"FOOTPRINT_HEADERS": [
"Modèle",
"Params",
"Disque/VRAM brut (Mo)",
"Visible sur téléphone",
"GGUF Q4_K_M (Mo)",
"Source",
],
"FOOTPRINT_SUMMARY_TEMPLATE": "**Empreinte totale —** VRAM (est.) : {total_vram:.2f} Go",
"FOOTPRINT_INTRO": "Transformers (BF16) disque/VRAM ; tailles GGUF depuis PleIAs/Baguettotron-GGUF (HF) et bundles LEAP.",
"HEADING_GENERATION": "## Réglages de génération (par famille de modèle)",
"COL_BAGUETTOTRON_HEADING": "**Baguettotron (321M)** — *raisonnement*",
"COL_LUTH_HEADING": "**Luth (0,4B–1,7B)** — *instruct*",
"LABEL_TEMPERATURE": "Température",
"LABEL_MAX_TOKENS": "Tokens max",
"LABEL_TOP_P": "Top p",
"LABEL_TOP_K": "Top k",
"LABEL_REPEAT_PENALTY": "Pénalité de répétition",
"INFO_TEMP_BAGUETTOTRON": "Plus bas pour un raisonnement plus déterministe",
"INFO_REP_LUTH": "Luth/LFM2 utilisent souvent ~1,05",
"HEADING_LIVE_INFERENCE": "## Inférence en direct",
"LABEL_SYSTEM_PROMPT": "Prompt système (optionnel)",
"PLACEHOLDER_SYSTEM_PROMPT": "ex. Tu es un assistant qui répond en français.",
"LABEL_PROMPT": "Prompt",
"PLACEHOLDER_PROMPT": "Saisissez votre prompt ici…",
"BTN_GENERATE": "Générer",
"LABEL_OUT_BAGUETTOTRON": "Baguettotron (321M)",
"LABEL_OUT_LUTH_350": "Luth-LFM2-350M (0,4B)",
"LABEL_OUT_LUTH_06": "Luth-0.6B-Instruct",
"LABEL_OUT_LUTH_07": "Luth-LFM2-700M",
"LABEL_OUT_LUTH_12": "Luth-LFM2-1.2B",
"LABEL_OUT_LUTH_17": "Luth-1.7B-Instruct",
"TIER_SMALL": "~0,3–0,4B (Petit)",
"TIER_MEDIUM": "~0,6–0,7B (Moyen)",
"TIER_LARGE": "~1–2B (Grand)",
"HEADING_HOW_TO_USE": "## Comment utiliser cette démo",
"HOW_TO_USE": """
**Contexte :** Cette app envoie le même prompt à **Baguettotron** (PleIAs, 321M) et aux **cinq modèles Luth** (kurakurai, 0,4B–1,7B) pour comparer les sorties côte à côte. Le tableau ci-dessus résume la taille, la VRAM et les options GGUF/LEAP.
**Étapes :**
1. [Baguettotron-GGUF-321M](https://hf.co/PleIAs/Baguettotron-GGUF-321M) prétend des perfs hors du réel observable.
2. le jeu [SYNTH](https://hf.co/datasets/pleias/synth) doit être audité pour contamination avant usage.
3. la compression des modèles PleIAs sous-performe par rapport à une compression identique sur des modèles similaires.
4. le critère final reste la capacité de l'utilisateur à atteindre ses objectifs avec le modèle.
Les résultats sont regroupés par taille pour comparer Baguettotron avec Luth-LFM2-350M dans l'onglet Petit, et les Luth plus gros dans Moyen et Grand.
""",
"JOIN_US": JOIN_US, # same as en (links + community)
},
}
def get_strings(locale: str) -> dict:
"""Return the full dict of UI strings for the given locale. Fallback to 'en'."""
return TRANSLATIONS.get(locale, TRANSLATIONS["en"]).copy()
|