Spaces:
Running on T4
Joseph Pollack
adds more interface improvements, language toggles, cpu fallbacks
8bc23ce unverified
"""
UI text strings for the Baguettotron vs Luth Gradio app.
Centralized for reuse and easier i18n.
"""

# ---------------------------------------------------------------------------
# English (default) — also used as key names
# ---------------------------------------------------------------------------

# App identity
TITLE = "Is a government-backed darling any better than a highschooler's project?"
SUBTITLE = "Baguettotron vs Luth models: one is a fully subsidized project with priority access to public compute infrastructure, the other is a highschooler's project"
# Footprint section (single consolidated comparison table)
HEADING_FOOTPRINT = "## Model comparison"
FOOTPRINT_HEADERS = [
    "Model",
    "Params",
    "Raw Disk/VRAM (MB)",
    "Observable on a Phone",
    "GGUF Q4_K_M (MB)",
    "Source",
]
FOOTPRINT_SUMMARY_TEMPLATE = "**Combined footprint —** Total VRAM (est.): {total_vram:.2f} GB"
FOOTPRINT_INTRO = "Transformers (BF16) disk/VRAM; GGUF sizes from PleIAs/Baguettotron-GGUF (HF) and LEAP bundles."
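The summary template is a plain `str.format` pattern. A minimal sketch of how the app might render it (the template is repeated here so the snippet is self-contained; the 3.25 GB figure is an illustrative placeholder, not a real measurement):

```python
# Mirrors the FOOTPRINT_SUMMARY_TEMPLATE constant above.
FOOTPRINT_SUMMARY_TEMPLATE = "**Combined footprint —** Total VRAM (est.): {total_vram:.2f} GB"

# {total_vram:.2f} renders any number with exactly two decimal places.
summary = FOOTPRINT_SUMMARY_TEMPLATE.format(total_vram=3.25)
print(summary)  # → **Combined footprint —** Total VRAM (est.): 3.25 GB
```

Because the placeholder is named, callers must pass `total_vram=` as a keyword, which keeps the template safe to reorder or translate.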
# Generation settings
HEADING_GENERATION = "## Generation settings (by model family)"
COL_BAGUETTOTRON_HEADING = "**Baguettotron (321M)** — *reasoning*"
COL_LUTH_HEADING = "**Luth models (0.4B–1.7B)** — *instruct*"
LABEL_TEMPERATURE = "Temperature"
LABEL_MAX_TOKENS = "Max tokens"
LABEL_TOP_P = "Top p"
LABEL_TOP_K = "Top k"
LABEL_REPEAT_PENALTY = "Repeat penalty"
INFO_TEMP_BAGUETTOTRON = "Lower for more deterministic reasoning"
INFO_REP_LUTH = "Luth/LFM2 often use ~1.05"

# Live inference
HEADING_LIVE_INFERENCE = "## Live inference"
LABEL_SYSTEM_PROMPT = "System prompt (optional)"
PLACEHOLDER_SYSTEM_PROMPT = "e.g. You are a helpful assistant that answers in French."
LABEL_PROMPT = "Prompt"
PLACEHOLDER_PROMPT = "Enter your prompt here..."
BTN_GENERATE = "Generate"

# Output textbox labels (per model)
LABEL_OUT_BAGUETTOTRON = "Baguettotron (321M)"
LABEL_OUT_LUTH_350 = "Luth-LFM2-350M (0.4B)"
LABEL_OUT_LUTH_06 = "Luth-0.6B-Instruct"
LABEL_OUT_LUTH_07 = "Luth-LFM2-700M"
LABEL_OUT_LUTH_12 = "Luth-LFM2-1.2B"
LABEL_OUT_LUTH_17 = "Luth-1.7B-Instruct"

# Tab tier labels
TIER_SMALL = "~0.3–0.4B (Small)"
TIER_MEDIUM = "~0.6–0.7B (Medium)"
TIER_LARGE = "~1–2B (Large)"

# How to use & context
HEADING_HOW_TO_USE = "## How to use this demo"
HOW_TO_USE = """
**Context:** This app runs the same prompt through **Baguettotron** (PleIAs, 321M) and **five Luth models** (kurakurai, 0.4B–1.7B) so you can compare outputs side by side. The table above summarizes model size, VRAM, and GGUF/LEAP options.

**Steps:**
1. [Baguettotron-GGUF-321M](https://hf.co/PleIAs/Baguettotron-GGUF-321M) claims performance outside of observable reality.
2. The [SYNTH](https://hf.co/datasets/pleias/synth) dataset needs to be audited for contamination before use.
3. Compression of the PleIAs models underperforms identical compression applied to similar models.
4. The ultimate judge of quality is the user's ability to achieve their goals with the model.

Results are grouped by parameter size so you can compare Baguettotron with Luth-LFM2-350M in the Small tab, and the larger Luth models in the Medium and Large tabs.
"""
# Join us (TeamTonic community)
JOIN_US = """
## Join us:

🌟TeamTonic🌟 is always making cool demos! Join our active builder's 🛠️community 👻
[](https://discord.gg/qdfnvSPcqP)
On 🤗Huggingface: [MultiTransformer](https://huggingface.co/MultiTransformer)
On 🌐Github: [Tonic-AI](https://github.com/tonic-ai) & contribute to 🌟 [Build Tonic](https://git.tonic-ai.com/contribute)
🤗Big thanks to Yuvi Sharma and all the folks at huggingface for the community grant 🤗
"""
# ---------------------------------------------------------------------------
# Translations: en + fr (for language toggle)
# ---------------------------------------------------------------------------
TRANSLATIONS = {
    "en": {
        "TITLE": TITLE,
        "SUBTITLE": SUBTITLE,
        "HEADING_FOOTPRINT": HEADING_FOOTPRINT,
        "FOOTPRINT_HEADERS": FOOTPRINT_HEADERS,
        "FOOTPRINT_SUMMARY_TEMPLATE": FOOTPRINT_SUMMARY_TEMPLATE,
        "FOOTPRINT_INTRO": FOOTPRINT_INTRO,
        "HEADING_GENERATION": HEADING_GENERATION,
        "COL_BAGUETTOTRON_HEADING": COL_BAGUETTOTRON_HEADING,
        "COL_LUTH_HEADING": COL_LUTH_HEADING,
        "LABEL_TEMPERATURE": LABEL_TEMPERATURE,
        "LABEL_MAX_TOKENS": LABEL_MAX_TOKENS,
        "LABEL_TOP_P": LABEL_TOP_P,
        "LABEL_TOP_K": LABEL_TOP_K,
        "LABEL_REPEAT_PENALTY": LABEL_REPEAT_PENALTY,
        "INFO_TEMP_BAGUETTOTRON": INFO_TEMP_BAGUETTOTRON,
        "INFO_REP_LUTH": INFO_REP_LUTH,
        "HEADING_LIVE_INFERENCE": HEADING_LIVE_INFERENCE,
        "LABEL_SYSTEM_PROMPT": LABEL_SYSTEM_PROMPT,
        "PLACEHOLDER_SYSTEM_PROMPT": PLACEHOLDER_SYSTEM_PROMPT,
        "LABEL_PROMPT": LABEL_PROMPT,
        "PLACEHOLDER_PROMPT": PLACEHOLDER_PROMPT,
        "BTN_GENERATE": BTN_GENERATE,
        "LABEL_OUT_BAGUETTOTRON": LABEL_OUT_BAGUETTOTRON,
        "LABEL_OUT_LUTH_350": LABEL_OUT_LUTH_350,
        "LABEL_OUT_LUTH_06": LABEL_OUT_LUTH_06,
        "LABEL_OUT_LUTH_07": LABEL_OUT_LUTH_07,
        "LABEL_OUT_LUTH_12": LABEL_OUT_LUTH_12,
        "LABEL_OUT_LUTH_17": LABEL_OUT_LUTH_17,
        "TIER_SMALL": TIER_SMALL,
        "TIER_MEDIUM": TIER_MEDIUM,
        "TIER_LARGE": TIER_LARGE,
        "HEADING_HOW_TO_USE": HEADING_HOW_TO_USE,
        "HOW_TO_USE": HOW_TO_USE,
        "JOIN_US": JOIN_US,
    },
    "fr": {
        "TITLE": "Un chouchou subventionné vaut-il mieux qu'un projet de lycéen ?",
        "SUBTITLE": "Baguettotron vs Luth : l'un est un projet subventionné avec accès prioritaire au calcul public, l'autre est un projet de lycéen.",
        "HEADING_FOOTPRINT": "## Comparaison des modèles",
        "FOOTPRINT_HEADERS": [
            "Modèle",
            "Params",
            "Disque/VRAM brut (Mo)",
            "Visible sur téléphone",
            "GGUF Q4_K_M (Mo)",
            "Source",
        ],
        "FOOTPRINT_SUMMARY_TEMPLATE": "**Empreinte totale —** VRAM (est.) : {total_vram:.2f} Go",
        "FOOTPRINT_INTRO": "Transformers (BF16) disque/VRAM ; tailles GGUF depuis PleIAs/Baguettotron-GGUF (HF) et bundles LEAP.",
        "HEADING_GENERATION": "## Réglages de génération (par famille de modèle)",
        "COL_BAGUETTOTRON_HEADING": "**Baguettotron (321M)** — *raisonnement*",
        "COL_LUTH_HEADING": "**Luth (0,4B–1,7B)** — *instruct*",
        "LABEL_TEMPERATURE": "Température",
        "LABEL_MAX_TOKENS": "Tokens max",
        "LABEL_TOP_P": "Top p",
        "LABEL_TOP_K": "Top k",
        "LABEL_REPEAT_PENALTY": "Pénalité de répétition",
        "INFO_TEMP_BAGUETTOTRON": "Plus bas pour un raisonnement plus déterministe",
        "INFO_REP_LUTH": "Luth/LFM2 utilisent souvent ~1,05",
        "HEADING_LIVE_INFERENCE": "## Inférence en direct",
        "LABEL_SYSTEM_PROMPT": "Prompt système (optionnel)",
        "PLACEHOLDER_SYSTEM_PROMPT": "ex. Tu es un assistant qui répond en français.",
        "LABEL_PROMPT": "Prompt",
        "PLACEHOLDER_PROMPT": "Saisissez votre prompt ici…",
        "BTN_GENERATE": "Générer",
        "LABEL_OUT_BAGUETTOTRON": "Baguettotron (321M)",
        "LABEL_OUT_LUTH_350": "Luth-LFM2-350M (0,4B)",
        "LABEL_OUT_LUTH_06": "Luth-0.6B-Instruct",
        "LABEL_OUT_LUTH_07": "Luth-LFM2-700M",
        "LABEL_OUT_LUTH_12": "Luth-LFM2-1.2B",
        "LABEL_OUT_LUTH_17": "Luth-1.7B-Instruct",
        "TIER_SMALL": "~0,3–0,4B (Petit)",
        "TIER_MEDIUM": "~0,6–0,7B (Moyen)",
        "TIER_LARGE": "~1–2B (Grand)",
        "HEADING_HOW_TO_USE": "## Comment utiliser cette démo",
        "HOW_TO_USE": """
**Contexte :** Cette app envoie le même prompt à **Baguettotron** (PleIAs, 321M) et aux **cinq modèles Luth** (kurakurai, 0,4B–1,7B) pour comparer les sorties côte à côte. Le tableau ci-dessus résume la taille, la VRAM et les options GGUF/LEAP.

**Étapes :**
1. [Baguettotron-GGUF-321M](https://hf.co/PleIAs/Baguettotron-GGUF-321M) prétend des performances hors du réel observable.
2. Le jeu de données [SYNTH](https://hf.co/datasets/pleias/synth) doit être audité pour contamination avant usage.
3. La compression des modèles PleIAs sous-performe par rapport à une compression identique sur des modèles similaires.
4. Le critère final reste la capacité de l'utilisateur à atteindre ses objectifs avec le modèle.

Les résultats sont regroupés par taille pour comparer Baguettotron avec Luth-LFM2-350M dans l'onglet Petit, et les Luth plus gros dans Moyen et Grand.
""",
        "JOIN_US": JOIN_US,  # same as en (links + community)
    },
}
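A table keyed twice like this drifts easily: a string added to `en` but forgotten in `fr` silently falls through at runtime. A small parity check can catch that early. This is a sketch over a toy dict so it runs standalone; `check_locale_parity` is a hypothetical helper, not part of the module above:

```python
# Toy stand-in for the TRANSLATIONS dict above, so the snippet is self-contained.
TRANSLATIONS = {
    "en": {"TITLE": "Title", "BTN_GENERATE": "Generate"},
    "fr": {"TITLE": "Titre", "BTN_GENERATE": "Générer"},
}

def check_locale_parity(translations: dict) -> None:
    """Raise KeyError if any locale's keys differ from the 'en' reference."""
    reference = set(translations["en"])
    for locale, strings in translations.items():
        missing = reference - set(strings)
        extra = set(strings) - reference
        if missing or extra:
            raise KeyError(f"{locale}: missing={sorted(missing)} extra={sorted(extra)}")

check_locale_parity(TRANSLATIONS)  # passes: both locales expose the same keys
```

Running this once at import time (or in a unit test) turns a missing translation into a loud failure instead of a blank label.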

def get_strings(locale: str) -> dict:
    """Return the full dict of UI strings for the given locale. Fallback to 'en'."""
    return TRANSLATIONS.get(locale, TRANSLATIONS["en"]).copy()
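A quick usage sketch of the `get_strings` lookup, including the unknown-locale fallback and the shallow-copy caveat (a toy two-key dict stands in for the full table, so the snippet runs standalone):

```python
# Minimal re-creation of the module's lookup.
TRANSLATIONS = {
    "en": {"BTN_GENERATE": "Generate"},
    "fr": {"BTN_GENERATE": "Générer"},
}

def get_strings(locale: str) -> dict:
    """Return the UI strings for `locale`, falling back to 'en'."""
    return TRANSLATIONS.get(locale, TRANSLATIONS["en"]).copy()

print(get_strings("fr")["BTN_GENERATE"])  # Générer
print(get_strings("de")["BTN_GENERATE"])  # Generate (unknown locale falls back to en)

# Note: .copy() is shallow. Reassigning a top-level key on the returned dict
# leaves TRANSLATIONS untouched, but nested containers (e.g. the
# FOOTPRINT_HEADERS list in the real table) would still be shared.
s = get_strings("en")
s["BTN_GENERATE"] = "Go"
assert TRANSLATIONS["en"]["BTN_GENERATE"] == "Generate"
```

The `.copy()` lets callers tweak labels per render without mutating the module-level table, at the cost of sharing any nested lists.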