โš ๏ธ Conference talk demo โ€” not production weights.

This model accompanies a conference keynote on local, on-device AI. It is published as a reference for the fine-tuning patterns shown on stage, not as a deployable artefact: there is no security audit, no SLA, and the weights are pinned to the state shown in the talk.


# LogReg Intent Classifier

| | |
|---|---|
| **Base** | scikit-learn `LogisticRegression`, multinomial, L2 penalty |
| **License** | Apache-2.0 (this repo); inputs are EmbeddingGemma vectors, so the Gemma Terms cover the embedding step |
| **Training script** | `training/train_intent_logreg.py` |
| **Method** | LogReg on FT-EmbeddingGemma's 768-dim output vectors; 90/10 train/held-out split; ~2 minutes on CPU. See the training sketch below. |
| **Training data** | Same as the Gemma3-1B intent classifier: `data/training-data/gemma3_intent_{scenario}.jsonl`, re-embedded with the FT EmbeddingGemma |
| **Hardware** | CPU is sufficient. Requires the FT EmbeddingGemma `llama-server` running on port 9092/9096 to embed training examples; see the embedding sketch below. |
| **Intended use** | Replaces the 1B generative classifier as the primary intent router: ~10 ms per query vs. ~200 ms for the 1B, with the same accuracy on the standard eval set |
| **Out of scope** | Anything that requires generation (this is a 3-way classifier). Low-confidence predictions (< 0.60 threshold, configurable in `intent_classifier_logreg.py`) are overridden to `direct_answer` as a safe fallback intent; see the routing sketch below. The 1B generative classifier is used only as a load-time fallback when the LogReg model file is absent, not as a per-query confidence fallback. |
| **Reference eval (Nextera)** | 96.1% on the 180-query eval set; ~10 ms per classification (single CPU thread) |
| **Known failure modes** | When the EmbeddingGemma FT changes, the LogReg weights become invalid; `intent_classifier_logreg.py:13-15` warns about this coupling. Re-train both together. |
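
For context on the embedding dependency named under **Hardware**, here is a minimal sketch of an `embed()` helper. It assumes the FT EmbeddingGemma `llama-server` was started with embeddings enabled and exposes its OpenAI-compatible `/v1/embeddings` endpoint on port 9092; the helper name and error handling are illustrative, not the repo's code.

```python
# Sketch of an embed() helper: fetch a 768-dim vector from the FT
# EmbeddingGemma llama-server. Assumes the server runs with embeddings
# enabled and serves the OpenAI-compatible /v1/embeddings endpoint;
# 9092 is one of the two ports named in the table above.
import requests

EMBED_URL = "http://localhost:9092/v1/embeddings"

def embed(text: str) -> list[float]:
    resp = requests.post(EMBED_URL, json={"input": text}, timeout=30)
    resp.raise_for_status()
    vec = resp.json()["data"][0]["embedding"]
    assert len(vec) == 768, "dimension mismatch: wrong embedding model?"
    return vec
```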
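The **Method** row compresses the whole recipe into one line; the sketch below expands it under stated assumptions. The JSONL field names (`text`/`label`), the output filename, and the `{scenario}` placeholder handling are illustrative; the actual recipe lives in `training/train_intent_logreg.py`.

```python
# Training sketch: multinomial, L2-penalised LogisticRegression on 768-dim
# embedding vectors with a 90/10 held-out split, as described under "Method".
import json
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

scenario = "..."  # placeholder: the card leaves {scenario} unspecified
path = f"data/training-data/gemma3_intent_{scenario}.jsonl"

def load_examples(jsonl_path):
    """Yield (text, label) pairs; the field names are assumptions."""
    with open(jsonl_path) as f:
        for line in f:
            row = json.loads(line)
            yield row["text"], row["label"]

texts, labels = zip(*load_examples(path))
X = np.array([embed(t) for t in texts])  # embed() from the sketch above
y = np.array(labels)

# 90/10 train / held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.10, random_state=42, stratify=y
)

# L2 is scikit-learn's default penalty; multiclass problems are handled
# multinomially by default in current scikit-learn.
clf = LogisticRegression(penalty="l2", max_iter=1000)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
joblib.dump(clf, "intent_logreg.joblib")  # assumed output filename
```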
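The low-confidence override described under **Out of scope** can be pictured as follows. The 0.60 threshold and the `direct_answer` fallback come from the card; the function shape and filename are assumptions, not the actual `intent_classifier_logreg.py` interface.

```python
# Routing sketch: below the (configurable) 0.60 threshold, the router
# falls back to the safe direct_answer intent.
import joblib
import numpy as np

CONF_THRESHOLD = 0.60  # matches the default the card says is configurable
FALLBACK_INTENT = "direct_answer"

clf = joblib.load("intent_logreg.joblib")  # as saved in the training sketch

def route(text: str) -> tuple[str, float]:
    """Return (intent, confidence); low confidence falls back to direct_answer."""
    probs = clf.predict_proba([embed(text)])[0]  # embed() from the embedding sketch
    best = int(np.argmax(probs))
    intent, conf = str(clf.classes_[best]), float(probs[best])
    return (FALLBACK_INTENT, conf) if conf < CONF_THRESHOLD else (intent, conf)
```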
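One way the coupling under **Known failure modes** could be made to fail loudly rather than silently is to store a fingerprint of the embedding model next to the LogReg weights and compare at load time. This is not what `intent_classifier_logreg.py:13-15` does (the card says it warns about the coupling); the path and bundle keys below are hypothetical.

```python
# Illustrative guard for the EmbeddingGemma/LogReg coupling: hash the
# embedding model file the head was trained against and refuse stale weights.
import hashlib
import joblib

GGUF = "models/embeddinggemma-ft.gguf"  # hypothetical path to the FT embedder

def model_fingerprint(path: str) -> str:
    """SHA-256 of the embedding model file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Training side: persist the pair together.
# joblib.dump({"clf": clf, "embedder_sha": model_fingerprint(GGUF)},
#             "intent_logreg.joblib")

# Load side: fail instead of silently misrouting.
bundle = joblib.load("intent_logreg.joblib")
if bundle["embedder_sha"] != model_fingerprint(GGUF):
    raise RuntimeError("EmbeddingGemma changed; re-train the LogReg head with it")
```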