Text Classification
Scikit-learn
Joblib
English
intent-classification
logistic-regression
conference-talk-demo
How to use thinktecture/intent-logreg-nextera with scikit-learn:

```python
from huggingface_hub import hf_hub_download
import joblib

model = joblib.load(
    hf_hub_download("thinktecture/intent-logreg-nextera", "sklearn_model.joblib")
)
# Only load pickle files from sources you trust:
# https://skops.readthedocs.io/en/stable/persistence.html
```
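The snippet above only loads the classifier. A minimal prediction sketch follows, assuming a 768-dim EmbeddingGemma vector as input. Since the real weights and embedding server are not available here, a dummy fitted model and a random vector stand in; the real pipeline would call `predict_proba` on the downloaded model with an embedding from the llama-server:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Dummy stand-in for the downloaded model: a 3-way LogReg fitted on
# random 768-dim vectors (the real model expects FT-EmbeddingGemma output).
rng = np.random.default_rng(0)
X_dummy = rng.normal(size=(30, 768))
y_dummy = rng.integers(0, 3, size=30)
model = LogisticRegression(max_iter=1000).fit(X_dummy, y_dummy)

# One query embedding (in the real pipeline this comes from the
# fine-tuned EmbeddingGemma server, not a random draw).
embedding = rng.normal(size=(1, 768))
probs = model.predict_proba(embedding)[0]  # per-class probabilities
intent = model.classes_[probs.argmax()]    # highest-probability intent
```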
⚠️ Conference talk demo – not production weights.
This model accompanies a conference keynote on local, on-device AI. It is published as a reference for the fine-tuning patterns shown on stage, not as a deployable artefact. No security audit, no SLA, and the weights are pinned to the talk's state.
- Source repository: thinktecture-labs/local-multi-model-agent-slm
- Threat model + out-of-scope: SECURITY.md
- Licensing details: MODEL_LICENSES.md
- All five models in the stack: Collection: Local Multi-Model Agent – nextera fine-tunes
LogReg Intent Classifier
| Field | Details |
| --- | --- |
| Base | scikit-learn LogisticRegression, multinomial, L2 penalty |
| License | Apache-2.0 (this repo), but inputs are EmbeddingGemma vectors, so the Gemma Terms cover the embedding step |
| Training script | training/train_intent_logreg.py |
| Method | LogReg on FT-EmbeddingGemma's 768-dim output vectors. Held-out 90/10 split. ~2 minutes on CPU. |
| Training data | Same as Gemma3-1B intent: data/training-data/gemma3_intent_{scenario}.jsonl (re-embedded with the FT EmbeddingGemma) |
| Hardware | CPU is sufficient. Requires the FT EmbeddingGemma llama-server running on port 9092/9096 to embed training examples. |
| Intended use | Replaces the 1B generative classifier as the primary intent router. ~10 ms per query (vs ~200 ms for the 1B). Same accuracy on the standard eval set. |
| Out of scope | Anything that requires generation (it's a 3-way classifier). Low-confidence predictions (< 0.60 threshold, configurable in intent_classifier_logreg.py) are overridden to direct_answer as a safe fallback intent. The 1B generative classifier is used only as a load-time fallback when the LogReg model file is absent, not as a per-query confidence fallback. |
| Reference eval (Nextera) | 96.1% on the 180-query eval set. ~10 ms per classification (single CPU thread). |
| Known failure modes | When the EmbeddingGemma FT changes, the LogReg weights become invalid; intent_classifier_logreg.py:13-15 warns about this coupling. Retrain both together. |
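The low-confidence override described above can be sketched as follows. The 0.60 threshold and the `direct_answer` fallback label come from the card; the helper name `route_intent` and the other two intent labels are illustrative assumptions, and a dummy fitted model stands in for the real weights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

THRESHOLD = 0.60  # the configurable cutoff cited for intent_classifier_logreg.py

def route_intent(model, embedding):
    """Return the top intent, overriding low-confidence predictions
    with 'direct_answer' as the safe fallback."""
    probs = model.predict_proba(np.asarray(embedding).reshape(1, -1))[0]
    top = int(np.argmax(probs))
    if probs[top] < THRESHOLD:
        return "direct_answer"  # safe fallback intent from the card
    return str(model.classes_[top])

# Dummy stand-in model; only 'direct_answer' is a label named by the card,
# the other two intent names are made up for the sketch.
labels = np.array(["direct_answer", "intent_a", "intent_b"])
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 768))
y = rng.choice(labels, size=60)
clf = LogisticRegression(max_iter=1000).fit(X, y)

intent = route_intent(clf, rng.normal(size=768))
```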