---
license: apache-2.0
tags:
- intent-classification
- text-classification
- logistic-regression
- sklearn
- conference-talk-demo
language:
- en
library_name: sklearn
---

**⚠️ Conference talk demo — not production weights.**

This model accompanies a conference keynote on local on-device AI. It is published as a reference for the fine-tuning patterns shown on stage, **not** as a deployable artefact: no security audit, no SLA, and the weights are pinned to the state shown in the talk.

- Source repository: [thinktecture-labs/local-multi-model-agent-slm](https://github.com/thinktecture-labs/local-multi-model-agent-slm)
- Threat model + out-of-scope: [SECURITY.md](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/SECURITY.md)
- Licensing details: [MODEL_LICENSES.md](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/finetune/MODEL_LICENSES.md)
- All five models in the stack: [Collection — Local Multi-Model Agent — nextera fine-tunes](https://huggingface.co/collections/thinktecture/local-multi-model-agent-nextera-fine-tunes-6a04a8ff2a40e5696f3c2f18)

---

## LogReg Intent Classifier

| | |
|---|---|
| **Base** | scikit-learn `LogisticRegression`, multinomial, L2 penalty |
| **License** | Apache-2.0 (this repo) — but inputs are EmbeddingGemma vectors so the [Gemma Terms](MODEL_LICENSES.md) cover the embedding step |
| **Training script** | [`training/train_intent_logreg.py`](../training/train_intent_logreg.py) |
| **Method** | LogReg on FT-EmbeddingGemma's 768-dim output vectors. Held-out 90/10 split. ~2 minutes on CPU. See the training sketch below. |
| **Training data** | Same as Gemma3-1B intent: `data/training-data/gemma3_intent_{scenario}.jsonl` (re-embedded with the FT EmbeddingGemma) |
| **Hardware** | CPU is sufficient. Requires the FT EmbeddingGemma llama-server running on port 9092/9096 to embed training examples. |
| **Intended use** | Replaces the 1B generative classifier as the primary intent router. ~10ms per query (vs ~200ms for the 1B). Same accuracy on the standard eval set. |
| **Out of scope** | Anything that requires generation (it's a 3-way classifier). Low-confidence predictions (< 0.60 threshold, configurable in `intent_classifier_logreg.py`) are overridden to `direct_answer` as a safe fallback intent (see the inference sketch below). The 1B generative classifier is only used as a load-time fallback when the LogReg model file is absent, not as a per-query confidence fallback. |
| **Reference eval (Nextera)** | 96.1% on 180-query eval set. ~10ms per classification (single CPU thread). |
| **Known failure modes** | When the EmbeddingGemma FT changes, the LogReg weights become invalid — `intent_classifier_logreg.py:13-15` warns about this coupling. Re-train both together. |
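
For orientation, here is a minimal sketch of the training loop the table describes. It assumes the FT EmbeddingGemma llama-server exposes the OpenAI-compatible `/v1/embeddings` endpoint on port 9092 and that the JSONL rows carry `text` and `intent` fields; the actual field names, ports, and hyperparameters are in [`training/train_intent_logreg.py`](../training/train_intent_logreg.py), and the scenario name below is a placeholder.

```python
import json

import numpy as np
import requests
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumption: llama-server's OpenAI-compatible embeddings endpoint.
EMBED_URL = "http://localhost:9092/v1/embeddings"


def embed(text: str) -> np.ndarray:
    """Embed one example with the FT EmbeddingGemma (one request per text)."""
    resp = requests.post(EMBED_URL, json={"input": text})
    resp.raise_for_status()
    return np.array(resp.json()["data"][0]["embedding"])  # 768-dim vector


texts, labels = [], []
# Placeholder scenario name; see the Training data row for the real pattern.
with open("data/training-data/gemma3_intent_demo.jsonl") as f:
    for line in f:
        row = json.loads(line)
        texts.append(row["text"])
        labels.append(row["intent"])

X = np.stack([embed(t) for t in texts])

# Held-out 90/10 split, stratified by intent.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.1, random_state=42, stratify=labels
)

# L2 penalty is the sklearn default; multi-class fits are multinomial
# with the default lbfgs solver.
clf = LogisticRegression(penalty="l2", max_iter=1000)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```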
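
And a minimal sketch of the confidence-threshold fallback from the out-of-scope row. The `intent_logreg.joblib` file name is hypothetical; `intent_classifier_logreg.py` holds the real load path and threshold handling.

```python
import joblib
import numpy as np

# Configurable in intent_classifier_logreg.py.
CONFIDENCE_THRESHOLD = 0.60

# Hypothetical file name for the pickled sklearn model.
clf = joblib.load("intent_logreg.joblib")


def classify(embedding: np.ndarray) -> str:
    """Return the intent label, falling back to the safe default
    when the top class probability is below the threshold."""
    probs = clf.predict_proba(embedding.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] < CONFIDENCE_THRESHOLD:
        return "direct_answer"  # safe fallback intent
    return clf.classes_[best]
```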