Text Classification · Scikit-learn · Joblib · English

Tags: intent-classification · logistic-regression · conference-talk-demo
How to use thinktecture/intent-logreg-nextera with Scikit-learn:

```python
from huggingface_hub import hf_hub_download
import joblib

# Only load pickle/joblib files from sources you trust.
# Read more: https://skops.readthedocs.io/en/stable/persistence.html
model = joblib.load(
    hf_hub_download("thinktecture/intent-logreg-nextera", "sklearn_model.joblib")
)
```
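The hub artifact is a plain joblib-pickled scikit-learn estimator. As a self-contained illustration of that dump/load roundtrip (a toy one-feature model and a temporary path, not the real Nextera classifier):

```python
import os
import tempfile

import joblib
from sklearn.linear_model import LogisticRegression

# Train a toy classifier standing in for the real intent model.
clf = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

# Persist and restore it the same way sklearn_model.joblib is consumed.
path = os.path.join(tempfile.mkdtemp(), "sklearn_model.joblib")
joblib.dump(clf, path)
restored = joblib.load(path)
print(restored.predict([[0.9]])[0])  # → 1
```

The same trust caveat applies: joblib files execute arbitrary code on load, so only restore artifacts from sources you trust.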
---
license: other
library_name: sklearn
tags:
- conference-demo
- local-ai
- fine-tuning
- gemma
- qwen
- thinktecture
---

# LogReg intent classifier (on top of EmbeddingGemma — Nextera demo)

> ⚠️ **Conference demo weights — not production-ready.**
>
> This model accompanies the *AI Goes Local — Why the Future of Intelligent
> Software Runs On-Device* keynote at **SDD 2026 (London, 2026-05-12)**.
> It is published as a reference for the fine-tuning patterns shown on stage,
> **not** as a deployable artefact. No security audit, no SLA, pinned to the
> talk's state.
>
> Source repository:
> [thinktecture-labs/local-multi-model-agent-slm](https://github.com/thinktecture-labs/local-multi-model-agent-slm)
> Threat model + out-of-scope items:
> [`SECURITY.md`](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/SECURITY.md)

---

## What this is

A logistic-regression intent classifier trained on embeddings from
[`thinktecture/embeddinggemma-300m-ft-nextera`](https://huggingface.co/thinktecture/embeddinggemma-300m-ft-nextera)
(a fine-tune of `google/embeddinggemma-300m`), built for the demo's reference scenario
("Nextera" — a fully synthetic SaaS analytics product invented for the keynote).

See [`finetune/MODEL_CARDS.md#LogReg`](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/finetune/MODEL_CARDS.md#logreg)
in the source repository for the full card — training data, hyperparameters,
eval scores, known failure modes.
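The two-stage pipeline (frozen embedder in front, logistic-regression head on top) can be sketched with synthetic vectors standing in for the EmbeddingGemma outputs; the `billing`/`support` labels are invented here for illustration, not the demo's actual intent set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the real pipeline: a text embedder produces vectors,
# and a logistic-regression head maps them to intent labels. Two random
# Gaussian clusters play the role of the fine-tuned embeddings here.
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal(-2.0, 0.5, (30, 16)),  # pretend "billing" embeddings
    rng.normal(2.0, 0.5, (30, 16)),   # pretend "support" embeddings
])
y = np.array(["billing"] * 30 + ["support"] * 30)

head = LogisticRegression(max_iter=1000).fit(X, y)

# A vector near the second cluster should classify as "support".
print(head.predict(np.full((1, 16), 2.0))[0])
```

In the real setup the only trained component on top of the embedder is this small head, which is what makes the classifier cheap to retrain when intents change.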

## License

This artefact is a derivative of
[`thinktecture/embeddinggemma-300m-ft-nextera`](https://huggingface.co/thinktecture/embeddinggemma-300m-ft-nextera)
(itself a fine-tune of `google/embeddinggemma-300m`) and inherits its licensing:
**Apache-2.0 for this artifact, plus the Gemma Terms for the embedding step**. See
[`finetune/MODEL_LICENSES.md`](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/finetune/MODEL_LICENSES.md)
for the full per-model license summary.

## Collection

This model is part of the
[Local Multi-Model Agent — nextera fine-tunes](https://huggingface.co/collections/thinktecture/local-multi-model-agent-nextera-fine-tunes-6a04a8ff2a40e5696f3c2f18)
collection — five models in the production stack: intent (Gemma 1B), retrieval
(EmbeddingGemma), tool calling (Qwen 4B), RAG synthesis (Gemma 4B), and the
LogReg intent classifier.