Text Classification
Scikit-learn
Joblib
English
intent-classification
logistic-regression
conference-talk-demo
Instructions to use thinktecture/intent-logreg-nextera with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Scikit-learn
How to use thinktecture/intent-logreg-nextera with Scikit-learn:
```python
from huggingface_hub import hf_hub_download
import joblib

model = joblib.load(
    hf_hub_download("thinktecture/intent-logreg-nextera", "sklearn_model.joblib")
)
# Only load pickle files from sources you trust. Read more here:
# https://skops.readthedocs.io/en/stable/persistence.html
```

- Notebooks
- Google Colab
- Kaggle
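Once loaded, the artifact is a plain scikit-learn classifier, so inference is the usual `predict`/`predict_proba` call. Below is a minimal sketch of that call shape using a stand-in `LogisticRegression` trained on random vectors instead of the downloaded file; the label names (other than `direct_answer`) and the 768-dimensional input are illustrative assumptions, not read from the published model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for the downloaded artifact: the real sklearn_model.joblib is
# loaded via joblib as shown above. Labels and input dimensionality here
# are assumptions for illustration only.
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 768))
y = np.repeat(["direct_answer", "other_intent_a", "other_intent_b"], 20)
model = LogisticRegression(max_iter=500).fit(X, y)

query_embedding = rng.normal(size=768)  # placeholder for a real query embedding
label = model.predict(query_embedding.reshape(1, -1))[0]
proba = float(model.predict_proba(query_embedding.reshape(1, -1))[0].max())
print(label, round(proba, 2))
```

In the real pipeline the input row would be the embedding of the user query, not random noise.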
Prepend Conference-talk-demo disclaimer + reference links
README.md
> **⚠️ Conference talk demo — not production weights.**
>
> This model accompanies a conference keynote on local on-device AI. Published as a reference for the fine-tuning patterns shown on stage — **not** a deployable artefact. No security audit, no SLA, pinned to the talk's state.

- Source repository: [thinktecture-labs/local-multi-model-agent-slm](https://github.com/thinktecture-labs/local-multi-model-agent-slm)
- Threat model + out-of-scope: [SECURITY.md](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/SECURITY.md)
- Licensing details: [MODEL_LICENSES.md](https://github.com/thinktecture-labs/local-multi-model-agent-slm/blob/main/finetune/MODEL_LICENSES.md)
- All five models in the stack: [Collection — Local Multi-Model Agent — nextera fine-tunes](https://huggingface.co/collections/thinktecture/local-multi-model-agent-nextera-fine-tunes-6a04a8ff2a40e5696f3c2f18)

---

## LogReg Intent Classifier

| | |
| --- | --- |
| **Out of scope** | Anything that requires generation (it's a 3-way classifier). Low-confidence predictions (< 0.60 threshold, configurable in `intent_classifier_logreg.py`) are overridden to `direct_answer` as a safe fallback intent. The 1B generative classifier is only used as a load-time fallback when the LogReg model file is absent, not as a per-query confidence fallback. |
| **Reference eval (Nextera)** | 96.1% on a 180-query eval set. ~10 ms per classification (single CPU thread). |
| **Known failure modes** | When the EmbeddingGemma FT changes, the LogReg weights become invalid — `intent_classifier_logreg.py:13-15` warns about this coupling. Re-train both together. |

Generated from `finetune/MODEL_CARDS.md` — see the source repo for the full pipeline + reproducibility instructions.
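The low-confidence override described in the table can be sketched as a thin wrapper around `predict_proba`. This is a minimal illustration, not the actual `intent_classifier_logreg.py`; only `direct_answer` is a label name confirmed by the card, and the toy model, other labels, and input size are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 3-way classifier standing in for the published LogReg model.
# Labels other than `direct_answer` and the 16-dim inputs are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 16))
y = np.repeat(["direct_answer", "intent_a", "intent_b"], 30)
model = LogisticRegression(max_iter=500).fit(X, y)

def classify(embedding: np.ndarray, threshold: float = 0.60) -> str:
    """Return the predicted intent, falling back to `direct_answer`
    when the top class probability is below the threshold."""
    proba = model.predict_proba(embedding.reshape(1, -1))[0]
    best = int(np.argmax(proba))
    if proba[best] < threshold:
        return "direct_answer"
    return str(model.classes_[best])

print(classify(rng.normal(size=16)))
```

Raising the threshold makes the router more conservative: above 1.0, every query would fall back to `direct_answer`.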