SetFit with BAAI/bge-large-en-v1.5

This is a SetFit model that can be used for text classification. It uses BAAI/bge-large-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
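
As a concrete illustration of the second step, the classification head is an ordinary scikit-learn LogisticRegression fit on embeddings from the (already fine-tuned) Sentence Transformer body. The snippet below is a minimal sketch with made-up training texts and labels, not the data used for this model:

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Toy few-shot data (placeholders, not this model's training set)
train_texts = ["use the linear mcp to find my high priority issues", "clean this up"]
train_labels = [0, 2]

# Step 1 would contrastively fine-tune this body; here the base checkpoint is used as-is
body = SentenceTransformer("BAAI/bge-large-en-v1.5")

# Step 2: fit a LogisticRegression head on the sentence embeddings
embeddings = body.encode(train_texts)
head = LogisticRegression().fit(embeddings, train_labels)

print(head.predict(body.encode(["lets fix the issues"])))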

Model Details

Model Description

Model Sources

Model Labels

Label 1
  • 'what can your do with your mcp tool db2 schema'
  • 'is this normal?\nGC(47) Pause Young 234M->89M 12.5ms'
  • 'what units is file_size in? what format is mimetype in'
Label 0
  • 'use mongo and postgres mcps to find test data'
  • 'browser screenshot the error page'
  • 'use the linear mcp to find my high priority issues'
Label 2
  • 'java\npublic void process() {\n if (user != null) {\n // stuff\n }\n}\n\nclean this up'
  • "Duplicate class: 'ChunkCacheMetricsTest'"
  • "python\ndef process(data):\n result = []\n for item in data:\n if item['active']:\n result.append(item['value'])\n return result\n\nuse list comprehension"
Label 3
  • 'review security scan then patch critical issues'
  • 'analyze failing tests, group them, fix each group'
  • '\n47 tests failing\n\ncategorize and fix each category'
Label 5
  • 'model architecture not supported, plan alternative approach'
  • 'architecture diagram for wml'
  • 'before we move on let me ask - can we develop a set of documents that break the implementation down into individual tasks - the first one being a simple core set of function that could deliver a a single starting point. and then add layers of function in subsequent tasks to methodically build out th'
Label 4
  • 'yes the linear mcp is finally connected'
  • 'Usually we use a custom linter config here, not the standard one'
  • 'ok it worked after i restarted the postgres mcp server'

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("tmp/best_model")
# Run inference
preds = model("lets fix the issues")

Training Details

Training Set Metrics

Training set    Min    Median     Max
Word count      1      12.1876    125

Label    Training Sample Count
0        43
1        80
2        92
3        56
4        64
5        86

Training Hyperparameters

  • batch_size: (8, 8)
  • num_epochs: (2, 2)
  • max_steps: -1
  • sampling_strategy: undersampling
  • num_iterations: 10
  • body_learning_rate: (1e-05, 1e-05)
  • head_learning_rate: 0.0011800069021089953
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.006100049987809886
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: True
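
These values mirror the fields of SetFit's TrainingArguments, so a comparable run can be approximated as sketched below. The two-example train_dataset is a placeholder (the actual few-shot data is not included in this card), and the validation split used for load_best_model_at_end is omitted for brevity:

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; this model's real training set is not part of the card
train_dataset = Dataset.from_dict({
    "text": ["browser screenshot the error page", "lets fix the issues"],
    "label": [0, 3],
})

model = SetFitModel.from_pretrained("BAAI/bge-large-en-v1.5")

args = TrainingArguments(
    batch_size=(8, 8),                 # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    sampling_strategy="undersampling",
    num_iterations=10,
    body_learning_rate=(1e-05, 1e-05),
    head_learning_rate=0.0011800069021089953,
    warmup_proportion=0.006100049987809886,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()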

Training Results

Epoch Step Training Loss Validation Loss
0.0009 1 0.2444 -
0.0199 21 - 0.2323
0.0399 42 - 0.2335
0.0475 50 0.2389 -
0.0598 63 - 0.2261
0.0798 84 - 0.2224
0.0950 100 0.2256 -
0.0997 105 - 0.2112
0.1197 126 - 0.2038
0.1396 147 - 0.1854
0.1425 150 0.1988 -
0.1595 168 - 0.1775
0.1795 189 - 0.1690
0.1899 200 0.1625 -
0.1994 210 - 0.1679
0.2194 231 - 0.1472
0.2374 250 0.1172 -
0.2393 252 - 0.1511
0.2593 273 - 0.1463
0.2792 294 - 0.1449
0.2849 300 0.092 -
0.2991 315 - 0.1410
0.3191 336 - 0.1215
0.3324 350 0.0696 -
0.3390 357 - 0.1232
0.3590 378 - 0.1269
0.3789 399 - 0.1346
0.3799 400 0.0266 -
0.3989 420 - 0.1315
0.4188 441 - 0.1296

Framework Versions

  • Python: 3.12.12
  • SetFit: 1.1.3
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.11.0+cu130
  • Datasets: 4.8.4
  • Tokenizers: 0.21.4

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}