---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:127
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What is the difference between traditional programming and ML?
sentences:
- >-
Over the past few years, the field of ML has advanced rapidly,
especially in the area of Natural Language Processing (NLP)—the ability
of machines to understand and generate human language. At the forefront
of this progress are Large Language Models (LLMs), such as OpenAI’s GPT
(Generative Pre-trained Transformer), Google’s PaLM, and Meta’s LLaMA
- >-
. For example, integrating an LLM into a customer support chatbot might
involve connecting it to a company’s internal knowledge base, enabling
it to answer customer questions using accurate, up-to-date information.
- >-
A major subset of AI is Machine Learning (ML), which involves algorithms
that learn from data rather than being explicitly programmed. Instead of
writing detailed instructions for every task, ML models find patterns in
large datasets and use these patterns to make predictions or decisions
- source_sentence: >-
What is one of the tasks mentioned that involves creating new written
content?
sentences:
- >-
In summary, AI and ML form the foundation for intelligent automation,
while LLMs represent a breakthrough in language understanding and
generation. Integrating these models into real-world systems unlocks
practical value, turning raw intelligence into tangible solutions
- >-
8. Security and Compliance Integrations
Some organizations are integrating LLMs to detect anomalies in text
communications (e.g., phishing detection or policy violations). LLMs can
analyze language usage and flag potentially suspicious behavior more
flexibly than keyword-based filters.
Challenges in LLM Integration
Despite their promise, integrating LLMs comes with challenges:
- >-
. These include text generation, summarization, translation, question
answering, code generation, and more.
- source_sentence: What is one of the components mentioned alongside AI?
sentences:
- >-
2. Search Engines and Semantic Search
Traditional keyword-based search systems are being enhanced or replaced
by semantic search, where LLMs understand the meaning behind queries.
Instead of just matching words, they interpret intent.
- >-
For example, e-commerce websites can deploy LLM-powered assistants to
help customers find products, track orders, or get personalized
recommendations—much more effectively than traditional rule-based bots.
- Introduction to AI, Machine Learning, LLMs, and Their Integration
- source_sentence: >-
What is required to provide intelligent features within broader
applications?
sentences:
- >-
. For instance, a spam filter doesn’t just block emails with specific
keywords—it learns from thousands of examples what spam typically looks
like.
- >-
The Rise of LLM Integrations
While LLMs are powerful on their own, their true potential is unlocked
through integration—connecting these models with other software,
services, or systems to provide intelligent features within broader
applications.
Here are some key ways LLMs are being integrated into the digital world:
- >-
For instance, in a document management system, a user might type
"policies about sick leave", and the system—integrated with an LLM—could
retrieve documents discussing "medical leave", "employee absence", and
"illness policies", even if those exact words weren’t used.
- source_sentence: What type of dialogues can LLMs simulate?
sentences:
- >-
Companies are also experimenting with Retrieval-Augmented Generation
(RAG)—a technique where LLMs are paired with document databases (e.g.,
vector stores like Supabase, Pinecone, or Weaviate) to answer questions
with enterprise-specific knowledge.
- >-
. For example, integrating an LLM into a customer support chatbot might
involve connecting it to a company’s internal knowledge base, enabling
it to answer customer questions using accurate, up-to-date information.
- >-
5. Education and Learning Platforms
Educational tools like Khanmigo (from Khan Academy) and other tutoring
platforms are leveraging LLMs to provide real-time help to students.
LLMs can break down complex topics, provide feedback on writing, and
simulate Socratic-style dialogues.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8310827786456928
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7766666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7766666666666667
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8666666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17333333333333337
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8666666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8203966331432972
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7651851851851852
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7651851851851852
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8666666666666667
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28888888888888886
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8666666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8357043414408
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7822222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7822222222222223
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5333333333333333
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7333333333333333
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9333333333333333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5333333333333333
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2444444444444445
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16000000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09333333333333335
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5333333333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7333333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9333333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7203966331432973
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6540740740740741
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6592022792022793
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.4666666666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6666666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8666666666666667
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4666666666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22222222222222224
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16000000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08666666666666668
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4666666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6666666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8666666666666667
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6507228370099043
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5822222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.58890559732665
name: Cosine Map@100
---

# Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base)
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
- **License:** apache-2.0
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
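For readers who want to see what these three modules compute, below is a minimal sketch using the raw `transformers` API: the transformer produces per-token embeddings, which are mean-pooled over non-padding tokens and then L2-normalized. It ignores any query/document prompts the model may expect, and the input string is purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/modernbert-embed-base")
model = AutoModel.from_pretrained("nomic-ai/modernbert-embed-base")

inputs = tokenizer(["an example sentence"], padding=True, truncation=True,
                   max_length=1024, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# (1) Pooling: mean over non-padding tokens, using the attention mask
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit-length vectors, so a dot product equals cosine similarity
sentence_embedding = torch.nn.functional.normalize(sentence_embedding, p=2, dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```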
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")
# Run inference
sentences = [
    'What type of dialogues can LLMs simulate?',
    '5. Education and Learning Platforms\nEducational tools like Khanmigo (from Khan Academy) and other tutoring platforms are leveraging LLMs to provide real-time help to students. LLMs can break down complex topics, provide feedback on writing, and simulate Socratic-style dialogues.',
    '. For example, integrating an LLM into a customer support chatbot might involve connecting it to a company’s internal knowledge base, enabling it to answer customer questions using accurate, up-to-date information.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
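Because the model was trained with `MatryoshkaLoss` at dimensions 768, 512, 256, 128, and 64, its embeddings can be truncated to a smaller size with modest quality loss (see the Evaluation tables below). A minimal sketch using the `truncate_dim` argument available in recent Sentence Transformers releases; the example sentences are illustrative:

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model, truncating embeddings to 256 dimensions.
# 256 is one of the Matryoshka dimensions this model was trained with.
model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb", truncate_dim=256)

sentences = [
    "What type of dialogues can LLMs simulate?",
    "LLMs can simulate Socratic-style dialogues.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (2, 256)

# Cosine similarity still works on the truncated vectors
print(model.similarity(embeddings, embeddings))
```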
## Evaluation

### Metrics

#### Information Retrieval

- Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
- Evaluated with [`InformationRetrievalEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|---|---|---|---|---|---|
| cosine_accuracy@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_accuracy@3 | 0.8 | 0.8 | 0.8667 | 0.7333 | 0.6667 |
| cosine_accuracy@5 | 1.0 | 0.8667 | 1.0 | 0.8 | 0.8 |
| cosine_accuracy@10 | 1.0 | 1.0 | 1.0 | 0.9333 | 0.8667 |
| cosine_precision@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_precision@3 | 0.2667 | 0.2667 | 0.2889 | 0.2444 | 0.2222 |
| cosine_precision@5 | 0.2 | 0.1733 | 0.2 | 0.16 | 0.16 |
| cosine_precision@10 | 0.1 | 0.1 | 0.1 | 0.0933 | 0.0867 |
| cosine_recall@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_recall@3 | 0.8 | 0.8 | 0.8667 | 0.7333 | 0.6667 |
| cosine_recall@5 | 1.0 | 0.8667 | 1.0 | 0.8 | 0.8 |
| cosine_recall@10 | 1.0 | 1.0 | 1.0 | 0.9333 | 0.8667 |
| cosine_ndcg@10 | 0.8311 | 0.8204 | 0.8357 | 0.7204 | 0.6507 |
| cosine_mrr@10 | 0.7767 | 0.7652 | 0.7822 | 0.6541 | 0.5822 |
| cosine_map@100 | 0.7767 | 0.7652 | 0.7822 | 0.6592 | 0.5889 |
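To reproduce this style of evaluation on your own data, you can construct an `InformationRetrievalEvaluator` directly. A minimal sketch with placeholder queries, corpus, and relevance judgments; the IDs and texts below are hypothetical:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")

# Placeholder data: query IDs -> query text, doc IDs -> doc text,
# and each query ID -> the set of relevant doc IDs.
queries = {"q1": "What type of dialogues can LLMs simulate?"}
corpus = {
    "d1": "LLMs can simulate Socratic-style dialogues.",
    "d2": "Semantic search interprets the intent behind queries.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate at a truncated Matryoshka dimension
)
results = evaluator(model)
print(results)  # metric names like 'dim_256_cosine_ndcg@10'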
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 127 training samples
- Columns: `anchor` and `positive`
- Approximate statistics based on the first 127 samples:

| | anchor | positive |
|:---|:---|:---|
| type | string | string |
| details | min: 8 tokens, mean: 13.28 tokens, max: 25 tokens | min: 13 tokens, mean: 53.34 tokens, max: 86 tokens |

- Samples:

| anchor | positive |
|:---|:---|
| What task mentioned is related to providing answers to inquiries? | . These include text generation, summarization, translation, question answering, code generation, and more. |
| What do LLMs learn to work effectively? | LLMs work by learning statistical relationships between words and phrases, allowing them to predict and generate language that feels natural. The power of these models lies not only in their size but also in the diversity of tasks they can perform with little to no task-specific training |
| In which industries is the generalization ability considered useful? | . This generalization ability makes them incredibly useful across industries—from customer service and education to software development and healthcare. |

- Loss: `MatryoshkaLoss` with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
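A minimal sketch of how this loss configuration is typically constructed in Sentence Transformers; the model load is illustrative, and the parameters mirror the JSON above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Wrap the ranking loss so it is applied at each truncated dimension,
# with equal weight per dimension.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```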
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
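These non-default values map onto `SentenceTransformerTrainingArguments` roughly as sketched below. `output_dir` and `save_strategy` are assumptions not listed above; a `save_strategy` matching `eval_strategy` is required when `load_best_model_at_end` is enabled.

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-quickb",  # placeholder path
    num_train_epochs=4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; must match eval_strategy
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```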
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|---|---|
| 1.0 | 4 | - | 0.7790 | 0.7120 | 0.7474 | 0.6321 | 0.5684 |
| 2.0 | 8 | - | 0.8275 | 0.7966 | 0.8091 | 0.6904 | 0.6102 |
| 2.5 | 10 | 13.4453 | - | - | - | - | - |
| 3.0 | 12 | - | 0.8311 | 0.8204 | 0.8357 | 0.7178 | 0.6557 |
| **4.0** | **16** | **-** | **0.8311** | **0.8204** | **0.8357** | **0.7204** | **0.6507** |
- The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cpu
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```