---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:127
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What is the difference between traditional programming and ML?
sentences:
- Over the past few years, the field of ML has advanced rapidly, especially in the
area of Natural Language Processing (NLP)—the ability of machines to understand
and generate human language. At the forefront of this progress are Large Language
Models (LLMs), such as OpenAI’s GPT (Generative Pre-trained Transformer), Google’s
PaLM, and Meta’s LLaMA
- . For example, integrating an LLM into a customer support chatbot might involve
connecting it to a company’s internal knowledge base, enabling it to answer customer
questions using accurate, up-to-date information.
- A major subset of AI is Machine Learning (ML), which involves algorithms that
learn from data rather than being explicitly programmed. Instead of writing detailed
instructions for every task, ML models find patterns in large datasets and use
these patterns to make predictions or decisions
- source_sentence: What is one of the tasks mentioned that involves creating new written
content?
sentences:
- In summary, AI and ML form the foundation for intelligent automation, while LLMs
represent a breakthrough in language understanding and generation. Integrating
these models into real-world systems unlocks practical value, turning raw intelligence
into tangible solutions
- '8. Security and Compliance Integrations
Some organizations are integrating LLMs to detect anomalies in text communications
(e.g., phishing detection or policy violations). LLMs can analyze language usage
and flag potentially suspicious behavior more flexibly than keyword-based filters.
Challenges in LLM Integration
Despite their promise, integrating LLMs comes with challenges:'
- . These include text generation, summarization, translation, question answering,
code generation, and more.
- source_sentence: What is one of the components mentioned alongside AI?
sentences:
- '2. Search Engines and Semantic Search
Traditional keyword-based search systems are being enhanced or replaced by semantic
search, where LLMs understand the meaning behind queries. Instead of just matching
words, they interpret intent.'
- For example, e-commerce websites can deploy LLM-powered assistants to help customers
find products, track orders, or get personalized recommendations—much more effectively
than traditional rule-based bots.
- Introduction to AI, Machine Learning, LLMs, and Their Integration
- source_sentence: What is required to provide intelligent features within broader
applications?
sentences:
- . For instance, a spam filter doesn’t just block emails with specific keywords—it
learns from thousands of examples what spam typically looks like.
- 'The Rise of LLM Integrations
While LLMs are powerful on their own, their true potential is unlocked through
integration—connecting these models with other software, services, or systems
to provide intelligent features within broader applications.
Here are some key ways LLMs are being integrated into the digital world:'
- For instance, in a document management system, a user might type "policies about
sick leave", and the system—integrated with an LLM—could retrieve documents discussing
"medical leave", "employee absence", and "illness policies", even if those exact
words weren’t used.
- source_sentence: What type of dialogues can LLMs simulate?
sentences:
- Companies are also experimenting with Retrieval-Augmented Generation (RAG)—a technique
where LLMs are paired with document databases (e.g., vector stores like Supabase,
Pinecone, or Weaviate) to answer questions with enterprise-specific knowledge.
- . For example, integrating an LLM into a customer support chatbot might involve
connecting it to a company’s internal knowledge base, enabling it to answer customer
questions using accurate, up-to-date information.
- '5. Education and Learning Platforms
Educational tools like Khanmigo (from Khan Academy) and other tutoring platforms
are leveraging LLMs to provide real-time help to students. LLMs can break down
complex topics, provide feedback on writing, and simulate Socratic-style dialogues.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8310827786456928
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7766666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7766666666666667
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8666666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17333333333333337
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8666666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8203966331432972
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7651851851851852
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7651851851851852
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8666666666666667
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28888888888888886
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8666666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8357043414408
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7822222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7822222222222223
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5333333333333333
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7333333333333333
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9333333333333333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5333333333333333
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2444444444444445
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16000000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09333333333333335
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5333333333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7333333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9333333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7203966331432973
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6540740740740741
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6592022792022793
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.4666666666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6666666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8666666666666667
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4666666666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22222222222222224
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16000000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08666666666666668
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4666666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6666666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8666666666666667
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6507228370099043
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5822222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.58890559732665
name: Cosine Map@100
---
# Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base)
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")
# Run inference
sentences = [
'What type of dialogues can LLMs simulate?',
'5. Education and Learning Platforms\nEducational tools like Khanmigo (from Khan Academy) and other tutoring platforms are leveraging LLMs to provide real-time help to students. LLMs can break down complex topics, provide feedback on writing, and simulate Socratic-style dialogues.',
'. For example, integrating an LLM into a customer support chatbot might involve connecting it to a company’s internal knowledge base, enabling it to answer customer questions using accurate, up-to-date information.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
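Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a modest drop in retrieval quality (see the evaluation table below). A minimal sketch using the `truncate_dim` argument of `SentenceTransformer`:
```python
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to one of the trained Matryoshka dims
# (768, 512, 256, 128, or 64).
model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb", truncate_dim=256)

embeddings = model.encode([
    "What type of dialogues can LLMs simulate?",
    "LLMs can simulate Socratic-style dialogues.",
])
print(embeddings.shape)
# (2, 256)
```
Truncated vectors trade a little retrieval quality for lower storage cost and faster similarity search.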
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_accuracy@3 | 0.8 | 0.8 | 0.8667 | 0.7333 | 0.6667 |
| cosine_accuracy@5 | 1.0 | 0.8667 | 1.0 | 0.8 | 0.8 |
| cosine_accuracy@10 | 1.0 | 1.0 | 1.0 | 0.9333 | 0.8667 |
| cosine_precision@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_precision@3 | 0.2667 | 0.2667 | 0.2889 | 0.2444 | 0.2222 |
| cosine_precision@5 | 0.2 | 0.1733 | 0.2 | 0.16 | 0.16 |
| cosine_precision@10 | 0.1 | 0.1 | 0.1 | 0.0933 | 0.0867 |
| cosine_recall@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_recall@3 | 0.8 | 0.8 | 0.8667 | 0.7333 | 0.6667 |
| cosine_recall@5 | 1.0 | 0.8667 | 1.0 | 0.8 | 0.8 |
| cosine_recall@10 | 1.0 | 1.0 | 1.0 | 0.9333 | 0.8667 |
| **cosine_ndcg@10** | **0.8311** | **0.8204** | **0.8357** | **0.7204** | **0.6507** |
| cosine_mrr@10 | 0.7767 | 0.7652 | 0.7822 | 0.6541 | 0.5822 |
| cosine_map@100 | 0.7767 | 0.7652 | 0.7822 | 0.6592 | 0.5889 |
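Each `dim_*` column corresponds to one evaluator run with embeddings truncated to that dimensionality. A sketch of how such a run could be reproduced, using a hypothetical one-query toy corpus in place of the held-out evaluation split:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")

# Hypothetical toy data; the real evaluation used held-out QuicKB chunks.
queries = {"q1": "What type of dialogues can LLMs simulate?"}
corpus = {"d1": "LLMs can break down complex topics, provide feedback on "
                "writing, and simulate Socratic-style dialogues."}
relevant_docs = {"q1": {"d1"}}  # maps each query id to its relevant doc ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate at a truncated Matryoshka dimension
)
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```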
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 127 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 127 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
* Samples:
| anchor | positive |
|:-------|:---------|
| <code>What task mentioned is related to providing answers to inquiries?</code> | <code>. These include text generation, summarization, translation, question answering, code generation, and more.</code> |
| <code>What do LLMs learn to work effectively?</code> | <code>LLMs work by learning statistical relationships between words and phrases, allowing them to predict and generate language that feels natural. The power of these models lies not only in their size but also in the diversity of tasks they can perform with little to no task-specific training</code> |
| <code>In which industries is the generalization ability considered useful?</code> | <code>. This generalization ability makes them incredibly useful across industries—from customer service and education to software development and healthcare.</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
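This configuration corresponds roughly to the following loss construction, sketched here for clarity (the trainer and dataset wiring are omitted):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# The inner loss treats the other in-batch positives as negatives for each
# anchor; MatryoshkaLoss applies it at every listed dimension and sums the
# equally weighted per-dimension losses.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```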
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
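These settings map onto `SentenceTransformerTrainingArguments` roughly as follows; a sketch, with the output directory and save strategy as assumptions (the latter must match `eval_strategy` for `load_best_model_at_end` to work):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-quickb",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",                 # assumption: required by load_best_model_at_end
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,         # effective batch size of 4 * 8 = 32
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate positives in a
                                                # batch, important for in-batch negatives
)
```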
#### All Hyperparameters