SentenceTransformer
This is a sentence-transformers model fine-tuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
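Because the final `Normalize()` module L2-normalizes every embedding, the cosine similarity used by this model reduces to a plain dot product. A minimal numpy illustration with stand-in vectors (not actual model outputs):

```python
import numpy as np

# Stand-ins for two L2-normalized 384-dim embeddings, as Normalize() produces
a = np.random.randn(384); a /= np.linalg.norm(a)
b = np.random.randn(384); b /= np.linalg.norm(b)

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(cosine, a @ b)  # norms are 1, so cosine == dot product
```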
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DigitalAsocial/all-MiniLM-L6-v2-ds-rag-s")

# Run inference
sentences = [
    'If the weights are independent and the prior is taken as Gaussian, N (0, 1/2λ)\n\nthe MAP estimate minimizes the augmented error function\n\nwhere E is the usual classification or regression error (negative log likelihood).',
    'This approach of removing unnecessary parameters is known as ridge regression in statistics.',
    'These architectures are used across supervised, unsupervised, and reinforcement learning.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4685, 0.1523],
#         [0.4685, 1.0000, 0.1570],
#         [0.1523, 0.1570, 1.0000]])
```
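The same embeddings also support semantic search over a corpus. A minimal sketch using `util.semantic_search`; the corpus and query strings are illustrative placeholders, not part of the training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("DigitalAsocial/all-MiniLM-L6-v2-ds-rag-s")

# Illustrative corpus; in a RAG setting these would be book passages
corpus = [
    "Ridge regression shrinks coefficients with an L2 penalty.",
    "Q-learning estimates action values from sampled transitions.",
    "Mutual information measures the dependence between two variables.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode(
    "How does L2 regularization affect model weights?", convert_to_tensor=True
)

# Retrieve the top-2 most similar passages for the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")
```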
Evaluation
Metrics
Semantic Similarity
- Dataset: `val`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric | Value |
|---|---|
| pearson_cosine | nan |
| spearman_cosine | nan |
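Both reported values are NaN, which typically happens when the gold scores for the `val` pairs are constant or missing, leaving the correlation undefined. The evaluator can be re-run on any labeled pairs; a minimal sketch with hypothetical sentence pairs and gold similarity scores:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("DigitalAsocial/all-MiniLM-L6-v2-ds-rag-s")

# Hypothetical labeled pairs; replace with a real validation split
sentences1 = ["Ridge regression adds an L2 penalty.",
              "The agent maximizes expected return.",
              "Entropy measures uncertainty."]
sentences2 = ["Weight decay shrinks parameters.",
              "Gradient descent minimizes a loss.",
              "Information content depends on probability."]
gold_scores = [0.9, 0.3, 0.7]  # similarity labels in [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="val")
print(evaluator(model))  # reports val_pearson_cosine and val_spearman_cosine
```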
Training Details
Training Dataset
Training Data
The model was fine-tuned on a corpus drawn from 17 reference books in Data Science and Machine Learning. All source books were preprocessed with GROBID, an open-source tool for extracting and structuring text from PDF documents: the raw PDF files were converted into structured text, segmented into sentences, and cleaned before training, which ensured consistent formatting and reliable sentence boundaries across the dataset. (A sketch of this pipeline appears after the book list below.) The books are:
- Alpaydin, Ethem. Introduction to Machine Learning. 3rd Edition, MIT Press, 2014.
- Aßenmacher, Matthias. Multimodal Deep Learning. Self-published, 2023.
- Bertsekas, Dimitri P. A Course in Reinforcement Learning. Arizona State University.
- Boykis, Vicki. What are Embeddings. Self-published, 2023.
- Bruce, Peter, and Andrew Bruce. Practical Statistics for Data Scientists: 50 Essential Concepts. O’Reilly Media, 2017.
- Daumé III, Hal. A Course in Machine Learning. Self-published.
- Deisenroth, Marc Peter, A. Aldo Faisal, and Cheng Soon Ong. Mathematics for Machine Learning. Cambridge University Press, 2020.
- Devlin, Tyler, Jingru Guo, Daniel Kunin, and Daniel Xiang. Seeing Theory. Brown University.
- Gutmann, Michael U. Pen & Paper: Exercises in Machine Learning. Self-published.
- Jung, Alexander. Machine Learning: The Basics. Springer, 2022.
- Langr, Jakub, and Vladimir Bok. GANs in Action: Deep Learning with Generative Adversarial Networks. Manning Publications, 2019.
- MacKay, David J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
- Montgomery, Douglas C., Cheryl L. Jennings, and Murat Kulahci. Introduction to Time Series Analysis and Forecasting. 2nd Edition, Wiley, 2015.
- Nilsson, Nils J. Introduction to Machine Learning: An Early Draft of a Proposed Textbook. Stanford University, 1996.
- Prince, Simon J.D. Understanding Deep Learning. Draft Edition, 2024.
- Shashua, Amnon. Introduction to Machine Learning. The Hebrew University of Jerusalem, 2008.
- Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. 2nd Edition, MIT Press, 2018.
⚠️ Note: Due to copyright restrictions, the full text of these books is not included in this repository. Only the fine-tuned model weights are shared.
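A rough sketch of the GROBID preprocessing pipeline mentioned above, assuming a local GROBID server on its default port and NLTK for sentence splitting (the file path, helper name, and cleanup steps are illustrative, not the exact scripts used):

```python
import re
import nltk
import requests

# GROBID's REST endpoint for full-text extraction (default local install)
GROBID_URL = "http://localhost:8070/api/processFulltextDocument"

def pdf_to_sentences(pdf_path: str) -> list[str]:
    # Send the PDF to GROBID; the response is TEI XML with structured text
    with open(pdf_path, "rb") as f:
        tei_xml = requests.post(GROBID_URL, files={"input": f}).text

    # Crude cleanup for illustration: strip XML tags, collapse whitespace
    text = re.sub(r"<[^>]+>", " ", tei_xml)
    text = re.sub(r"\s+", " ", text).strip()

    # Segment into sentences with NLTK's Punkt tokenizer
    nltk.download("punkt", quiet=True)
    return nltk.sent_tokenize(text)

sentences = pdf_to_sentences("book.pdf")  # illustrative input file
```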
Unnamed Dataset
- Size: 134,200 training samples
- Columns: `sentence_0` and `sentence_1`
- Approximate statistics based on the first 1000 samples:

  |  | sentence_0 | sentence_1 |
  |---|---|---|
  | type | string | string |
  | details | min: 8 tokens, mean: 39.42 tokens, max: 256 tokens | min: 7 tokens, mean: 41.22 tokens, max: 256 tokens |

- Samples:

  | sentence_0 | sentence_1 |
  |---|---|
  | This equation is somewhat similar to what you have seen before (as a high-level simplification of equation 5.1), with some important differences. The critic is trying to estimate the earth mover's distance, and looks for the maximum difference between the real (first term) and the generated (second term) distribution under different (valid) parametrizations of the f_w function. | [2, p.173] Sketch the mutual information for this channel as a function of the input distribution p. Pick a convenient two-dimensional representation of p. |
  | The optimization routine must therefore take account of the possibility that, as we go up hill on I(X; Y), we may run into the inequality constraints p_i ≥ 0. I(X; Y) is a convex function of the channel parameters. | • Derive the AdaBoost algorithm. • Understand the relationship between boosting decision stumps and linear classification. |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
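For reference, this is roughly how the loss is attached to the model in Sentence Transformers, with the scale and similarity function listed above (a sketch, not the exact training script):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each (sentence_0, sentence_1) pair is a positive; all other sentence_1
# entries in the batch serve as in-batch negatives
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                   # logit scale from the parameters above
    similarity_fct=util.cos_sim,  # cosine similarity, as listed above
)
```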
Training Hyperparameters
Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 6
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 6
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
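A sketch of how the non-default values above map onto a Sentence Transformers v3+ training configuration (the output path is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # illustrative path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=6,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```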
Training Logs
📈 Loss Curve (per epoch)
| Epoch | Loss | Grad Norm |
|---|---|---|
| 0.06 | 1.5921 | 19.76 |
| 0.30 | 1.2517 | 13.44 |
| 0.60 | 1.0856 | 13.96 |
| 0.95 | 0.9268 | 13.22 |
| 1.25 | 0.7569 | 14.07 |
| 1.61 | 0.6757 | 11.21 |
| 1.97 | 0.6409 | 12.76 |
| 2.32 | 0.5111 | 20.53 |
| 2.74 | 0.5059 | 14.88 |
| 3.10 | 0.3880 | 9.56 |
| 3.46 | 0.3792 | 9.78 |
| 3.87 | 0.3750 | inf |
| 4.11 | 0.3345 | 11.03 |
| 4.47 | 0.3271 | 13.21 |
| 4.83 | 0.3064 | 11.37 |
| 5.07 | 0.2752 | 19.23 |
| 5.36 | 0.2740 | 15.45 |
| 5.72 | 0.2773 | 6.55 |
| 5.96 | 0.2710 | 19.79 |
| 6.00 | 0.5610 | — (final train loss) |
Framework Versions
- Python: 3.11.7
- Sentence Transformers: 5.1.1
- Transformers: 4.57.3
- PyTorch: 2.5.1+cu121
- Accelerate: 1.12.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
If you use this model, please cite:
```bibtex
@misc{aghakhani2025synergsticrag,
    author = {Danial Aghakhani Zadeh},
    title = {Fine-tuned all-MiniLM-L6-v2 for Data Science RAG},
    year = {2025},
    publisher = {Hugging Face},
    howpublished = {\url{https://huggingface.co/DigitalAsocial/all-MiniLM-L6-v2-ds-rag-s}}
}
```