# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
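The `Pooling` module condenses BERT's per-token outputs into a single 384-dimensional sentence vector by mean pooling over non-padding tokens (`pooling_mode_mean_tokens: True`). A minimal sketch of that operation, assuming standard `(batch, seq_len, dim)` tensors; the function name is illustrative, not part of the library:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 384); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # (batch, 384)
```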
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Husain/ramdam_fingerprint_embedding_model")

# Run inference
sentences = [
    'A cat is on a robot.',
    'A man is eating bread.',
    'A woman is pouring eyes into a bowl.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `sts-dev`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric | Value |
|---|---|
| pearson_cosine | 0.9187 |
| spearman_cosine | 0.9276 |
| pearson_manhattan | 0.8991 |
| spearman_manhattan | 0.9321 |
| pearson_euclidean | 0.9015 |
| spearman_euclidean | 0.9290 |
| pearson_dot | 0.8789 |
| spearman_dot | 0.8957 |
| pearson_max | 0.9187 |
| spearman_max | 0.9321 |
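These numbers can be reproduced with the same evaluator class. A minimal sketch, assuming `sts-dev` corresponds to the validation split of the `sentence-transformers/stsb` dataset referenced under Training Details:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Husain/ramdam_fingerprint_embedding_model")

# Assumption: sts-dev is the stsb validation split
stsb = load_dataset("sentence-transformers/stsb", split="validation")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
    name="sts-dev",
)
print(evaluator(model))  # dict of metrics, e.g. sts-dev_spearman_cosine
```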
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 101 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 101 samples:

  | | sentence1 | sentence2 | score |
  |---|---|---|---|
  | type | string | string | float |
  | details | min: 7 tokens, mean: 9.44 tokens, max: 14 tokens | min: 3 tokens, mean: 9.46 tokens, max: 15 tokens | min: 0.1, mean: 0.66, max: 1.0 |

- Samples:

  | sentence1 | sentence2 | score |
  |---|---|---|
  | A plane is taking off. | An air plane is taking off. | 1.0 |
  | A man is playing a large flute. | A man is playing a flute. | 0.76 |
  | A man is spreading shreded cheese on a pizza. | A man is spreading shredded cheese on an uncooked pizza. | 0.76 |

- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
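For context, `CoSENTLoss` is a ranking loss over pairs of pairs: whenever the gold score of pair \\((i, j)\\) exceeds that of pair \\((k, l)\\), the model is penalized unless \\(\cos(u_i, u_j)\\) is sufficiently above \\(\cos(u_k, u_l)\\). With the scale \\(\lambda = 20\\) configured above, a sketch of the objective from the cited CoSENT post:

$$
\mathcal{L} = \log\left(1 + \sum_{\mathrm{sim}(i,j) > \mathrm{sim}(k,l)} \exp\bigl(\lambda \left[\cos(u_k, u_l) - \cos(u_i, u_j)\right]\bigr)\right)
$$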
### Evaluation Dataset

#### stsb

- Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at ab7a5ac
- Size: 1,500 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  | | sentence1 | sentence2 | score |
  |---|---|---|---|
  | type | string | string | float |
  | details | min: 6 tokens, mean: 9.35 tokens, max: 13 tokens | min: 7 tokens, mean: 9.9 tokens, max: 16 tokens | min: 0.0, mean: 0.39, max: 1.0 |

- Samples:

  | sentence1 | sentence2 | score |
  |---|---|---|
  | A woman is riding on a horse. | A man is turning over tables in anger. | 0.0 |
  | A man is screwing wood to a wall. | A man is giving a woman a massage. | 0.04 |
  | A girl is playing a flute. | A girl plays a wind instrument. | 0.64 |

- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
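Putting the datasets and loss together, a minimal training sketch using the Sentence Transformers v3 trainer API; the inline `train_dataset` is a stand-in for the unnamed 101-sample dataset, which is not published:

```python
from datasets import Dataset, load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Stand-in for the unnamed training dataset (columns: sentence1, sentence2, score)
train_dataset = Dataset.from_dict({
    "sentence1": ["A plane is taking off."],
    "sentence2": ["An air plane is taking off."],
    "score": [1.0],
})
eval_dataset = load_dataset("sentence-transformers/stsb", split="validation")

loss = CoSENTLoss(model, scale=20.0)  # the parameters listed above
trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```

The hyperparameters below would be supplied via `SentenceTransformerTrainingArguments` (see the sketch after the non-default list).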
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `save_only_model`: True
- `seed`: 33
- `fp16`: True
- `load_best_model_at_end`: True
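These map directly onto `SentenceTransformerTrainingArguments`, a subclass of `transformers.TrainingArguments`. A sketch of how they would be passed; `output_dir` is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    save_only_model=True,
    seed=33,
    fp16=True,
    load_best_model_at_end=True,
)
```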
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: True
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 33
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs
| Epoch | Step | loss | sts-dev_spearman_cosine |
|---|---|---|---|
| 0.1538 | 2 | 4.4641 | 0.9366 |
| 0.3077 | 4 | 4.4652 | 0.9366 |
| 0.4615 | 6 | 4.4719 | 0.9366 |
| 0.6154 | 8 | 4.4903 | 0.9366 |
| 0.7692 | 10 | 4.5264 | 0.9373 |
| 0.9231 | 12 | 4.5954 | 0.9339 |
| 1.0769 | 14 | 4.6832 | 0.9328 |
| 1.2308 | 16 | 4.7534 | 0.9289 |
| 1.3846 | 18 | 4.8155 | 0.9281 |
| 1.5385 | 20 | 4.8788 | 0.9269 |
| 1.6923 | 22 | 4.9350 | 0.9272 |
| 1.8462 | 24 | 4.9789 | 0.9239 |
| 2.0 | 26 | 5.0132 | 0.9230 |
| 2.1538 | 28 | 5.0636 | 0.9237 |
| 2.3077 | 30 | 5.1068 | 0.9202 |
| 2.4615 | 32 | 5.1460 | 0.9172 |
| 2.6154 | 34 | 5.1602 | 0.9164 |
| 2.7692 | 36 | 5.1493 | 0.9210 |
| 2.9231 | 38 | 5.1399 | 0.9200 |
| 3.0769 | 40 | 5.1342 | 0.9235 |
| 3.2308 | 42 | 5.1413 | 0.9258 |
| 3.3846 | 44 | 5.1440 | 0.9271 |
| 3.5385 | 46 | 5.1583 | 0.9311 |
| 3.6923 | 48 | 5.1664 | 0.9293 |
| 3.8462 | 50 | 5.1682 | 0.9293 |
| 4.0 | 52 | 5.1617 | 0.9293 |
| 4.1538 | 54 | 5.1543 | 0.9293 |
| 4.3077 | 56 | 5.1480 | 0.9293 |
| 4.4615 | 58 | 5.1428 | 0.9291 |
| 4.6154 | 60 | 5.1292 | 0.9298 |
| 4.7692 | 62 | 5.1271 | 0.9276 |
| 4.9231 | 64 | 5.1133 | 0.9276 |
| 5.0769 | 66 | 5.0928 | 0.9270 |
| 5.2308 | 68 | 5.0874 | 0.9270 |
| 5.3846 | 70 | 5.0755 | 0.9270 |
| 5.5385 | 72 | 5.0665 | 0.9270 |
| 5.6923 | 74 | 5.0676 | 0.9293 |
| 5.8462 | 76 | 5.0747 | 0.9293 |
| 6.0 | 78 | 5.0647 | 0.9295 |
| 6.1538 | 80 | 5.0763 | 0.9273 |
| 6.3077 | 82 | 5.0832 | 0.9272 |
| 6.4615 | 84 | 5.0750 | 0.9289 |
| 6.6154 | 86 | 5.0547 | 0.9289 |
| 6.7692 | 88 | 5.0350 | 0.9308 |
| 6.9231 | 90 | 5.0221 | 0.9308 |
| 7.0769 | 92 | 5.0107 | 0.9308 |
| 7.2308 | 94 | 4.9967 | 0.9297 |
| 7.3846 | 96 | 4.9983 | 0.9297 |
| 7.5385 | 98 | 5.0026 | 0.9277 |
| 7.6923 | 100 | 5.0095 | 0.9277 |
| 7.8462 | 102 | 5.0102 | 0.9277 |
| 8.0 | 104 | 5.0055 | 0.9271 |
| 8.1538 | 106 | 5.0031 | 0.9271 |
| 8.3077 | 108 | 4.9976 | 0.9271 |
| 8.4615 | 110 | 4.9941 | 0.9271 |
| 8.6154 | 112 | 4.9856 | 0.9276 |
| 8.7692 | 114 | 4.9821 | 0.9276 |
| 8.9231 | 116 | 4.9782 | 0.9276 |
| 9.0769 | 118 | 4.9706 | 0.9276 |
| 9.2308 | 120 | 4.9646 | 0.9276 |
| 9.3846 | 122 | 4.9584 | 0.9276 |
| 9.5385 | 124 | 4.9537 | 0.9276 |
| 9.6923 | 126 | 4.9499 | 0.9276 |
| 9.8462 | 128 | 4.9485 | 0.9276 |
| 10.0 | 130 | 4.9463 | 0.9276 |
- The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.1.0
- Transformers: 4.44.2
- PyTorch: 2.3.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```