---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:16692
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: Al Klug position played tackle
  sentences:
  - Alfred Klug position played tackle
  - Brad Edwards position played Safety postion
  - Michael Jackson's Ghosts position played tackle
- source_sentence: Istanbul Province capital Istanbul
  sentences:
  - Istanbul Province capital İstanbul
  - The Man Between capital Istanbul
  - De Nederlandsche Bank currency euros
- source_sentence: Rope director Alfred Hitchcock
  sentences:
  - Piano Concerto in F major composer Wolfgang Amadeus Mozart
  - Rope director Eswatini
  - Rope director The Master of Suspense
- source_sentence: Jadwiga Kiszczak member of Solidarity
  sentences:
  - Jadwiga Kiszczak member of Solidarność
  - Jadwiga Kiszczak member of Lada Zapad Tolyatti
  - Rocawear chief executive officer HOV
- source_sentence: Armenia Stock Exchange currency Armenian dram
  sentences:
  - ARMEX currency Armenian dram
  - Courrendlin currency Armenian dram
  - Alexander Tetelbaum notable work artificial stupidity
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
SentenceTransformer based on google/embeddinggemma-300m
This is a sentence-transformers model finetuned from google/embeddinggemma-300m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: google/embeddinggemma-300m
- Maximum Sequence Length: 2048 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
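Because the architecture ends in a Normalize() module (see Full Model Architecture below), the reported cosine similarity reduces to a dot product of the normalized embeddings. A minimal sketch, assuming two NumPy embedding vectors returned by the model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Embeddings from this model are already L2-normalized, so the
    # denominator is ~1.0 and this reduces to a plain dot product.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom)
```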
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
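As a rough illustration of the Pooling module above (pooling_mode_mean_tokens=True), mean pooling averages the token embeddings of non-padding positions. A minimal PyTorch sketch, assuming token_embeddings of shape [batch, seq_len, 768] and a 0/1 attention_mask:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Expand the mask to the embedding dimension and zero out padding tokens.
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # [batch, seq_len, 1]
    summed = (token_embeddings * mask).sum(dim=1)                   # [batch, dim]
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # [batch, 1]
    return summed / counts                                          # masked mean
```

The pooled vector then passes through the two Dense projections (768 → 3072 → 768) and is L2-normalized by the final module.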
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("michaeleliot/claim-model")
# Run inference
queries = [
"Armenia Stock Exchange currency Armenian dram",
]
documents = [
'ARMEX currency Armenian dram',
'Courrendlin currency Armenian dram',
'Alexander Tetelbaum notable work artificial stupidity',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.7450, 0.0545, -0.0502]])
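For retrieval over a larger corpus than the three documents above, the library's util.semantic_search helper returns the top-k matches per query by cosine similarity. A short sketch (the corpus contents are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("michaeleliot/claim-model")

# Illustrative corpus; replace with your own claims/documents.
corpus = [
    "ARMEX currency Armenian dram",
    "Courrendlin currency Armenian dram",
    "Alexander Tetelbaum notable work artificial stupidity",
]
corpus_embeddings = model.encode_document(corpus)
query_embeddings = model.encode_query(["Armenia Stock Exchange currency Armenian dram"])

# Top-2 nearest documents per query; each hit is {"corpus_id": ..., "score": ...}.
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], hit["score"])
```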
Training Details
Training Dataset
Unnamed Dataset
- Size: 16,692 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

|  | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 5 tokens, mean: 10.57 tokens, max: 27 tokens | min: 5 tokens, mean: 11.4 tokens, max: 41 tokens | min: 5 tokens, mean: 10.6 tokens, max: 37 tokens |
- Samples:

| anchor | positive | negative |
|---|---|---|
| Amsterdam-Pleyel group chairperson Henri Barbusse | Amsterdam-Pleyel movement chairperson Henri Barbusse | Regius Professor of Greek chairperson Henri Barbusse |
| Ali Muksin religion Islam | Ali Muksin religion Mohammedanism | Ali Muksin religion TogliattiAzot |
| Susanne Wampfler member of International Astronomical Union | Susanne Wampfler member of International Astronomical Union | Led Zeppelin member of International Astronomical Union |

- Loss: MultipleNegativesRankingLoss with these parameters:

{ "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false }
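As a rough sketch, this loss would be constructed as follows; scale=20.0 and cosine similarity are also the library defaults:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("google/embeddinggemma-300m")

# In-batch negatives ranking loss with the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```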
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 4
- gradient_accumulation_steps: 4
- learning_rate: 2e-05
- num_train_epochs: 1
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- fp16: True
- dataloader_num_workers: 4
- gradient_checkpointing: True
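A minimal sketch of how the non-default values above map onto SentenceTransformerTrainingArguments; the output_dir is illustrative:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="claim-model",          # illustrative output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,     # effective batch size 16 per device
    learning_rate=2e-5,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    dataloader_num_workers=4,
    gradient_checkpointing=True,
)
```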
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 4
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 4
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: True
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
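Putting the pieces together, a hedged end-to-end sketch of the training loop: a tiny illustrative triplet dataset standing in for the 16,692-row one, the loss from above, and a SentenceTransformerTrainer. The training arguments from the earlier sketch can be passed via args= to reproduce the non-default hyperparameters.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Illustrative stand-in for the anchor/positive/negative dataset described above.
train_dataset = Dataset.from_dict({
    "anchor": ["Ali Muksin religion Islam"],
    "positive": ["Ali Muksin religion Mohammedanism"],
    "negative": ["Ali Muksin religion TogliattiAzot"],
})

model = SentenceTransformer("google/embeddinggemma-300m")
loss = losses.MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,  # pass args=... to use the SentenceTransformerTrainingArguments above
)
trainer.train()
```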
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.0479 | 50 | 0.0713 |
| 0.0959 | 100 | 0.0909 |
| 0.1438 | 150 | 0.0582 |
| 0.1917 | 200 | 0.0219 |
| 0.2396 | 250 | 0.0345 |
| 0.2876 | 300 | 0.0213 |
| 0.3355 | 350 | 0.0191 |
| 0.3834 | 400 | 0.0128 |
| 0.4313 | 450 | 0.0355 |
| 0.4793 | 500 | 0.0341 |
| 0.5272 | 550 | 0.0358 |
| 0.5751 | 600 | 0.0038 |
| 0.6231 | 650 | 0.0049 |
| 0.6710 | 700 | 0.0274 |
| 0.7189 | 750 | 0.0190 |
| 0.7668 | 800 | 0.0061 |
| 0.8148 | 850 | 0.0228 |
| 0.8627 | 900 | 0.0121 |
| 0.9106 | 950 | 0.0012 |
| 0.9585 | 1000 | 0.0020 |
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.0
- Transformers: 4.57.3
- PyTorch: 2.9.0+cu126
- Accelerate: 1.12.0
- Datasets: 4.0.0
- Tokenizers: 0.22.2
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}