---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:42459
- loss:TripletLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: policy for how can i verify if a tekton task version is still supported by checking for the build.appstudio.redhat.com/expires-on annotation?
  sentences:
  - 'Helper: lib.to_array

    Signature: to_array(s)

    Description: '
  - 'Helper: lib.pipelinerun_attestations

    Signature: pipelinerun_attestations

    Description: '
  - 'Helper: lib.k8s.name

    Signature: name(resource)

    Description: '
- source_sentence: how to check attestation is missing statement field.
  sentences:
  - 'Helper: lib.k8s.name

    Signature: name(resource)

    Description: '
  - 'Helper: lib.tekton.untrusted_task_refs

    Signature: untrusted_task_refs(tasks)

    Description: '
  - 'Helper: lib.k8s.version

    Signature: version(resource)

    Description: '
- source_sentence: I need to ensure the operators.openshift.io/valid-subscription annotation in the ClusterServiceVersion manifest contains a valid JSON encoded non-empty array of strings.
  sentences:
  - 'Helper: lib.to_array

    Signature: to_array(s)

    Description: '
  - 'Helper: lib.image.equal_ref

    Signature: equal_ref(ref1, ref2)

    Description: '
  - 'Helper: lib.result_helper

    Signature: result_helper(chain, failure_sprintf_params)

    Description: '
- source_sentence: write a rule to deny approval for an container image with non-unique RPM names
  sentences:
  - 'Helper: lib.result_helper

    Signature: result_helper(chain, failure_sprintf_params)

    Description: '
  - 'Helper: lib.to_set

    Signature: to_set(arr)

    Description: '
  - 'Helper: lib.rule_data_defaults

    Signature: rule_data_defaults

    Description: '
- source_sentence: check if i need to validate that spdx package is an operating system component.
  sentences:
  - 'Helper: lib.to_set

    Signature: to_set(arr)

    Description: '
  - 'Helper: lib.rule_data_defaults

    Signature: rule_data_defaults

    Description: '
  - 'Helper: lib.result_helper

    Signature: result_helper(chain, failure_sprintf_params)

    Description: '
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: retrieval eval
      type: retrieval-eval
    metrics:
    - type: cosine_accuracy
      value: 0.9834675788879395
      name: Cosine Accuracy
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'check if i need to validate that spdx package is an operating system component.',
    'Helper: lib.result_helper\nSignature: result_helper(chain, failure_sprintf_params)\nDescription: ',
    'Helper: lib.to_set\nSignature: to_set(arr)\nDescription: ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.4979, -0.4443],
#         [ 0.4979,  1.0000, -0.4918],
#         [-0.4443, -0.4918,  1.0000]])
```
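Since the training pairs map natural-language policy questions to `Helper:` documentation strings, a typical use is retrieving the best-matching helper for a query. Below is a minimal retrieval sketch: the three-entry helper corpus is illustrative (borrowed from the widget examples above), and `sentence_transformers_model_id` is the same placeholder as in the snippet above.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

# Illustrative helper corpus, taken from the widget examples above
helpers = [
    "Helper: lib.to_array\nSignature: to_array(s)\nDescription: ",
    "Helper: lib.to_set\nSignature: to_set(arr)\nDescription: ",
    "Helper: lib.tekton.expiry_of\nSignature: expiry_of(task)\nDescription: ",
]
query = "verify that task has an expiry date set."

# Embed the query and the corpus, then rank helpers by cosine similarity
query_embedding = model.encode([query])
helper_embeddings = model.encode(helpers)
scores = model.similarity(query_embedding, helper_embeddings)  # shape [1, 3]
print(helpers[scores.argmax().item()])  # highest-scoring helper for the query
```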
## Evaluation

### Metrics

#### Triplet

* Dataset: `retrieval-eval`
* Evaluated with [`TripletEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9835** |
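The `cosine_accuracy` above is the fraction of evaluation triplets in which the anchor embedding lies closer to its positive than to its negative. The run can be reproduced with `TripletEvaluator`; the triplet below is a made-up stand-in, since the actual `retrieval-eval` split is not shipped with this card.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("sentence_transformers_model_id")

# Stand-in triplet; the real retrieval-eval split is not included in this card
evaluator = TripletEvaluator(
    anchors=["verify that task has an expiry date set."],
    positives=["Helper: lib.tekton.expiry_of\nSignature: expiry_of(task)\nDescription: "],
    negatives=["Helper: lib.to_set\nSignature: to_set(arr)\nDescription: "],
    name="retrieval-eval",
)
print(evaluator(model))  # e.g. {'retrieval-eval_cosine_accuracy': ...}
```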
## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 42,459 training samples
* Columns: `sentence_0`, `sentence_1`, and `sentence_2`
* Approximate statistics based on the first 1000 samples:

  |         | sentence_0 | sentence_1 | sentence_2 |
  |:--------|:-----------|:-----------|:-----------|
  | type    | string     | string     | string     |
  | details |            |            |            |

* Samples:

  | sentence_0 | sentence_1 | sentence_2 |
  |:-----------|:-----------|:-----------|
  | <code>I need to ensure that only images from specific registries are used in our policy</code> | <code>Helper: lib.image.str<br>Signature: str(d)<br>Description: </code> | <code>Helper: lib.konflux.is_validating_image_index<br>Signature: is_validating_image_index<br>Description: </code> |
  | <code>check if check warn</code> | <code>Helper: lib.tekton.expiry_of<br>Signature: expiry_of(task)<br>Description: </code> | <code>Helper: lib.tekton.untagged_task_references<br>Signature: untagged_task_references(tasks)<br>Description: </code> |
  | <code>verify that task has an expiry date set.</code> | <code>Helper: lib.tekton.task_param<br>Signature: task_param(task, name)<br>Description: </code> | <code>Helper: lib.tekton.untagged_task_references<br>Signature: untagged_task_references(tasks)<br>Description: </code> |
* Loss: [TripletLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:

  ```json
  {
      "distance_metric": "TripletDistanceMetric.COSINE",
      "triplet_margin": 0.5
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
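Putting the pieces above together, the fine-tuning setup can be approximated as follows. This is a sketch rather than the exact training script: the single-row dataset is a placeholder for the real 42,459 triplets (which are not published with this card), while the loss parameters, column names, batch size, and epoch count come straight from the sections above.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder row with the same column layout as the real training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["verify that task has an expiry date set."],
    "sentence_1": ["Helper: lib.tekton.expiry_of\nSignature: expiry_of(task)\nDescription: "],
    "sentence_2": ["Helper: lib.to_set\nSignature: to_set(arr)\nDescription: "],
})

# Loss parameters as reported above: cosine distance with a 0.5 margin
loss = TripletLoss(
    model,
    distance_metric=TripletDistanceMetric.COSINE,
    triplet_margin=0.5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=128,  # as reported above
    num_train_epochs=5,               # as reported above
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```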
### Training Logs

| Epoch  | Step | Training Loss | retrieval-eval_cosine_accuracy |
|:------:|:----:|:-------------:|:------------------------------:|
| 0.5    | 166  | -             | 0.9731                         |
| 1.0    | 332  | -             | 0.9786                         |
| 1.5    | 498  | -             | 0.9794                         |
| 1.5060 | 500  | 0.0784        | -                              |
| 2.0    | 664  | -             | 0.9816                         |
| 2.5    | 830  | -             | 0.9826                         |
| 3.0    | 996  | -             | 0.9835                         |

### Framework Versions
- Python: 3.12.9
- Sentence Transformers: 5.2.0
- Transformers: 4.57.3
- PyTorch: 2.7.1+cu128
- Accelerate: 1.12.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### TripletLoss
```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```