---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5600
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-small-en-v1.5
widget:
- source_sentence: What is the main factor of signal interference in MCFs?
  sentences:
  - The main factor of signal interference in MCFs is crosstalk, which is the leakage of a fraction of the signal power from a given core to its neighboring core.
  - An integrity group temporal key (IGTK) is a random value used to protect group addressed medium access control (MAC) management protocol data units (MMPDUs) from a broadcast/multicast source station (STA).
  - Wireless sensing through the combined use of radio wave and AI technologies aims to identify objects and recognize actions with high precision.
- source_sentence: What types of drones can be used to construct multi-tier drone-cell networks?
  sentences:
  - The coupling coefficient represents the tightness of coupling between transmit and receive coils in wireless charging systems.
  - A cheap, slow photodiode placed next to the rear face of the laser package is commonly used as the monitor detector in laser drive circuits.
  - Multi-tier drone-cell networks can be constructed by utilizing several drone types, similar to terrestrial HetNets with macro-, small-, femtocells, and relays.
- source_sentence: Which technology was explored for high capacity last mile and pre-aggregation backhaul in small cell networks?
  sentences:
  - According to Pearl's Ladder of Causation, counterfactual questions can only be answered if information from all other levels (associational and interventional) is available. Counterfactuals subsume interventional and associational questions, and therefore sit at the top of the hierarchy.
  - Shannon's classical source coding theorem provides the minimum distortion achievable in encoding a Gaussian stationary input signal.
  - The passage mentions that 60 GHz and 70-80 GHz millimeter wave communication technologies were explored for high capacity last mile and pre-aggregation backhaul in small cell networks.
- source_sentence: What is the main output of the design procedure for a passive lossless Huygens metasurface?
  sentences:
  - Entanglement distillation is the process of purifying imperfect entangled states to obtain maximally entangled states.
  - The main output of the design procedure is the transmitted fields as well as the surface impedance and admittance.
  - The component of IoT responsible for sensing and collecting data is the sensors.
- source_sentence: What is the formula for the relative entropy between two probability density functions?
  sentences:
  - The consequence of the fact that the total power radiated varies as the square of the frequency of the oscillation is that shorter wavelength (higher frequency) light is scattered much more strongly than longer wavelength (lower frequency) light.
  - Hybrid infrastructures are comprised of various proximate and distant computing nodes, either mobile or immobile.
  - The relative entropy between two probability density functions f and g is equal to the negative integral of f(x) multiplied by the logarithm of the ratio of f(x) and g(x), with respect to x.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_recall@1
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on BAAI/bge-small-en-v1.5
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: telecom ir eval
      type: telecom-ir-eval
    metrics:
    - type: cosine_accuracy@1
      value: 0.9733333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.995
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.995
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.995
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.9733333333333334
      name: Cosine Precision@1
    - type: cosine_recall@1
      value: 0.9733333333333334
      name: Cosine Recall@1
    - type: cosine_ndcg@10
      value: 0.985912396714286
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9827777777777778
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9831452173557438
      name: Cosine Map@100
---

# SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on the csv dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - csv

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'What is the formula for the relative entropy between two probability density functions?',
    'The relative entropy between two probability density functions f and g is equal to the negative integral of f(x) multiplied by the logarithm of the ratio of f(x) and g(x), with respect to x.',
    'The consequence of the fact that the total power radiated varies as the square of the frequency of the oscillation is that shorter wavelength (higher frequency) light is scattered much more strongly than longer wavelength (lower frequency) light.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `telecom-ir-eval`
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric             | Value      |
|:-------------------|:-----------|
| cosine_accuracy@1  | 0.9733     |
| cosine_accuracy@3  | 0.995      |
| cosine_accuracy@5  | 0.995      |
| cosine_accuracy@10 | 0.995      |
| cosine_precision@1 | 0.9733     |
| cosine_recall@1    | 0.9733     |
| **cosine_ndcg@10** | **0.9859** |
| cosine_mrr@10      | 0.9828     |
| cosine_map@100     | 0.9831     |
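To illustrate how such an evaluation is wired up, here is a minimal sketch of `InformationRetrievalEvaluator` on toy data; the model id, the query/corpus ids, and the example texts are placeholders, while the actual numbers above were computed on the 1,400-pair evaluation split described below:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

# Toy data: queries and corpus map string ids to texts; relevant_docs maps
# each query id to the set of corpus ids that answer it.
queries = {"q1": "What is the main factor of signal interference in MCFs?"}
corpus = {
    "d1": "The main factor of signal interference in MCFs is crosstalk, which is "
          "the leakage of a fraction of the signal power from a given core to "
          "its neighboring core.",
    "d2": "Entanglement distillation is the process of purifying imperfect "
          "entangled states to obtain maximally entangled states.",
    "d3": "The component of IoT responsible for sensing and collecting data is "
          "the sensors.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="telecom-ir-eval",
)
results = evaluator(model)
print(results["telecom-ir-eval_cosine_ndcg@10"])
```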
## Training Details

### Training Dataset

#### csv

* Dataset: csv
* Size: 5,600 training samples
* Columns: anchor and positive
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | How can the unique decodability of a code be tested using the Sardinas and Patterson test? | The Sardinas and Patterson test for unique decodability involves checking if no codewords are prefixes of any other codewords. |
  | What is the purpose of encapsulation in the OSI (Open System Interconnection) model? | Encapsulation is used to add control information and transform data units into protocol data units. |
  | What advantages do measurements from user equipment (UE) have over drive tests in disaster small cell networks? | Measurements from user equipment (UE) have the advantages of reduced labor intensity, measurements obtained from additional locations, such as inside buildings, and better adaptation to specific characteristics and requirements in disaster scenarios. |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Evaluation Dataset

#### csv

* Dataset: csv
* Size: 1,400 evaluation samples
* Columns: anchor and positive
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive |
  |:--------|:-------|:---------|
  | type    | string | string   |
  | details |        |          |
* Samples:
  | anchor | positive |
  |:-------|:---------|
  | What are the three major steps in SLAM-based techniques for THz localization? | SLAM-based techniques for THz localization involve imaging the environment, estimating ranges to the user, and fusing the images with the estimated ranges. |
  | What is the service time distribution in the M/M(X)/1 model? | In the M/M(X)/1 model, the service time distribution is exponential with parameter µ. |
  | What is the main advantage of the ensemble patch method in generating adversarial patches? | The main advantage of the ensemble patch method is that it achieves a higher attack success rate compared to single patches. |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
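As a rough sketch of how such (anchor, positive) pairs are fine-tuned with this loss, the snippet below uses two toy rows in place of the 5,600-row csv split; everything else follows the standard Sentence Transformers training loop. Note that `MultipleNegativesRankingLoss` treats every other positive in a batch as a negative for a given anchor (in-batch negatives), and its defaults match the parameters above (`scale=20.0`, `similarity_fct=cos_sim`).

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Toy (anchor, positive) pairs standing in for the 5,600-row csv split.
train_dataset = Dataset.from_dict({
    "anchor": [
        "What is the purpose of encapsulation in the OSI model?",
        "What is the service time distribution in the M/M(X)/1 model?",
    ],
    "positive": [
        "Encapsulation is used to add control information and transform data "
        "units into protocol data units.",
        "In the M/M(X)/1 model, the service time distribution is exponential "
        "with parameter µ.",
    ],
})

# In-batch negatives with default scale=20.0 and cosine similarity.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```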
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine_with_restarts
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine_with_restarts
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
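Expressed in code, the non-default values above correspond to training arguments along these lines; this is only a sketch, and `output_dir` is a placeholder path:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Non-default hyperparameters from the list above; output_dir is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="models/bge-small-en-v1.5-telecom",
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    weight_decay=0.01,
    num_train_epochs=5,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    # Avoid duplicate samples within a batch, which would otherwise act as
    # false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```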
### Training Logs

| Epoch      | Step    | Training Loss | Validation Loss | telecom-ir-eval_cosine_ndcg@10 |
|:----------:|:-------:|:-------------:|:---------------:|:------------------------------:|
| 1.1364     | 50      | 0.2567        | 0.0419          | 0.9844                         |
| **2.2727** | **100** | **0.0502**    | **0.0397**      | **0.9859**                     |
| 3.4091     | 150     | 0.0277        | 0.0399          | 0.9846                         |
| 4.5455     | 200     | 0.0231        | 0.0406          | 0.9840                         |
| 5.0        | 220     | -             | -               | 0.9859                         |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```