SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences and paragraphs to a 768-dimensional dense vector space; this checkpoint was trained to match software descriptions to software-category labels (see the training data below), and it can also be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("zazabe/categorizer-mpnet-v2-mnrl")
# Run inference
sentences = [
    'Software: SEPA-Batch Profi. Manufacturer: None. Focus: Specialized in handling SEPA and related banking formats for batch processing of payments and direct debits, targeting businesses and organizations needing efficient electronic payment management.. Features: SEPA-Batch Profi enables the creation and management of cashless bookings in SEPA/DTAUS/DTAZV formats, supporting both domestic and EU credit and debit transactions. It helps reduce transaction costs by using widely supported banking formats. The software includes import functions for clients and booking records.. Security: not found.',
    'Business Applications > Finance > Payment Systems',
    'Tools and Utilities > Utilities > Desktop Enhancements',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.4858, -0.0712],
#         [ 0.4858,  1.0000,  0.0091],
#         [-0.0712,  0.0091,  1.0000]])
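Because the model ends with a Normalize() module, model.encode returns unit-length vectors, so cosine similarity reduces to a dot product. As an illustration (plain NumPy, with toy unit vectors standing in for real embeddings), candidate category labels can be ranked against a description embedding like this:

```python
import numpy as np

def rank_labels(desc_emb: np.ndarray, label_embs: np.ndarray) -> np.ndarray:
    """Return label indices sorted by descending cosine similarity.

    Assumes all embeddings are already L2-normalized, as produced by this
    model's Normalize() output module, so a dot product equals cosine
    similarity.
    """
    scores = label_embs @ desc_emb
    return np.argsort(-scores)

# Toy stand-ins for real embeddings (unit vectors):
desc = np.array([1.0, 0.0, 0.0])
labels = np.array([
    [0.0, 1.0, 0.0],   # orthogonal to desc -> similarity 0.0
    [0.8, 0.6, 0.0],   # similarity 0.8
])
print(rank_labels(desc, labels))  # [1 0]
```

In practice the label embeddings for the full category taxonomy would be encoded once and reused for every incoming description.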

Evaluation

Metrics

Information Retrieval

Metric                 Value
cosine_accuracy@1      0.5725
cosine_accuracy@3      0.8
cosine_accuracy@5      0.8431
cosine_precision@1     0.5725
cosine_precision@3     0.2667
cosine_precision@5     0.1686
cosine_precision@10    0.0922
cosine_recall@1        0.5725
cosine_recall@3        0.8
cosine_recall@5        0.8431
cosine_recall@10       0.9216
cosine_ndcg@10         0.7514
cosine_mrr@10          0.6966
cosine_map@100         0.7008
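These metrics are computed from the ranked category list returned for each description. Since each query here has exactly one relevant category, accuracy@k and recall@k coincide, which is why the table shows identical values for both. A small illustrative sketch (hypothetical helper functions, not the evaluator's API):

```python
def accuracy_at_k(ranked, relevant, k):
    """1.0 if the single relevant item appears in the top-k results, else 0.0."""
    return 1.0 if relevant in ranked[:k] else 0.0

def recall_at_k(ranked, relevant_set, k):
    """Fraction of all relevant items retrieved in the top-k results."""
    hits = sum(1 for r in ranked[:k] if r in relevant_set)
    return hits / len(relevant_set)

ranked = ["cat_b", "cat_a", "cat_c"]           # ranking for one query
print(accuracy_at_k(ranked, "cat_a", 1))       # 0.0 (missed at k=1)
print(recall_at_k(ranked, {"cat_a"}, 3))       # 1.0 (found by k=3)
```

With one relevant item per query the two functions return the same value for every k, matching the table above.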

Training Details

Training Dataset

Unnamed Dataset

  • Size: 3,011 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    anchor (string): min 12 tokens, mean 104.08 tokens, max 214 tokens
    positive (string): min 3 tokens, mean 9.4 tokens, max 21 tokens
  • Samples:
    anchor: Software: ASUS DisplayWidget. Manufacturer: ASUS. Focus: ASUS DisplayWidget is a utility designed to provide users with an easy way to customize and optimize their display settings for various tasks and preferences. It is primarily targeted at ASUS monitor users.. Features: ASUS DisplayWidget allows users to adjust display settings, such as brightness, contrast, color temperature, and gamma, directly from the desktop. It also offers preset modes for different usage scenarios, like gaming, reading, or watching movies. Additionally, it allows users to save and load custom display profiles.. Security: None.
    positive: Utilities

    anchor: Software: Amazon JDBC Driver. Manufacturer: Amazon. Focus: Designed for Java developers needing to integrate their applications with Amazon's database offerings.. Features: Enables Java applications to connect to Amazon Web Services databases. Supports various AWS database services. Provides optimized performance for AWS environments.. Security: Leverages AWS security features. Supports encryption and secure connections..
    positive: Databases > Database Management Systems (DBMS) > Other Database Management Systems (DBMS)

    anchor: Software: SIMATIC Event Database. Manufacturer: Siemens. Focus: Designed for industrial automation systems, providing a structured approach to managing and analyzing events within SIMATIC environments.. Features: Centralized event logging and archiving, comprehensive diagnostics, and efficient troubleshooting.. Security: None.
    positive: Database Management Systems (DBMS)
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
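MultipleNegativesRankingLoss treats every other positive in the batch as a negative for each anchor: with similarity_fct cos_sim and scale 20.0, it is cross-entropy over scaled cosine scores where row i's correct "class" is positive i. A minimal NumPy sketch of the computation (an illustration, not the library implementation):

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """In-batch-negatives loss: cross-entropy over scaled cosine scores,
    with the diagonal (anchor i vs. positive i) as the correct pairing."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                     # (batch, batch) cosine sims
    m = scores.max(axis=1, keepdims=True)          # stabilize log-sum-exp
    lse = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    return float(np.mean(lse - np.diag(scores)))

# Perfectly matched, mutually orthogonal pairs -> loss is essentially 0
e = np.eye(3)
print(round(mnrl(e, e), 6))  # 0.0
```

Larger batches supply more in-batch negatives per anchor, which is why this loss generally benefits from a large effective batch size.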
    

Evaluation Dataset

Unnamed Dataset

  • Size: 255 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 255 samples:
    anchor (string): min 29 tokens, mean 115.56 tokens, max 217 tokens
    positive (string): min 8 tokens, mean 13.33 tokens, max 21 tokens
  • Samples:
    anchor: Software: SeaTools. Manufacturer: Seagate Technology LLC. Focus: Hard drive diagnostic tool for Seagate, Samsung, LaCie and Maxtor drives.. Features: Diagnoses hard drives, including identifying the make and model, serial number, firmware revision, drive size, and supported features. Performs several tests and provides detailed drive information.. Security: None.
    positive: Tools and Utilities > Utilities > Accessibility and Assistive Tools

    anchor: Software: Stellar Repair for Access. Manufacturer: Stellar Data Recovery Inc.. Focus: Designed to repair corrupt Microsoft Access databases, restoring them to a usable state. It caters to individuals and businesses that rely on Access databases for data management.. Features: Repairs corrupt Access databases (MDB and ACCDB files). Recovers tables, forms, reports, modules, and other database objects. Supports recovery of deleted records.. Security: Does not modify the original database file during the repair process..
    positive: Tools and Utilities > Utilities > Accessibility and Assistive Tools

    anchor: Software: Citrix Monitor Service PowerShell snap-in. Manufacturer: Citrix Systems, Inc.. Focus: Designed for IT administrators and support staff responsible for managing and monitoring Citrix virtual application and desktop deployments.. Features: Provides real-time and historical monitoring of Citrix Virtual Apps and Desktops environments. Allows administrators to troubleshoot issues, identify trends, and optimize performance. Offers customizable dashboards and reporting capabilities.. Security: Leverages Citrix security protocols for data transmission and access control. Role-based access control to restrict access to sensitive monitoring data..
    positive: IT Infrastructure > IT Management > Alerts and Monitoring Tools
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • gradient_accumulation_steps: 8
  • weight_decay: 0.01
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_bnb_8bit
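These settings give an effective batch size of 16 × 8 = 128 per optimizer step. With 3,011 training samples, that works out to 24 optimizer steps per epoch, matching the step counts in the Training Logs. A quick sanity check of the arithmetic:

```python
import math

per_device_train_batch_size = 16
gradient_accumulation_steps = 8
train_samples = 3011

effective_batch = per_device_train_batch_size * gradient_accumulation_steps
batches_per_epoch = math.ceil(train_samples / per_device_train_batch_size)
steps_per_epoch = math.ceil(batches_per_epoch / gradient_accumulation_steps)

print(effective_batch)   # 128
print(steps_per_epoch)   # 24 -> 240 optimizer steps over 10 epochs
```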

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_bnb_8bit
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch  Step  Validation Loss  software_val_set_cosine_ndcg@10
 1.0     24  0.3942           0.6777
 2.0     48  0.3664           0.7310
 3.0     72  0.3393           0.7453
 4.0     96  0.3662           0.7555
 5.0    120  0.3718           0.7492
 6.0    144  0.3595           0.7435
 7.0    168  0.3490           0.7402
 8.0    192  0.3813           0.7462
 9.0    216  0.3641           0.7536
10.0    240  0.3595           0.7514
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.2
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}