SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-base-en-v1.5
  • Maximum Sequence Length: 64 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
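
In other words, an input is tokenized and truncated to 64 tokens, passed through the gte-base transformer, and the embedding of the first ([CLS]) token is used as the 768-dimensional sentence vector; there is no mean pooling or normalization module. As a rough sketch (not the canonical loading path), the same computation could be reproduced with plain 🤗 Transformers; loading the custom gte "NewModel" class is assumed to require trust_remote_code=True:

```python
# Sketch of what the Transformer + CLS-pooling modules above compute, using
# plain Hugging Face transformers. Assumes the transformer weights of this
# repository load via AutoModel with trust_remote_code enabled.
import torch
from transformers import AutoModel, AutoTokenizer

repo = "albertus-sussex/veriscrape-sbert-auto-wo-ref-deepseek-chat"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo, trust_remote_code=True)

batch = tokenizer(
    ["Small cargo area"],
    padding=True, truncation=True, max_length=64,  # max_seq_length: 64
    return_tensors="pt",
)
with torch.no_grad():
    output = encoder(**batch)

# pooling_mode_cls_token=True: keep only the first ([CLS]) token embedding.
sentence_embedding = output.last_hidden_state[:, 0]  # shape (1, 768)
```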

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (trust_remote_code is needed for the custom gte architecture)
model = SentenceTransformer(
    "albertus-sussex/veriscrape-sbert-auto-wo-ref-deepseek-chat",
    trust_remote_code=True,
)
# Run inference
sentences = [
    'Small cargo area',
    'See listings',
    '2010 MINI Cooper',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
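
Beyond pairwise similarity, the same embeddings can be used for semantic search over scraped field values. The sketch below reuses `model` from the snippet above; the corpus and query strings are illustrative only.

```python
# Follow-up sketch: simple semantic search over a small corpus of field values.
from sentence_transformers import util

corpus = ["$23,340", "2010 MINI Cooper", "6-speed manual", "18 mpg City, 26 mpg Hwy"]
corpus_embeddings = model.encode(corpus)

query_embeddings = model.encode(["$21,995"])  # a price-like query
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
print(hits[0])  # e.g. [{'corpus_id': ..., 'score': ...}, ...]
```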

Evaluation

Metrics

Triplet

| Metric          | Value  |
|:----------------|:-------|
| cosine_accuracy | 0.9585 |
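
The triplet accuracy above was presumably computed with an evaluator along the lines of the built-in TripletEvaluator; a minimal sketch, reusing `model` from the usage example and an illustrative triplet, could look like this:

```python
# Sketch of a triplet-accuracy evaluation; the triplet below is illustrative,
# not taken from the actual held-out data.
from sentence_transformers.evaluation import TripletEvaluator

evaluator = TripletEvaluator(
    anchors=["$20,995"],
    positives=["$67,200"],
    negatives=["2010 MINI Cooper"],
    name="dev",
)
results = evaluator(model)
print(results)  # includes a key such as "dev_cosine_accuracy"
```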

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
| Metric               | Value  |
|:---------------------|:-------|
| silhouette_cosine    | 0.2151 |
| silhouette_euclidean | 0.2116 |

Triplet

| Metric          | Value  |
|:----------------|:-------|
| cosine_accuracy | 0.9626 |

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
| Metric               | Value  |
|:---------------------|:-------|
| silhouette_cosine    | 0.2281 |
| silhouette_euclidean | 0.2266 |
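
The silhouette metrics come from a project-specific evaluator (veriscrape.training.SilhouetteEvaluator) that is not bundled with this card. As a generic sketch only, a comparable silhouette score over attribute labels could be computed with scikit-learn; the texts and labels below are illustrative.

```python
# Generic sketch, not the actual SilhouetteEvaluator: measure how well the
# embeddings cluster by attribute label using scikit-learn's silhouette score.
from sklearn.metrics import silhouette_score

texts = ["$23,340", "$20,995", "6.0L Gas V8, 360 HP", "2.4L Gas I4, 185 HP"]
labels = ["price", "price", "engine", "engine"]  # attribute names as cluster labels

embeddings = model.encode(texts)  # `model` as loaded in the usage example
print(silhouette_score(embeddings, labels, metric="cosine"))
print(silhouette_score(embeddings, labels, metric="euclidean"))
```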

Training Details

Training Dataset

Unnamed Dataset

  • Size: 26,649 training samples
  • Columns: anchor, positive, negative, pos_attr_name, neg_attr_name, and website_id
  • Approximate statistics based on the first 1000 samples:
    | Column        | Type   | Details                                                                                                        |
    |:--------------|:-------|:---------------------------------------------------------------------------------------------------------------|
    | anchor        | string | min: 3 tokens, mean: 8.37 tokens, max: 64 tokens                                                                 |
    | positive      | string | min: 3 tokens, mean: 8.34 tokens, max: 64 tokens                                                                 |
    | negative      | string | min: 3 tokens, mean: 8.97 tokens, max: 64 tokens                                                                 |
    | pos_attr_name | string | min: 3 tokens, mean: 3.29 tokens, max: 5 tokens                                                                  |
    | neg_attr_name | string | min: 3 tokens, mean: 3.19 tokens, max: 5 tokens                                                                  |
    | website_id    | int    | 0: ~3.10%, 1: ~2.70%, 2: ~4.20%, 3: ~3.00%, 4: ~5.10%, 5: ~57.70%, 6: ~5.80%, 7: ~5.00%, 8: ~8.30%, 9: ~5.10%    |
  • Samples:
    | anchor | positive | negative | pos_attr_name | neg_attr_name | website_id |
    |:---|:---|:---|:---|:---|:---|
    | $23,340 | Visibility | $23,340 | engine | price | 5 |
    | Engine: 6.7L V-12 DOHC with variable valve timing and four valves per cylinder | Engine: 5.7L V 8 overhead valve ; two valves per cylinder) | New 2011 BMW ActiveHybrid 7 750LI Sedan Performance Specs | engine | model | 7 |
    | $20,995 | $67,200 | Fuel consumption: city= 20 (mpg); highway= 28 (mpg); combined= 23 (mpg); vehicle range: 377 miles | price | fuel_economy | 7 |
  • Loss: TripletLoss with these parameters (a brief configuration sketch follows below):
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
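
A minimal sketch of how a triplet dataset with these columns and this loss configuration could be assembled is shown below; the example rows are illustrative, and the pos_attr_name / neg_attr_name / website_id metadata columns are omitted from the sketch.

```python
# Sketch under stated assumptions: build a small anchor/positive/negative
# dataset and configure TripletLoss with Euclidean distance and margin 5,
# matching the parameters reported above. Example strings are illustrative.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

train_dataset = Dataset.from_dict({
    "anchor":   ["$20,995", "6.0L Gas V8, 360 HP"],
    "positive": ["$67,200", "2.4L Gas I4, 185 HP"],
    "negative": ["18 mpg City, 26 mpg Hwy", "2010 Honda Pilot"],
})

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)
model.max_seq_length = 64  # matches the architecture above

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```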
    

Evaluation Dataset

Unnamed Dataset

  • Size: 2,961 evaluation samples
  • Columns: anchor, positive, negative, pos_attr_name, neg_attr_name, and website_id
  • Approximate statistics based on the first 1000 samples:
    | Column        | Type   | Details                                                                                                        |
    |:--------------|:-------|:---------------------------------------------------------------------------------------------------------------|
    | anchor        | string | min: 3 tokens, mean: 8.34 tokens, max: 64 tokens                                                                 |
    | positive      | string | min: 3 tokens, mean: 8.42 tokens, max: 64 tokens                                                                 |
    | negative      | string | min: 3 tokens, mean: 8.49 tokens, max: 64 tokens                                                                 |
    | pos_attr_name | string | min: 3 tokens, mean: 3.29 tokens, max: 5 tokens                                                                  |
    | neg_attr_name | string | min: 3 tokens, mean: 3.2 tokens, max: 5 tokens                                                                   |
    | website_id    | int    | 0: ~3.40%, 1: ~2.20%, 2: ~5.30%, 3: ~2.60%, 4: ~4.80%, 5: ~55.90%, 6: ~5.50%, 7: ~4.70%, 8: ~9.90%, 9: ~5.70%    |
  • Samples:
    | anchor | positive | negative | pos_attr_name | neg_attr_name | website_id |
    |:---|:---|:---|:---|:---|:---|
    | 6.0L Gas V8, 360 HP | 2.4L Gas I4, 185 HP | 18 mpg City, 26 mpg Hwy | engine | fuel_economy | 4 |
    | $37,770 | $23,995 | 2010 Mercedes-Benz C-Class | engine | model | 5 |
    | • Destination Charge: $725 | | 2010 Honda Pilot | engine | model | 5 |
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 5
  • warmup_ratio: 0.1

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine |
|:-----:|:----:|:-------------:|:---------------:|:---------------:|:-----------------:|
| -1    | -1   | -             | -               | 0.5968          | 0.1930            |
| 1.0   | 209  | 0.6134        | 0.6043          | 0.9517          | 0.1809            |
| 2.0   | 418  | 0.1704        | 0.5569          | 0.9554          | 0.2491            |
| 3.0   | 627  | 0.1261        | 0.5098          | 0.9588          | 0.2111            |
| 4.0   | 836  | 0.1042        | 0.5363          | 0.9558          | 0.2272            |
| 5.0   | 1045 | 0.0868        | 0.5157          | 0.9585          | 0.2151            |
| -1    | -1   | -             | -               | 0.9626          | 0.2281            |

Framework Versions

  • Python: 3.10.16
  • Sentence Transformers: 4.0.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
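
To approximate this environment, the packages above could be pinned roughly as follows; the exact PyTorch build (here 2.5.1+cu124) depends on your CUDA setup.

```
pip install sentence-transformers==4.0.1 transformers==4.45.2 torch==2.5.1 accelerate==1.6.0 datasets==3.1.0 tokenizers==0.20.3
```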

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}