SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-base-en-v1.5
  • Maximum Sequence Length: 64 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
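
The same two-module stack can be assembled by hand from Sentence Transformers building blocks. A minimal sketch; the trust_remote_code arguments are an assumption, based on the gte backbone shipping a custom NewModel class:

from sentence_transformers import SentenceTransformer, models

# Transformer module: the gte-base-en-v1.5 backbone, truncated to 64 tokens
transformer = models.Transformer(
    "Alibaba-NLP/gte-base-en-v1.5",
    max_seq_length=64,
    model_args={"trust_remote_code": True},   # assumed: needed for the custom NewModel class
    config_args={"trust_remote_code": True},  # assumed: same for the custom config
)

# Pooling module: CLS-token pooling over the 768-dimensional token embeddings
pooling = models.Pooling(
    word_embedding_dimension=transformer.get_word_embedding_dimension(),
    pooling_mode="cls",
)

model = SentenceTransformer(modules=[transformer, pooling])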

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-auto-wo-ref-gemini-1.5-flash")
# Run inference
sentences = [
    '2WD',
    '4',
    '2011 GMC Canyon Regular Cab 2-door SLE Pickup',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
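
The same calls can also rank candidate values against a query string, which is the typical retrieval pattern for an attribute-value model like this one. A minimal sketch continuing from the snippet above (model is already loaded; the query and candidate strings are hypothetical):

query_embedding = model.encode(["2011 Honda CR-V 4-door EX-L Sport Utility"])          # hypothetical query
candidate_embeddings = model.encode(["$ 24,550", "23 mpg City / 36 mpg Hwy", "4WD"])   # hypothetical candidates

# Cosine similarities between the query and each candidate: shape [1, 3]
scores = model.similarity(query_embedding, candidate_embeddings)
print(scores)
print(int(scores.argmax()))  # index of the candidate most similar to the query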

Evaluation

Metrics

Triplet

Metric Value
cosine_accuracy 0.9395

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
Metric Value
silhouette_cosine 0.5814
silhouette_euclidean 0.4622

Triplet

Metric Value
cosine_accuracy 0.9339

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
Metric Value
silhouette_cosine 0.5796
silhouette_euclidean 0.4616
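
The triplet accuracy above is the standard Sentence Transformers triplet metric; the silhouette scores come from the custom veriscrape.training.SilhouetteEvaluator, whose exact interface is not shown here. A hedged sketch of how comparable numbers could be reproduced, with hypothetical triplets and labels standing in for the real evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator
from sklearn.metrics import silhouette_score

model = SentenceTransformer("albertus-sussex/veriscrape-sbert-auto-wo-ref-gemini-1.5-flash")

# Hypothetical triplets: anchor and positive share an attribute, the negative does not
triplet_eval = TripletEvaluator(
    anchors=["23 mpg City / 36 mpg Hwy", "$ 24,550"],
    positives=["28 mpg Hwy", "$ 21,905"],
    negatives=["2011 Honda CR-V 4-door EX-L Sport Utility", "V6"],
)
print(triplet_eval(model))  # recent versions return a dict including cosine_accuracy

# Silhouette over attribute labels, using cosine distance on the embeddings
texts = ["$ 24,550", "$ 21,905", "V6", "Turbocharged"]
labels = ["price", "price", "engine", "engine"]
print(silhouette_score(model.encode(texts), labels, metric="cosine"))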

Training Details

Training Dataset

Unnamed Dataset

  • Size: 15,310 training samples
  • Columns: anchor, positive, negative, pos_attr_name, neg_attr_name, and website_id
  • Approximate statistics based on the first 1000 samples:
    anchor:        string; min 3, mean 10.24, max 64 tokens
    positive:      string; min 3, mean 10.43, max 64 tokens
    negative:      string; min 3, mean 12.24, max 64 tokens
    pos_attr_name: string; min 3, mean 3.56, max 5 tokens
    neg_attr_name: string; min 3, mean 3.4, max 5 tokens
    website_id:    int; 0: ~8.30%, 1: ~3.90%, 2: ~9.70%, 3: ~5.60%, 4: ~9.70%, 5: ~9.20%, 6: ~27.20%, 7: ~8.40%, 8: ~8.70%, 9: ~9.30%
  • Samples (anchor | positive | negative | pos_attr_name | neg_attr_name | website_id):
    Automatic, 5-Spd w/Overdrive | 8 ft | 2010 Mercedes-Benz G-Class 4-door G550 Sport Utility | engine | model | 6
    2011 Nissan Versa S (M6) Hatchback w/o FE+ | 2011 Chevrolet Colorado Work Truck 4x2 Standard Cab | $ 24,550 | model | price | 3
    23 mpg City / 36 mpg Hwy | 2WD | 2011 Honda CR-V 4-door EX-L Sport Utility | fuel_economy | model | 6
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
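
The loss above is the standard Sentence Transformers TripletLoss with Euclidean distance and a margin of 5. A minimal sketch of wiring up a toy triplet dataset with the same column names and the same loss parameters (the example rows are made up; only the anchor/positive/negative columns feed the loss):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

# Start from the base model; trust_remote_code is assumed to be needed for the gte backbone
model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)
model.max_seq_length = 64  # match the 64-token limit listed above

# Toy triplets mirroring the column layout described above
train_dataset = Dataset.from_dict({
    "anchor": ["23 mpg City / 36 mpg Hwy"],
    "positive": ["28 mpg Hwy"],  # hypothetical value sharing the anchor's attribute
    "negative": ["2011 Honda CR-V 4-door EX-L Sport Utility"],
})

# Euclidean triplet loss with margin 5, matching the parameters listed above
loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)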
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,702 evaluation samples
  • Columns: anchor, positive, negative, pos_attr_name, neg_attr_name, and website_id
  • Approximate statistics based on the first 1000 samples:
    anchor:        string; min 3, mean 10.26, max 64 tokens
    positive:      string; min 3, mean 10.25, max 64 tokens
    negative:      string; min 3, mean 12.52, max 64 tokens
    pos_attr_name: string; min 3, mean 3.5, max 5 tokens
    neg_attr_name: string; min 3, mean 3.43, max 5 tokens
    website_id:    int; 0: ~8.30%, 1: ~5.20%, 2: ~7.70%, 3: ~5.30%, 4: ~10.70%, 5: ~9.70%, 6: ~25.80%, 7: ~7.90%, 8: ~9.40%, 9: ~10.00%
  • Samples (anchor | positive | negative | pos_attr_name | neg_attr_name | website_id):
    2010 Porsche Cayenne Turbo S Sport Utility | 2010 Rolls Royce Phantom Base Sedan | Turbocharged | model | engine | 7
    4 | Manual, 6-Spd w/Overdrive | 2010 Acura MDX 4-door Sport Utility | fuel_economy | model | 6
    $25,690 | $66,200 | Ram 2500 | price
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 5
  • warmup_ratio: 0.1
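
A hedged sketch of how these non-default values plug into the Sentence Transformers v3+ trainer, reusing the model, train_dataset, and loss from the sketch under "Training Dataset" above (the output directory and the placeholder eval split are hypothetical):

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# Placeholder eval split with the same columns; in practice the held-out set described above
eval_dataset = Dataset.from_dict({
    "anchor": ["$25,690"],
    "positive": ["$66,200"],
    "negative": ["Ram 2500"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="veriscrape-sbert",   # hypothetical output directory
    eval_strategy="epoch",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=5,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(
    model=model,                  # from the "Training Dataset" sketch above
    args=args,
    train_dataset=train_dataset,  # triplet dataset as sketched above
    eval_dataset=eval_dataset,
    loss=loss,                    # TripletLoss as sketched above
)
trainer.train()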

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  Step  Training Loss  Validation Loss  cosine_accuracy  silhouette_cosine
-1     -1    -              -                0.7991           0.2315
1.0    120   0.6864         0.5453           0.9371           0.6056
2.0    240   0.3469         0.4988           0.9377           0.5648
3.0    360   0.3315         0.4631           0.9395           0.6037
4.0    480   0.3141         0.4836           0.9401           0.5906
5.0    600   0.3045         0.4554           0.9395           0.5814
-1     -1    -              -                0.9339           0.5796

Framework Versions

  • Python: 3.10.16
  • Sentence Transformers: 4.0.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}