SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
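
Since the similarity function is cosine similarity, comparing two embeddings reduces to a dot product of L2-normalized vectors. A minimal NumPy sketch of what this means (illustrative only, not part of the model's own API):

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cosine similarity = dot product divided by the product of the vector norms
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))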

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
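
Loading the fine-tuned checkpoint reproduces this pipeline: a GTE transformer encoder with a 512-token window, followed by CLS-token pooling that yields a 768-dimensional sentence embedding. A minimal sketch for inspecting it (whether trust_remote_code is required for the custom GTE "NewModel" code is an assumption; passing it is harmless otherwise):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "albertus-sussex/veriscrape-sbert-camera-reference_8_to_verify_2-fold-7",
    trust_remote_code=True,  # assumption: may be needed for the custom GTE encoder code
)
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768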

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-camera-reference_8_to_verify_2-fold-7")
# Run inference
sentences = [
    'Samsung HZ50W 14 Megapixels Digital Camera - Black',
    'ViviCam 7024 Compact Camera',
    '$229.99',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
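
The same embeddings also work for simple semantic search, e.g. ranking candidate strings against a query. A minimal sketch (the query and candidate strings below are illustrative examples, not taken from the training data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("albertus-sussex/veriscrape-sbert-camera-reference_8_to_verify_2-fold-7")

query = "Canon PowerShot compact digital camera"  # hypothetical query
candidates = [                                    # hypothetical candidates
    "Canon PowerShot SD1300 IS Compact Camera",
    "Nikon COOLPIX S3000 12.0-Megapixel Digital Camera",
    "$199.99",
]

query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Rank candidates by cosine similarity to the query
hits = util.semantic_search(query_emb, cand_embs, top_k=3)[0]
for hit in hits:
    print(candidates[hit["corpus_id"]], round(hit["score"], 3))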

Evaluation

Metrics

Triplet

Metric           Value
cosine_accuracy  1.0

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
Metric                Value
silhouette_cosine     0.9811
silhouette_euclidean  0.8752

Triplet

Metric           Value
cosine_accuracy  1.0

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
Metric                Value
silhouette_cosine     0.9814
silhouette_euclidean  0.8754
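
The SilhouetteEvaluator comes from the custom veriscrape.training package, so its implementation is not reproduced here. As a rough sketch of what a silhouette score over attribute clusters measures, scikit-learn's silhouette_score can be applied directly to the embeddings (the texts and cluster labels below are hypothetical):

from sentence_transformers import SentenceTransformer
from sklearn.metrics import silhouette_score

model = SentenceTransformer("albertus-sussex/veriscrape-sbert-camera-reference_8_to_verify_2-fold-7")

# Hypothetical strings grouped by attribute type: 0 = model, 1 = price, 2 = manufacturer
texts = [
    "Samsung HZ50W 14 Megapixels Digital Camera - Black",
    "ViviCam 7024 Compact Camera",
    "$229.99",
    "$148.99",
    "Samsung",
    "Polaroid",
]
labels = [0, 0, 1, 1, 2, 2]

embeddings = model.encode(texts)

# Silhouette lies in [-1, 1]; higher means tighter, better-separated attribute clusters
print(silhouette_score(embeddings, labels, metric="cosine"))
print(silhouette_score(embeddings, labels, metric="euclidean"))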

Training Details

Training Dataset

Unnamed Dataset

  • Size: 8,009 training samples
  • Columns: anchor, positive, negative, pos_attr_name, and neg_attr_name
  • Approximate statistics based on the first 1000 samples:
             anchor              positive            negative            pos_attr_name     neg_attr_name
    type     string              string              string              string            string
    details  min: 3 tokens       min: 3 tokens       min: 3 tokens       min: 3 tokens     min: 3 tokens
             mean: 12.05 tokens  mean: 12.15 tokens  mean: 10.47 tokens  mean: 3.0 tokens  mean: 3.0 tokens
             max: 79 tokens      max: 73 tokens      max: 59 tokens      max: 3 tokens     max: 3 tokens
  • Samples:
    anchor | positive | negative | pos_attr_name | neg_attr_name
    Panasonic Lumix DMC-FH20 Point & Shoot Digital Camera - 14.1 Megapixel - 2.70" Active Matrix TFT Color LCD - Black | Polaroid I1237 Point & Shoot Digital Camera - 12 Megapixel - 2.70" Active... | $148.99 | model | price
    Samsung PL90 12.2 Megapixel Compact Camera - 5 mm-20.20 mm - Red | Samsung AQ100 Point & Shoot Digital Camera - 12 Megapixel - 2.7" Active Matrix TFT Color LCD - Red | $279.00 | model | price
    Polaroid | Casio Computer Co., Ltd | $10.99 | manufacturer | price
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
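
A minimal sketch of how this loss configuration maps onto the Sentence Transformers API (trust_remote_code is included because the GTE base model ships custom encoder code; whether it is strictly required here is an assumption):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)

# Euclidean triplet loss with margin 5, as listed above
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)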
    

Evaluation Dataset

Unnamed Dataset

  • Size: 890 evaluation samples
  • Columns: anchor, positive, negative, pos_attr_name, and neg_attr_name
  • Approximate statistics based on the first 890 samples:
             anchor              positive            negative            pos_attr_name     neg_attr_name
    type     string              string              string              string            string
    details  min: 3 tokens       min: 3 tokens       min: 3 tokens       min: 3 tokens     min: 3 tokens
             mean: 11.97 tokens  mean: 12.03 tokens  mean: 10.49 tokens  mean: 3.0 tokens  mean: 3.0 tokens
             max: 73 tokens      max: 73 tokens      max: 44 tokens      max: 3 tokens     max: 3 tokens
  • Samples:
    anchor | positive | negative | pos_attr_name | neg_attr_name
    General Electric 10 Megapixel Digital Camera 2.4 inch LCD - Red | PowerShot SD1300 IS Compact Camera | Vtech | model | manufacturer
    Fujifilm | FUJIFILM Fuji Film | Finepix S1800 12 Megapixel Digital Camera - Black | manufacturer | model
    Olympus | Casio | $334.00 | manufacturer | price
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 5
  • warmup_ratio: 0.1
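
A minimal sketch of how these non-default values could be expressed with SentenceTransformerTrainingArguments (the output directory is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/veriscrape-sbert-camera",  # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=5,
    warmup_ratio=0.1,
)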

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  Step  Training Loss  Validation Loss  cosine_accuracy  silhouette_cosine
-1     -1    -              -                0.7921           0.3433
1.0    63    0.2691         0.0              1.0              0.9809
2.0    126   0.0            0.0              1.0              0.9811
3.0    189   0.0            0.0              1.0              0.9811
4.0    252   0.0            0.0              1.0              0.9811
5.0    315   0.0            0.0              1.0              0.9811
-1     -1    -              -                1.0              0.9814

Framework Versions

  • Python: 3.10.16
  • Sentence Transformers: 4.0.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
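
To approximately reproduce this environment, the versions above can be pinned at install time (a sketch; the exact PyTorch wheel depends on your CUDA setup):

pip install sentence-transformers==4.0.1 transformers==4.45.2 accelerate==1.5.2 datasets==3.1.0 tokenizers==0.20.3
pip install torch==2.5.1  # the card reports 2.5.1+cu124 (CUDA 12.4 build)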

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}