SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-m
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
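
For reference, the same stack (a BERT backbone, CLS-token pooling, and L2 normalization) can be assembled module by module. The snippet below is an illustrative sketch built from the base model, not a second way to load this finetuned checkpoint; the variable names are arbitrary.

from sentence_transformers import SentenceTransformer, models

# Transformer backbone (BertModel) with a 512-token window
transformer = models.Transformer(
    "Snowflake/snowflake-arctic-embed-m",
    max_seq_length=512,
)

# CLS-token pooling over the 768-dimensional token embeddings
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),
    pooling_mode="cls",
)

# L2-normalize sentence embeddings so cosine similarity reduces to a dot product
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])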

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bsmith3715/legal-ft-demo_final")
# Run inference
sentences = [
    'What muscle groups are primarily engaged during the lunge exercise described in the context?',
    "hips and we're gonna lunge it down the\nweight is going to feel a little bit\nlight we're working more stabilizers to\nstart here\nstabilizers through the hips ankles\nknees\nall the things okay lunge it down I like\nto put my foot right up against that\nedge\nto help me have a nice grip toes are off\nof the carriage\nand then coming up to squeeze up on a\nstraight standing leg squeeze up through\nthe glute\ndown\nand\nsqueeze and lift good\nthe slower you move here the more work\nyou're going to feel through those quads\nand glutes as well\nslow back\nslow up\nI know sometimes we feel like we want to\nget that heart rate going\nbut sometimes we need this slow movement\nis going to give us even more benefits\nto support us for those fast movements\nlater",
    "one\nfind that lengthened position you're\ngoing to lift up slide those shoulder\nblades down the back lift up towards the\nsky big inhale\nand exhale back and away\nleft hand to Center\nturn back towards me and lift I\napologize if you're not on the same side\nwhen you're facing me\nother arm to Center bottom arm all right\nwe're gonna flip around\nto do the same thing on the other side\nso lifting up tall\nplace that hand in front of your\nshoulder we're going up and over long\nspine and then lift shoulders down again\nup and over\nand then you use that oblique to lift up\nexhale lift\nand lengthen\nopen the spine Flex\noblique to come up\nand three\ntwo\nand one\ngood full mermaid now up and over turn\nto face the ground separate those arms",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
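
Because the card lists semantic search among the supported tasks, here is a minimal retrieval sketch using the library's util.semantic_search helper. The corpus strings and the query below are placeholders loosely based on the example sentences above, not excerpts from the training data.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bsmith3715/legal-ft-demo_final")

# Placeholder corpus of transcript chunks
corpus = [
    "lunge it down and squeeze up through the glute on a straight standing leg",
    "one red spring today which is going to be one heavy spring on my reformer",
    "take your hands to the shoulder blocks for a quick stretch before moving",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Placeholder query
query = "Which spring setting does the instructor use?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Top-2 matches by cosine similarity (the embeddings are already L2-normalized)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))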

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.45
cosine_accuracy@3 0.69
cosine_accuracy@5 0.73
cosine_accuracy@10 0.86
cosine_precision@1 0.45
cosine_precision@3 0.23
cosine_precision@5 0.146
cosine_precision@10 0.086
cosine_recall@1 0.45
cosine_recall@3 0.69
cosine_recall@5 0.73
cosine_recall@10 0.86
cosine_ndcg@10 0.6469
cosine_mrr@10 0.5798
cosine_map@100 0.5877
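
The table reports standard information-retrieval metrics at several cut-offs. The card does not include the evaluation script, but metrics of this kind are typically produced with the library's InformationRetrievalEvaluator; the sketch below uses placeholder queries, corpus entries, and relevance judgments.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("bsmith3715/legal-ft-demo_final")

# Placeholder evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "What spring setting does the instructor use?"}
corpus = {
    "d1": "one red spring today which is going to be one heavy spring on my reformer",
    "d2": "lunge it down and squeeze up through the glute",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="demo_ir_eval",
)
results = evaluator(model)  # dict of metrics, e.g. cosine_ndcg@10, cosine_mrr@10
print(results)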

Training Details

Training Dataset

Unnamed Dataset

  • Size: 200 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 200 samples:
    • sentence_0 (string): min 12 tokens, mean 16.91 tokens, max 24 tokens
    • sentence_1 (string): min 96 tokens, mean 159.61 tokens, max 176 tokens
  • Samples:
    • sentence_0: What type of spring is the instructor using for the workout?
      sentence_1: "hi guys thanks for joining me today we have a really fun really challenging workout for you today before we get started don't forget to like share subscribe feel free to leave me those super likes I really appreciate you guys joining me for these workouts we're going to get started setting up foot bars all the way down I'm gonna go on to one red Spring today which is going to be one heavy spring on my reformer again I'm gonna go really heavy for my arms today if this is way too much for you guys you can do a blue instead of a red or a medium instead of a heavy spring again it is going to be very heavy for arms so feel free to change as needed we are going to start first by straddling your reformers your feet are on the"
    • sentence_0: What should participants do if the red spring is too heavy for them?
      sentence_1: (the same transcript passage as the previous sample)
    • sentence_0: What is the initial position described for starting the workout on the reformers?
      sentence_1: "are going to start first by straddling your reformers your feet are on the floor we're going to take our hands to our shoulder blocks we're just going to do a quick stretch before we get moving so I'm going to bend my knees slightly I'm going to inhale press my Carriage out let my chest drop down in between my arms and then my exhale I'm going to tuck my pelvis around through my spine to come back in inhale press out let your chest drop down X tail tuck around to come in we have two more again after this we're going to get a really good workout in today last one and then round and come in all right now once we bring it back in we're going to take our knees onto our carriages and then our hands are going to go into our"
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
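
    A minimal sketch of constructing this loss configuration with sentence-transformers is shown below; it simply wraps MultipleNegativesRankingLoss in MatryoshkaLoss with the dimensions and weights listed above.

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

    model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

    # In-batch negatives ranking loss, applied at several truncated embedding sizes
    inner_loss = MultipleNegativesRankingLoss(model)
    loss = MatryoshkaLoss(
        model,
        inner_loss,
        matryoshka_dims=[768, 512, 256, 128, 64],
        matryoshka_weights=[1, 1, 1, 1, 1],
    )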
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
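
As a rough guide, these non-default values map onto SentenceTransformerTrainingArguments as sketched below; the output directory is a placeholder, and all other arguments keep their defaults (listed in full in the next section).

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/legal-ft-demo_final",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)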

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 20 0.6130
2.0 40 0.6454
2.5 50 0.6445
3.0 60 0.6498
4.0 80 0.6507
5.0 100 0.6463
6.0 120 0.6433
7.0 140 0.6461
7.5 150 0.6409
8.0 160 0.6417
9.0 180 0.6425
10.0 200 0.6469

Framework Versions

  • Python: 3.13.2
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.7.0+cpu
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1
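
To approximate this environment, the listed versions can be pinned at install time (note that the PyTorch build above is the CPU variant):

pip install sentence-transformers==4.1.0 transformers==4.51.3 torch==2.7.0 accelerate==1.7.0 datasets==3.6.0 tokenizers==0.21.1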

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}