tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dense
  - generated_from_trainer
  - dataset_size:53
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/clip-ViT-L-14
widget:
  - source_sentence: >-
      The Hugging Face Transformers Library | Example Code + Chatbot UI with
      Gradio
    sentences:
      - Shit Happens, Stay Solution Oriented
      - 3 Ways to Make a Custom AI Assistant | RAG, Tools, & Fine-tuning
      - How to Manage Data Science Projects
  - source_sentence: 5 Questions Every Data Scientist Should Hardcode into Their Brain
    sentences:
      - 5 AI Projects You Can Build This Weekend (with Python)
      - An Introduction to Decision Trees | Gini Impurity & Python Code
      - How to Deploy ML Solutions with FastAPI, Docker, & AWS
  - source_sentence: My $100,000+ Data Science Resume (what got me hired)
    sentences:
      - The Mapper Algorithm | Overview & Python Example Code
      - How to Build Data Pipelines for ML Projects (w/ Python Code)
      - How to Make a Data Science Portfolio With GitHub Pages (2024)
datasets:
  - shawhin/yt-title-thumbnail-pairs
pipeline_tag: sentence-similarity
library_name: sentence-transformers

SentenceTransformer based on sentence-transformers/clip-ViT-L-14

This is a sentence-transformers model finetuned from sentence-transformers/clip-ViT-L-14 on the yt-title-thumbnail-pairs dataset. It maps sentences, paragraphs, and images to a shared dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/clip-ViT-L-14
  • Similarity Function: Cosine Similarity
  • Training Dataset: shawhin/yt-title-thumbnail-pairs

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): CLIPModel()
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("hwang2006/clip-title-thumbnail-embeddings")
# Run inference
sentences = [
    'My $100,000+ Data Science Resume (what got me hired)',
    'The Mapper Algorithm | Overview & Python Example Code',
    'How to Build Data Pipelines for ML Projects (w/ Python Code)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.2549, 0.3967],
#         [0.2549, 1.0000, 0.2753],
#         [0.3967, 0.2753, 1.0000]])
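
Because the base model is CLIP and the anchors in the training data are thumbnail images, the same model can also embed images. A minimal sketch of matching a thumbnail against candidate titles (the file name is hypothetical):

from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("hwang2006/clip-title-thumbnail-embeddings")

# Encode a thumbnail image (hypothetical path) and a few candidate titles
thumbnail_emb = model.encode([Image.open("thumbnail.jpg")])
title_embs = model.encode([
    "Multimodal RAG: A Beginner-friendly Guide (with Python Code)",
    "How to Manage Data Science Projects",
])

# Rank the candidate titles by cosine similarity to the thumbnail
scores = model.similarity(thumbnail_emb, title_embs)
print(scores)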

Training Details

Training Dataset

yt-title-thumbnail-pairs

  • Dataset: yt-title-thumbnail-pairs at c1b9a13
  • Size: 53 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 53 samples:
    • anchor: type PIL.JpegImagePlugin.JpegImageFile (thumbnail image)
    • positive: type string; min: 9 tokens, mean: 15.04 tokens, max: 27 tokens
    • negative: type string; min: 10 tokens, mean: 15.3 tokens, max: 27 tokens
  • Samples (anchors are thumbnail images and are not reproduced here):
    • positive: "Multimodal RAG: A Beginner-friendly Guide (with Python Code)"
      negative: "What Nature Can Teach Us About Business..."
    • positive: "Detecting Power Laws in Real-world Data | w/ Python Code"
      negative: "I Have 90 Days to Make $10k/mo—Here's my plan"
    • positive: "I Quit My Job… Here’s How Much I Made 1 Year Later"
      negative: "Persistent Homology | Introduction & Python Example Code"
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
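
    For reference, with (anchor, positive, negative) triplets this loss is a cross-entropy over scaled cosine similarities, where every other in-batch positive and negative acts as an additional negative for each anchor. A sketch of the standard formulation (not spelled out in the generated card), for a batch of size B:

    $$\mathcal{L}_i = -\log \frac{\exp\bigl(20 \cdot \cos(a_i, p_i)\bigr)}{\sum_{j=1}^{B} \Bigl[\exp\bigl(20 \cdot \cos(a_i, p_j)\bigr) + \exp\bigl(20 \cdot \cos(a_i, n_j)\bigr)\Bigr]}$$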
    

Evaluation Dataset

yt-title-thumbnail-pairs

  • Dataset: yt-title-thumbnail-pairs at c1b9a13
  • Size: 11 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 11 samples:
    • anchor: type PIL.JpegImagePlugin.JpegImageFile (thumbnail image)
    • positive: type string; min: 8 tokens, mean: 14.27 tokens, max: 21 tokens
    • negative: type string; min: 8 tokens, mean: 14.36 tokens, max: 19 tokens
  • Samples (anchors are thumbnail images and are not reproduced here):
    • positive: "I Was Wrong About AI Consulting (what I learned)"
      negative: "How to Make a Data Science Portfolio With GitHub Pages (2024)"
    • positive: "My $100,000+ Data Science Resume (what got me hired)"
      negative: "The Mapper Algorithm | Overview & Python Example Code"
    • positive: "4 Skills You Need to Be a Full-Stack Data Scientist"
      negative: "Fine-Tuning Text Embeddings For Domain-specific Search (w/ Python)"
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • learning_rate: 1e-05
  • num_train_epochs: 2
  • use_cpu: True
  • dataloader_pin_memory: False
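
A minimal sketch of how a fine-tuning run with these non-default hyperparameters might look, assuming the dataset exposes standard train/test splits (the split names and output directory are assumptions):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Base model and the (anchor, positive, negative) triplet dataset
model = SentenceTransformer("sentence-transformers/clip-ViT-L-14")
dataset = load_dataset("shawhin/yt-title-thumbnail-pairs")

# scale=20.0 and cosine similarity are the defaults reported above
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="clip-title-thumbnail-embeddings",  # hypothetical
    eval_strategy="epoch",
    learning_rate=1e-5,
    num_train_epochs=2,
    use_cpu=True,
    dataloader_pin_memory=False,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],   # assumed split name
    eval_dataset=dataset["test"],     # assumed split name
    loss=loss,
)
trainer.train()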

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: True
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: False
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss
0.1429 1 1.4044 -
0.2857 2 1.4737 -
0.4286 3 1.015 -
0.5714 4 1.0862 -
0.7143 5 0.5453 -
0.8571 6 0.8977 -
1.0 7 0.4673 0.9324
1.1429 8 0.1536 -
1.2857 9 0.3095 -
1.4286 10 0.1654 -
1.5714 11 0.2522 -
1.7143 12 0.1947 -
1.8571 13 0.1257 -
2.0 14 0.1009 0.9172

Framework Versions

  • Python: 3.12.11
  • Sentence Transformers: 5.1.2
  • Transformers: 4.57.1
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.12.0
  • Datasets: 3.3.2
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}