---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: The man is shooting an automatic rifle.
  sentences:
  - A man is shooting a gun.
  - A man is driving a car.
  - A man is dancing.
- source_sentence: A woman is riding on a horse.
  sentences:
  - A woman is picking tomatoes.
  - A man is chopping a tree trunk with an axe.
  - A man is cutting and onion.
- source_sentence: A man is walking outside.
  sentences:
  - An animal is walking on the ground.
  - Dogs are swimming in a pool.
  - A man is dancing.
- source_sentence: A woman is riding a motorized scooter down a road.
  sentences:
  - A girl loses her kite.
  - A woman is peeling a potato.
  - A man is riding a motor scooter.
- source_sentence: A girl is eating a cupcake.
  sentences:
  - A woman is eating a cupcake.
  - Zebras are socializing.
  - A man is skating.
datasets:
- sentence-transformers/stsb
- sentence-transformers/all-nli
- sentence-transformers/msmarco-msmarco-distilbert-base-v3
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts dev
      type: sts-dev
    metrics:
    - type: pearson_cosine
      value: 0.7575387643007487
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.7562749897692814
      name: Spearman Cosine
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts test
      type: sts-test
    metrics:
    - type: pearson_cosine
      value: 0.6938563172808307
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.6784472989904182
      name: Spearman Cosine
license: mit
---

# SentenceTransformer (Legacy)

**UPDATE:** Consider using [**`johnnyboycurtis/ModernBERT-small-v2`**](https://huggingface.co/johnnyboycurtis/ModernBERT-small-v2), a much more performant model.

## Warning

This model was an early exploration into creating a wide model.

**⚠️ Legacy Status: NOT RECOMMENDED.**

This initial implementation suffered from suboptimal architectural scaling decisions made during initialization, particularly the feed-forward network capacity relative to the model's depth.

**👉 Recommended Successor:** For superior performance, speed, and architectural coherence, please use the improved version: [**`johnnyboycurtis/ModernBERT-small-v2`**](https://huggingface.co/johnnyboycurtis/ModernBERT-small-v2). The successor model addresses these limitations via a more sophisticated Guided Weight Initialization (GUIDE) technique and specialized knowledge distillation tuning.

## Model Details

This is a shallow model with wide layers and is *NOT RECOMMENDED* for production. It was my first attempt at training a ModernBERT model from scratch; the overly wide feed-forward layers are a mistake on my part, stemming from a misunderstanding of ModernBERT's GeGLU design.

ModernBERT-small-1.5 will address the limitations of this design.

```python
from transformers import ModernBertConfig, ModernBertModel

modernbert_small_config = ModernBertConfig(
    hidden_size=384,               # A common dimension for small embedding models
    num_hidden_layers=12,          # Significantly fewer layers than the base's 22
    num_attention_heads=6,         # Must be a divisor of hidden_size
    intermediate_size=1536,        # 4 * hidden_size -- VERY WIDE!!
    max_position_embeddings=1024,  # Max sequence length for the model; originally 8192
)

model = ModernBertModel(modernbert_small_config)
```

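Why this is "very wide": ModernBERT's feed-forward block is gated (GeGLU), so its input projection produces `2 * intermediate_size` features before gating. Setting `intermediate_size=1536` (4x the hidden size) therefore yields a far wider MLP than the classic 4x rule of thumb suggests; for comparison, ModernBERT-base pairs `hidden_size=768` with `intermediate_size=1152` (a 1.5x ratio).
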
### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** Custom-trained ModernBERT-Small (trained from scratch)
- **Architecture:** ModernBERT-Small
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** en
- **License:** MIT

### Model Sources

- **Repository:** [ModernBERT Training Scripts](https://github.com/Johnnyboycurtis/semantic-search-models/tree/main/ModernBERT)
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Training Procedure

This model was developed using a multi-stage "curriculum learning" approach to build a deep semantic understanding. The training scripts are available in the linked repository; a minimal sketch of each stage follows its description below.

#### Stage 1: Foundational Contrastive Training

The model was first trained on a large, diverse collection of over 1 million triplets from three different datasets. This stage taught the model a broad, foundational understanding of language, relevance, and logical relationships (see the sketch after this list).

- **Datasets:**
  - [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
  - [sentence-transformers/trivia-qa-triplet](https://huggingface.co/datasets/sentence-transformers/trivia-qa-triplet)
  - [sentence-transformers/msmarco-msmarco-distilbert-base-v3](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3)
- **Loss Function:** `MultipleNegativesRankingLoss`

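A minimal sketch of this stage, shown on the all-nli triplets only and with default trainer settings; the original run mixed all three datasets, and loading the published checkpoint here stands in for the freshly initialized ModernBERT-Small, so treat the details as illustrative:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("johnnyboycurtis/ModernBERT-small")

# (anchor, positive, negative) triplets; the loss additionally treats the other
# in-batch passages as extra negatives for each anchor
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train")

loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```
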
#### Stage 2: Advanced Knowledge Distillation

The foundational model was then refined by having it mimic a state-of-the-art teacher model ([BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)). This stage transferred the nuanced knowledge of the expert teacher to the more efficient student model (sketched below).

- **Teacher Model:** `BAAI/bge-base-en-v1.5`
- **Loss Function:** `DistillKLDivLoss`

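A toy sketch of the distillation idea, assuming `DistillKLDivLoss`'s (query, positive, negative) input format with the teacher's similarity scores as soft labels; the sample rows are made up, and the repository scripts are the authority on the exact setup:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import DistillKLDivLoss

student = SentenceTransformer("johnnyboycurtis/ModernBERT-small")
teacher = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Toy rows; the real run reused retrieval-style data from Stage 1
rows = {
    "query": ["A girl is eating a cupcake."],
    "positive": ["A woman is eating a cupcake."],
    "negative": ["Zebras are socializing."],
}

# The teacher's query-passage similarities become the soft labels that the
# student's own similarity distribution is pulled towards via KL divergence
q = teacher.encode(rows["query"])
p = teacher.encode(rows["positive"])
n = teacher.encode(rows["negative"])
labels = [[
    teacher.similarity_pairwise(q, p)[0].item(),
    teacher.similarity_pairwise(q, n)[0].item(),
]]

train_dataset = Dataset.from_dict({**rows, "label": labels})
loss = DistillKLDivLoss(student)
trainer = SentenceTransformerTrainer(model=student, train_dataset=train_dataset, loss=loss)
trainer.train()
```
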
#### Stage 3: Task-Specific Fine-Tuning

As a final "calibration" step, the best distilled model was fine-tuned directly on the Semantic Textual Similarity (STS) benchmark. This specializes the model for tasks requiring precise similarity scores (see the sketch below).

- **Dataset:** [sentence-transformers/stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Loss Function:** `CosineSimilarityLoss`

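A minimal sketch of this stage; the actual run used the hyperparameters listed under Training Hyperparameters below, which are omitted here for brevity:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("johnnyboycurtis/ModernBERT-small")

# Pairs of (sentence1, sentence2) with a similarity score normalized to [0, 1]
train_dataset = load_dataset("sentence-transformers/stsb", split="train")

loss = CosineSimilarityLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```
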
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

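The same two-module stack (ModernBERT encoder plus mean pooling) can also be assembled by hand; a sketch that should be equivalent to loading the checkpoint directly:

```python
from sentence_transformers import SentenceTransformer, models

# ModernBERT encoder producing per-token embeddings, truncated at 1024 tokens
word_embedding = models.Transformer("johnnyboycurtis/ModernBERT-small", max_seq_length=1024)
# Mean pooling over token embeddings yields the 384-dimensional sentence vector
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_embedding, pooling])
```
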
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("johnnyboycurtis/ModernBERT-small")

# Run inference
sentences = [
    'A girl is eating a cupcake.',
    'A woman is eating a cupcake.',
    'Zebras are socializing.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.8201, 0.1449],
#         [0.8201, 1.0000, 0.1839],
#         [0.1449, 0.1839, 1.0000]])
```

## Evaluation

### Metrics

#### Semantic Similarity

* Datasets: `sts-dev` and `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) (see the sketch after the table)

| Metric              | sts-dev    | sts-test   |
|:--------------------|:-----------|:-----------|
| pearson_cosine      | 0.7575     | 0.6939     |
| **spearman_cosine** | **0.7563** | **0.6784** |
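
The sts-dev numbers above can be reproduced with the evaluator named earlier; a minimal sketch for the validation split:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("johnnyboycurtis/ModernBERT-small")
eval_dataset = load_dataset("sentence-transformers/stsb", split="validation")

dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_dataset["sentence1"],
    sentences2=eval_dataset["sentence2"],
    scores=eval_dataset["score"],
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev",
)
print(dev_evaluator(model))
# {'sts-dev_pearson_cosine': ..., 'sts-dev_spearman_cosine': ...}
```
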
## Training Details

### Training Dataset

#### stsb

* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1                                                                          | sentence2                                                                          | score                                                           |
  |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
  | type    | string                                                                              | string                                                                              | float                                                            |
  | details | <ul><li>min: 6 tokens</li><li>mean: 10.16 tokens</li><li>max: 28 tokens</li></ul>   | <ul><li>min: 6 tokens</li><li>mean: 10.12 tokens</li><li>max: 25 tokens</li></ul>   | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul>   |
* Samples:
  | sentence1                                                   | sentence2                                                              | score             |
  |:------------------------------------------------------------|:------------------------------------------------------------------------|:-------------------|
  | <code>A plane is taking off.</code>                         | <code>An air plane is taking off.</code>                                | <code>1.0</code>   |
  | <code>A man is playing a large flute.</code>                | <code>A man is playing a flute.</code>                                  | <code>0.76</code>  |
  | <code>A man is spreading shreded cheese on a pizza.</code>  | <code>A man is spreading shredded cheese on an uncooked pizza.</code>   | <code>0.76</code>  |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `load_best_model_at_end`: True

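These map onto `SentenceTransformerTrainingArguments` roughly as follows; `output_dir` is an assumption for illustration, not a value recorded in this card:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-small-stsb",  # assumption; not from the original run
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
)
```
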
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs

| Epoch      | Step     | Training Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:----------:|:--------:|:-------------:|:-----------------------:|:------------------------:|
| 0.2778     | 100      | 0.1535        | -                       | -                        |
| 0.5556     | 200      | 0.068         | 0.7387                  | -                        |
| 0.8333     | 300      | 0.0446        | -                       | -                        |
| 1.1111     | 400      | 0.0411        | 0.7511                  | -                        |
| 1.3889     | 500      | 0.0366        | -                       | -                        |
| 1.6667     | 600      | 0.0425        | 0.7542                  | -                        |
| 1.9444     | 700      | 0.0402        | -                       | -                        |
| 2.2222     | 800      | 0.0373        | 0.7563                  | -                        |
| 2.5        | 900      | 0.0374        | -                       | -                        |
| 2.7778     | 1000     | 0.0384        | 0.7557                  | -                        |
| 3.0556     | 1100     | 0.0357        | -                       | -                        |
| 3.3333     | 1200     | 0.0399        | 0.7562                  | -                        |
| 3.6111     | 1300     | 0.0358        | -                       | -                        |
| **3.8889** | **1400** | **0.0338**    | **0.7563**              | **-**                    |
| -1         | -1       | -             | -                       | 0.6784                   |

* The bold row denotes the saved checkpoint.

### Framework Versions

- Python: 3.13.4
- Sentence Transformers: 5.0.0
- Transformers: 4.52.4
- PyTorch: 2.7.1+cu128
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```