---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:8118
- loss:CachedMultipleNegativesRankingLoss
base_model: benjamintli/modernbert-cosqa
widget:
- source_sentence: python create path if doesnt exist
sentences:
- "def clean_whitespace(string, compact=False):\n \"\"\"Return string with compressed\
\ whitespace.\"\"\"\n for a, b in (('\\r\\n', '\\n'), ('\\r', '\\n'), ('\\\
n\\n', '\\n'),\n ('\\t', ' '), (' ', ' ')):\n string =\
\ string.replace(a, b)\n if compact:\n for a, b in (('\\n', ' '), ('[\
\ ', '['),\n (' ', ' '), (' ', ' '), (' ', ' ')):\n \
\ string = string.replace(a, b)\n return string.strip()"
- "def rotateImage(img, angle):\n \"\"\"\n\n querries scipy.ndimage.rotate\
\ routine\n :param img: image to be rotated\n :param angle: angle to be\
\ rotated (radian)\n :return: rotated image\n \"\"\"\n imgR = scipy.ndimage.rotate(img,\
\ angle, reshape=False)\n return imgR"
- "def check_create_folder(filename):\n \"\"\"Check if the folder exisits. If\
\ not, create the folder\"\"\"\n os.makedirs(os.path.dirname(filename), exist_ok=True)"
- source_sentence: how decompiled python code looks like
sentences:
- "def xeval(source, optimize=True):\n \"\"\"Compiles to native Python bytecode\
\ and runs program, returning the\n topmost value on the stack.\n\n Args:\n\
\ optimize: Whether to optimize the code after parsing it.\n\n Returns:\n\
\ None: If the stack is empty\n obj: If the stack contains a single\
\ value\n [obj, obj, ...]: If the stack contains many values\n \"\"\"\
\n native = xcompile(source, optimize=optimize)\n return native()"
- "def html(header_rows):\n \"\"\"\n Convert a list of tuples describing a\
\ table into a HTML string\n \"\"\"\n name = 'table%d' % next(tablecounter)\n\
\ return HtmlTable([map(str, row) for row in header_rows], name).render()"
- "def cint8_array_to_numpy(cptr, length):\n \"\"\"Convert a ctypes int pointer\
\ array to a numpy array.\"\"\"\n if isinstance(cptr, ctypes.POINTER(ctypes.c_int8)):\n\
\ return np.fromiter(cptr, dtype=np.int8, count=length)\n else:\n \
\ raise RuntimeError('Expected int pointer')"
- source_sentence: python calling pytest from a python script
sentences:
- "def draw_image(self, ax, image):\n \"\"\"Process a matplotlib image object\
\ and call renderer.draw_image\"\"\"\n self.renderer.draw_image(imdata=utils.image_to_base64(image),\n\
\ extent=image.get_extent(),\n \
\ coordinates=\"data\",\n style={\"\
alpha\": image.get_alpha(),\n \"zorder\"\
: image.get_zorder()},\n mplobj=image)"
- "def test(): # pragma: no cover\n \"\"\"Execute the unit tests on an installed\
\ copy of unyt.\n\n Note that this function requires pytest to run. If pytest\
\ is not\n installed this function will raise ImportError.\n \"\"\"\n \
\ import pytest\n import os\n\n pytest.main([os.path.dirname(os.path.abspath(__file__))])"
- "def is_int(string):\n \"\"\"\n Checks if a string is an integer. If the\
\ string value is an integer\n return True, otherwise return False. \n \n\
\ Args:\n string: a string to test.\n\n Returns: \n boolean\n\
\ \"\"\"\n try:\n a = float(string)\n b = int(a)\n except\
\ ValueError:\n return False\n else:\n return a == b"
- source_sentence: python datetime get last day in a month
sentences:
- "def upgrade(directory, sql, tag, x_arg, revision):\n \"\"\"Upgrade to a later\
\ version\"\"\"\n _upgrade(directory, revision, sql, tag, x_arg)"
- "def flat_list(lst):\n \"\"\"This function flatten given nested list.\n \
\ Argument:\n nested list\n Returns:\n flat list\n \"\"\"\n\
\ if isinstance(lst, list):\n for item in lst:\n for i in\
\ flat_list(item):\n yield i\n else:\n yield lst"
- "def get_last_weekday_in_month(year, month, weekday):\n \"\"\"Get the last\
\ weekday in a given month. e.g:\n\n >>> # the last monday in Jan 2013\n\
\ >>> Calendar.get_last_weekday_in_month(2013, 1, MON)\n datetime.date(2013,\
\ 1, 28)\n \"\"\"\n day = date(year, month, monthrange(year, month)[1])\n\
\ while True:\n if day.weekday() == weekday:\n \
\ break\n day = day - timedelta(days=1)\n return day"
- source_sentence: first duplicate element in list in python
sentences:
- "def python_mime(fn):\n \"\"\"\n Decorator, which adds correct MIME type\
\ for python source to the decorated\n bottle API function.\n \"\"\"\n \
\ @wraps(fn)\n def python_mime_decorator(*args, **kwargs):\n response.content_type\
\ = \"text/x-python\"\n\n return fn(*args, **kwargs)\n\n return python_mime_decorator"
- "def purge_duplicates(list_in):\n \"\"\"Remove duplicates from list while preserving\
\ order.\n\n Parameters\n ----------\n list_in: Iterable\n\n Returns\n\
\ -------\n list\n List of first occurences in order\n \"\"\"\n\
\ _list = []\n for item in list_in:\n if item not in _list:\n \
\ _list.append(item)\n return _list"
- "def getRect(self):\n\t\t\"\"\"\n\t\tReturns the window bounds as a tuple of (x,y,w,h)\n\
\t\t\"\"\"\n\t\treturn (self.x, self.y, self.w, self.h)"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on benjamintli/modernbert-cosqa
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: eval
type: eval
metrics:
- type: cosine_accuracy@1
value: 0.6197339246119734
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.88470066518847
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9390243902439024
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9778270509977827
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6197339246119734
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29490022172949004
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18780487804878046
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0977827050997783
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6197339246119734
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.88470066518847
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9390243902439024
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9778270509977827
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8124675617500997
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7577473339668463
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7588050805217604
name: Cosine Map@100
---
# SentenceTransformer based on benjamintli/modernbert-cosqa
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [benjamintli/modernbert-cosqa](https://huggingface.co/benjamintli/modernbert-cosqa). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [benjamintli/modernbert-cosqa](https://huggingface.co/benjamintli/modernbert-cosqa) <!-- at revision c85b25617894d583fafad7eb7421b7dc0aab0ad9 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'OptimizedModule'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
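The `'architecture': 'OptimizedModule'` entry above likely reflects a `torch.compile` wrapper recorded at save time; the encoder underneath is the ModernBERT transformer from the base model. The pooling layer averages token embeddings over non-padding positions to produce the 768-dimensional sentence vector. A minimal sketch of that mean pooling, assuming you already have token embeddings and an attention mask:
```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real (non-padding) tokens.

    token_embeddings: (batch, seq_len, 768)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # (batch, 768)
    counts = mask.sum(dim=1).clamp(min=1e-9)       # guard against all-padding rows
    return summed / counts
```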
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("benjamintli/modernbert-cosqa")
# Run inference
queries = [
"first duplicate element in list in python",
]
documents = [
'def purge_duplicates(list_in):\n """Remove duplicates from list while preserving order.\n\n Parameters\n ----------\n list_in: Iterable\n\n Returns\n -------\n list\n List of first occurences in order\n """\n _list = []\n for item in list_in:\n if item not in _list:\n _list.append(item)\n return _list',
'def getRect(self):\n\t\t"""\n\t\tReturns the window bounds as a tuple of (x,y,w,h)\n\t\t"""\n\t\treturn (self.x, self.y, self.w, self.h)',
'def python_mime(fn):\n """\n Decorator, which adds correct MIME type for python source to the decorated\n bottle API function.\n """\n @wraps(fn)\n def python_mime_decorator(*args, **kwargs):\n response.content_type = "text/x-python"\n\n return fn(*args, **kwargs)\n\n return python_mime_decorator',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.5986, -0.0006, -0.0122]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `eval`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6197 |
| cosine_accuracy@3 | 0.8847 |
| cosine_accuracy@5   | 0.9390     |
| cosine_accuracy@10 | 0.9778 |
| cosine_precision@1 | 0.6197 |
| cosine_precision@3 | 0.2949 |
| cosine_precision@5 | 0.1878 |
| cosine_precision@10 | 0.0978 |
| cosine_recall@1 | 0.6197 |
| cosine_recall@3 | 0.8847 |
| cosine_recall@5     | 0.9390     |
| cosine_recall@10 | 0.9778 |
| **cosine_ndcg@10** | **0.8125** |
| cosine_mrr@10 | 0.7577 |
| cosine_map@100 | 0.7588 |
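To reproduce these numbers, the same evaluator can be rebuilt from query/corpus dictionaries. A minimal sketch with toy stand-ins (the actual run used the 902-pair eval split; the ids and texts below are illustrative only):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("benjamintli/modernbert-cosqa")

# query id -> query text, doc id -> code snippet, query id -> set of relevant doc ids
queries = {"q1": "python create path if doesnt exist"}
corpus = {"d1": "def check_create_folder(filename):\n    os.makedirs(os.path.dirname(filename), exist_ok=True)"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="eval"
)
results = evaluator(model)
print(results["eval_cosine_ndcg@10"])
```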
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 8,118 training samples
* Columns: <code>query</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive |
|:--------|:--------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.3 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 85.05 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | positive |
|:--------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>python code for opening geojson file</code> | <code>def _loadfilepath(self, filepath, **kwargs):<br> """This loads a geojson file into a geojson python<br> dictionary using the json module.<br> <br> Note: to load with a different text encoding use the encoding argument.<br> """<br> with open(filepath, "r") as f:<br> data = json.load(f, **kwargs)<br> return data</code> |
| <code>python 3 none compare with int</code> | <code>def is_natural(x):<br> """A non-negative integer."""<br> try:<br> is_integer = int(x) == x<br> except (TypeError, ValueError):<br> return False<br> return is_integer and x >= 0</code> |
| <code>design db memory cache python</code> | <code>def refresh(self, document):<br> """ Load a new copy of a document from the database. does not<br> replace the old one """<br> try:<br> old_cache_size = self.cache_size<br> self.cache_size = 0<br> obj = self.query(type(document)).filter_by(mongo_id=document.mongo_id).one()<br> finally:<br> self.cache_size = old_cache_size<br> self.cache_write(obj)<br> return obj</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 64,
"gather_across_devices": false,
"directions": [
"query_to_doc"
],
"partition_mode": "joint",
"hardness_mode": null,
"hardness_strength": 0.0
}
```
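With this loss, each query is scored against every positive in the batch and the other queries' positives serve as in-batch negatives; `scale` multiplies the cosine similarities before the softmax cross-entropy, and the gradient-caching trick processes the batch in mini-batches of 64 so the effective batch size of 1024 fits in memory. A minimal construction sketch (not the full training script):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("benjamintli/modernbert-cosqa")
# scale=20.0 sharpens the similarity distribution before cross-entropy;
# mini_batch_size=64 bounds peak activation memory per embedding chunk.
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=64)
```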
### Evaluation Dataset
#### Unnamed Dataset
* Size: 902 evaluation samples
* Columns: <code>query</code> and <code>positive</code>
* Approximate statistics based on the first 902 samples:
| | query | positive |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.24 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 86.55 tokens</li><li>max: 332 tokens</li></ul> |
* Samples:
| query | positive |
|:--------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>how to remove masked items in python array</code> | <code>def ma(self):<br> """Represent data as a masked array.<br><br> The array is returned with column-first indexing, i.e. for a data file with<br> columns X Y1 Y2 Y3 ... the array a will be a[0] = X, a[1] = Y1, ... .<br><br> inf and nan are filtered via :func:`numpy.isfinite`.<br> """<br> a = self.array<br> return numpy.ma.MaskedArray(a, mask=numpy.logical_not(numpy.isfinite(a)))</code> |
| <code>python deepcopy basic type</code> | <code>def __deepcopy__(self, memo):<br> """Improve deepcopy speed."""<br> return type(self)(value=self._value, enum_ref=self.enum_ref)</code> |
| <code>python number of non nan rows in a row</code> | <code>def count_rows_with_nans(X):<br> """Count the number of rows in 2D arrays that contain any nan values."""<br> if X.ndim == 2:<br> return np.where(np.isnan(X).sum(axis=1) != 0, 1, 0).sum()</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 64,
"gather_across_devices": false,
"directions": [
"query_to_doc"
],
"partition_mode": "joint",
"hardness_mode": null,
"hardness_strength": 0.0
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 1024
- `num_train_epochs`: 10
- `learning_rate`: 2e-06
- `warmup_steps`: 0.1
- `bf16`: True
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 1024
- `push_to_hub`: True
- `hub_model_id`: modernbert-cosqa
- `load_best_model_at_end`: True
- `dataloader_num_workers`: 4
- `batch_sampler`: no_duplicates
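These roughly correspond to `SentenceTransformerTrainingArguments` as sketched below; this is a reconstruction from the logged values, not the original training script, and `output_dir` and `save_strategy` are assumptions (note also that the card logs `warmup_steps: 0.1`, which looks like an intended `warmup_ratio`):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-cosqa",  # assumed
    per_device_train_batch_size=1024,
    per_device_eval_batch_size=1024,
    num_train_epochs=10,
    learning_rate=2e-6,
    warmup_ratio=0.1,      # the card logs warmup_steps=0.1, presumably meant as a ratio
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch", # assumed; required to pair with load_best_model_at_end
    load_best_model_at_end=True,
    dataloader_num_workers=4,
    push_to_hub=True,
    hub_model_id="modernbert-cosqa",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```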
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `per_device_train_batch_size`: 1024
- `num_train_epochs`: 10
- `max_steps`: -1
- `learning_rate`: 2e-06
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: None
- `warmup_steps`: 0.1
- `optim`: adamw_torch_fused
- `optim_args`: None
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `optim_target_modules`: None
- `gradient_accumulation_steps`: 1
- `average_tokens_across_devices`: True
- `max_grad_norm`: 1.0
- `label_smoothing_factor`: 0.0
- `bf16`: True
- `fp16`: False
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `use_cache`: False
- `neftune_noise_alpha`: None
- `torch_empty_cache_steps`: None
- `auto_find_batch_size`: False
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `include_num_input_tokens_seen`: no
- `log_level`: passive
- `log_level_replica`: warning
- `disable_tqdm`: False
- `project`: huggingface
- `trackio_space_id`: trackio
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 1024
- `prediction_loss_only`: True
- `eval_on_start`: False
- `eval_do_concat_batches`: True
- `eval_use_gather_object`: False
- `eval_accumulation_steps`: None
- `include_for_metrics`: []
- `batch_eval_metrics`: False
- `save_only_model`: False
- `save_on_each_node`: False
- `enable_jit_checkpoint`: False
- `push_to_hub`: True
- `hub_private_repo`: None
- `hub_model_id`: modernbert-cosqa
- `hub_strategy`: every_save
- `hub_always_push`: False
- `hub_revision`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `restore_callback_states_from_checkpoint`: False
- `full_determinism`: False
- `seed`: 42
- `data_seed`: None
- `use_cpu`: False
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `dataloader_prefetch_factor`: None
- `remove_unused_columns`: True
- `label_names`: None
- `train_sampling_strategy`: random
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `ddp_backend`: None
- `ddp_timeout`: 1800
- `fsdp`: []
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `deepspeed`: None
- `debug`: []
- `skip_memory_metrics`: True
- `do_predict`: False
- `resume_from_checkpoint`: None
- `warmup_ratio`: None
- `local_rank`: -1
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | eval_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:---------------:|:-------------------:|
| 1.0 | 8 | - | 0.3550 | 0.8071 |
| 1.25 | 10 | 1.0218 | - | - |
| 2.0 | 16 | - | 0.3508 | 0.8110 |
| 2.5 | 20 | 0.9890 | - | - |
| 3.0 | 24 | - | 0.3466 | 0.8131 |
| 3.75 | 30 | 0.9778 | - | - |
| 4.0 | 32 | - | 0.3439 | 0.8136 |
| **5.0** | **40** | **0.9507** | **0.3417** | **0.8148** |
| 6.0 | 48 | - | 0.3404 | 0.8120 |
| 6.25 | 50 | 0.9429 | - | - |
| 7.0 | 56 | - | 0.3387 | 0.8131 |
| 7.5 | 60 | 0.9267 | - | - |
| 8.0 | 64 | - | 0.3378 | 0.8127 |
| 8.75 | 70 | 0.9396 | - | - |
| 9.0 | 72 | - | 0.3370 | 0.8106 |
| 10.0 | 80 | 0.9099 | 0.3366 | 0.8125 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.3.0
- Transformers: 5.3.0
- PyTorch: 2.10.0+cu128
- Accelerate: 1.13.0
- Datasets: 4.8.2
- Tokenizers: 0.22.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->