---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:114138
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L6-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2
  results:
  - task:
      type: cross-encoder-binary-classification
      name: Cross Encoder Binary Classification
    dataset:
      name: eval
      type: eval
    metrics:
    - type: accuracy
      value: 0.8988329916416969
      name: Accuracy
    - type: accuracy_threshold
      value: 0.10371464490890503
      name: Accuracy Threshold
    - type: f1
      value: 0.8317532549614461
      name: F1
    - type: f1_threshold
      value: -0.45371487736701965
      name: F1 Threshold
    - type: precision
      value: 0.7977691561590688
      name: Precision
    - type: recall
      value: 0.8687615526802218
      name: Recall
    - type: average_precision
      value: 0.9072097927185474
      name: Average Precision
---

# CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) <!-- at revision c5ee24cb16019beea0893ab7796b1df96625c6b8 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cross_encoder_model_id")
# Get scores for pairs of texts
pairs = [
    ['The item is a promotional display featuring a variety of phone cases, including solid blue cases, cases with artistic designs, and one showcasing a kitten wearing a Santa hat.', 'A black phone case.'],
    ['It was a black umbrella with a loop.', 'A new, mustard-yellow, waffle-knit long-sleeved henley shirt features a three-button placket, a chest pocket with a "Custom Supply" label, and an "L.O.G.G." tag at the neckline.'],
    ['A white sneaker with black, pink, and silver accents.', 'A blue backpack has an orange and white front with black straps.'],
    ['Oh, that sleek white TYESO tumbler with the silver top, I was just about to try it out for keeping my coffee warm all day.', 'It is a white, metal TYESO brand vacuum-insulated bottle/mug with a silver rim and a black lid with a clear straw.'],
    ['It is a bright orange backpack with a small pink strawberry charm.', 'The medium-sized black backpack, likely made of nylon or a similar synthetic material, features a white rectangular tag with "MUSIC IS POWER" printed on it and appears to be in good condition.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'The item is a promotional display featuring a variety of phone cases, including solid blue cases, cases with artistic designs, and one showcasing a kitten wearing a Santa hat.',
    [
        'A black phone case.',
        'A new, mustard-yellow, waffle-knit long-sleeved henley shirt features a three-button placket, a chest pocket with a "Custom Supply" label, and an "L.O.G.G." tag at the neckline.',
        'A blue backpack has an orange and white front with black straps.',
        'It is a white, metal TYESO brand vacuum-insulated bottle/mug with a silver rim and a black lid with a clear straw.',
        'The medium-sized black backpack, likely made of nylon or a similar synthetic material, features a white rectangular tag with "MUSIC IS POWER" printed on it and appears to be in good condition.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
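
For this checkpoint, `model.predict` returns the raw score of the single output head. The classification thresholds reported under Evaluation below (for example, an F1 threshold of -0.4537) are on the logit scale, which indicates the scores are unbounded logits rather than probabilities. A minimal sketch of mapping them to a 0-1 match probability with a sigmoid, using the same placeholder model id and a hypothetical pair:

```python
import torch
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross_encoder_model_id")
# Hypothetical pair of item descriptions
logits = model.predict([
    ["It was a black umbrella with a loop.", "A black umbrella with a wrist loop."],
])
# A sigmoid maps unbounded logits to a 0-1 "probability that the pair matches";
# alternatively, apply the reported thresholds directly on the logit scale.
probabilities = torch.sigmoid(torch.tensor(logits))
print(probabilities)
```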

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Binary Classification

* Dataset: `eval`
* Evaluated with [<code>CEBinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CEBinaryClassificationEvaluator)

| Metric                 | Value      |
|:-----------------------|:-----------|
| accuracy               | 0.8988     |
| accuracy_threshold     | 0.1037     |
| f1                     | 0.8318     |
| f1_threshold           | -0.4537    |
| precision              | 0.7978     |
| recall                 | 0.8688     |
| **average_precision**  | **0.9072** |
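
To recompute these metrics on your own labeled pairs, you can run a classification evaluator over the model. A minimal sketch, assuming the `CrossEncoderClassificationEvaluator` from recent sentence-transformers releases (the successor to the `CEBinaryClassificationEvaluator` linked above) and hypothetical pairs and labels:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

model = CrossEncoder("cross_encoder_model_id")

# Hypothetical labeled pairs: 1 = both texts describe the same item, 0 = they do not
pairs = [
    ["It was a black umbrella with a loop.", "A black umbrella with a wrist loop."],
    ["A white sneaker with black, pink, and silver accents.", "A blue backpack has an orange and white front with black straps."],
]
labels = [1, 0]

evaluator = CrossEncoderClassificationEvaluator(sentence_pairs=pairs, labels=labels, name="eval")
results = evaluator(model)
print(results)  # accuracy, f1, precision, recall, average_precision, ...
```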

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 114,138 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | <ul><li>min: 15 characters</li><li>mean: 106.73 characters</li><li>max: 361 characters</li></ul> | <ul><li>min: 14 characters</li><li>mean: 110.94 characters</li><li>max: 403 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.3</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>The item is a promotional display featuring a variety of phone cases, including solid blue cases, cases with artistic designs, and one showcasing a kitten wearing a Santa hat.</code> | <code>A black phone case.</code> | <code>0.0</code> |
  | <code>It was a black umbrella with a loop.</code> | <code>A new, mustard-yellow, waffle-knit long-sleeved henley shirt features a three-button placket, a chest pocket with a "Custom Supply" label, and an "L.O.G.G." tag at the neckline.</code> | <code>0.0</code> |
  | <code>A white sneaker with black, pink, and silver accents.</code> | <code>A blue backpack has an orange and white front with black straps.</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
  ```json
  {
      "activation_fn": "torch.nn.modules.linear.Identity",
      "pos_weight": null
  }
  ```
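
The run above can be reproduced with the cross-encoder trainer API. A minimal sketch, assuming a `datasets.Dataset` with the `sentence_0`/`sentence_1`/`label` columns described above (the in-line rows are placeholders for the real 114,138-sample dataset, and `output` is a placeholder directory):

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Placeholder rows with the same column layout as the training dataset
train_dataset = Dataset.from_dict({
    "sentence_0": ["It was a black umbrella with a loop."],
    "sentence_1": ["A black umbrella with a wrist loop."],
    "label": [1.0],
})

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")
# Binary cross-entropy on raw logits, matching the Identity activation above
loss = BinaryCrossEntropyLoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="output",
    num_train_epochs=3,              # matches the reported hyperparameters
    per_device_train_batch_size=16,  # matches the reported hyperparameters
)

trainer = CrossEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```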

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch  | Step  | Training Loss | eval_average_precision |
|:------:|:-----:|:-------------:|:----------------------:|
| 0.0701 | 500   | 0.414         | 0.8339                 |
| 0.1402 | 1000  | 0.3334        | 0.8344                 |
| 0.2103 | 1500  | 0.2989        | 0.8549                 |
| 0.2803 | 2000  | 0.2984        | 0.8596                 |
| 0.3504 | 2500  | 0.2921        | 0.8707                 |
| 0.4205 | 3000  | 0.2882        | 0.8734                 |
| 0.4906 | 3500  | 0.2831        | 0.8802                 |
| 0.5607 | 4000  | 0.2878        | 0.8828                 |
| 0.6308 | 4500  | 0.2651        | 0.8857                 |
| 0.7009 | 5000  | 0.2693        | 0.8854                 |
| 0.7710 | 5500  | 0.2731        | 0.8876                 |
| 0.8410 | 6000  | 0.2666        | 0.8905                 |
| 0.9111 | 6500  | 0.2594        | 0.8925                 |
| 0.9812 | 7000  | 0.2631        | 0.8956                 |
| 1.0    | 7134  | -             | 0.8921                 |
| 1.0513 | 7500  | 0.2434        | 0.8955                 |
| 1.1214 | 8000  | 0.2374        | 0.8969                 |
| 1.1915 | 8500  | 0.2197        | 0.8962                 |
| 1.2616 | 9000  | 0.2487        | 0.8980                 |
| 1.3317 | 9500  | 0.2406        | 0.8990                 |
| 1.4017 | 10000 | 0.2384        | 0.8995                 |
| 1.4718 | 10500 | 0.2339        | 0.9021                 |
| 1.5419 | 11000 | 0.2292        | 0.9034                 |
| 1.6120 | 11500 | 0.2214        | 0.9046                 |
| 1.6821 | 12000 | 0.2264        | 0.9049                 |
| 1.7522 | 12500 | 0.2384        | 0.9058                 |
| 1.8223 | 13000 | 0.2309        | 0.9072                 |

### Framework Versions
- Python: 3.12.10
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.9.1+cu128
- Accelerate: 1.11.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->