---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:1485
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2
  results:
  - task:
      type: cross-encoder-classification
      name: Cross Encoder Classification
    dataset:
      name: compliance eval
      type: compliance-eval
    metrics:
    - type: accuracy
      value: 0.9636363636363636
      name: Accuracy
    - type: accuracy_threshold
      value: -1.7519245147705078
      name: Accuracy Threshold
    - type: f1
      value: 0.9662921348314608
      name: F1
    - type: f1_threshold
      value: -2.8691844940185547
      name: F1 Threshold
    - type: precision
      value: 0.9555555555555556
      name: Precision
    - type: recall
      value: 0.9772727272727273
      name: Recall
    - type: average_precision
      value: 0.9939968601076801
      name: Average Precision
---
# CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) <!-- at revision 7b0235231ca2674cb8ca8f022859a6eba2b1c968 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cross_encoder_model_id")
# Get scores for pairs of texts
pairs = [
    ['the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.', "so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades."],
    ['to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.', "so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle."],
    ['the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.', "so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed."],
    ['the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.', "hey, um, checking the dashboard here and it says your prp is overdue, you know, we haven't updated it in a bit and it's flagged."],
    ['to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.', "don't worry about the specifics right now the main thing is getting the allocation because it's oversubscribed so can i confirm the trade"],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.',
    [
        "so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades.",
        "so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle.",
        "so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed.",
        "hey, um, checking the dashboard here and it says your prp is overdue, you know, we haven't updated it in a bit and it's flagged.",
        "don't worry about the specifics right now the main thing is getting the allocation because it's oversubscribed so can i confirm the trade",
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
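Because the model was trained with `BinaryCrossEntropyLoss` and an `Identity` activation, `predict` returns raw logits rather than probabilities. If you want scores in `[0, 1]`, applying a sigmoid is a reasonable post-processing step. A minimal sketch; note that the tuned decision thresholds in the Evaluation section below apply to the raw logits:
```python
import numpy as np

# predict() returns raw logits for this model (Identity activation).
# A sigmoid maps them to [0, 1]; probability 0.5 corresponds to logit 0.0,
# which is not necessarily the tuned decision threshold reported in the
# Evaluation section.
probabilities = 1 / (1 + np.exp(-scores))
print(probabilities)
```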
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Classification
* Dataset: `compliance-eval`
* Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator)
| Metric | Value |
|:----------------------|:----------|
| accuracy | 0.9636 |
| accuracy_threshold | -1.7519 |
| f1 | 0.9663 |
| f1_threshold | -2.8692 |
| precision | 0.9556 |
| recall | 0.9773 |
| **average_precision** | **0.994** |
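The metrics above were produced by `CrossEncoderClassificationEvaluator`. A hedged sketch of how such an evaluation could be reproduced on held-out pairs (the pair and label below are placeholders, not the actual `compliance-eval` split):
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

# Placeholder data; substitute the real requirement/transcript pairs and 0/1 labels.
sentence_pairs = [
    ["the system must identify any risk profile that has expired ...",
     "so, like, your portfolio risk profile is out of date ..."],
]
labels = [1]

evaluator = CrossEncoderClassificationEvaluator(
    sentence_pairs,
    labels,
    name="compliance-eval",
)
model = CrossEncoder("cross_encoder_model_id")
results = evaluator(model)
print(results)  # keys such as 'compliance-eval_accuracy', 'compliance-eval_average_precision'
```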
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,485 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:--------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 135 characters</li><li>mean: 302.95 characters</li><li>max: 725 characters</li></ul> | <ul><li>min: 97 characters</li><li>mean: 179.3 characters</li><li>max: 463 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>the rm must use the instrument_code to identify the soft lock disclosure and inform the client that 'this fund has a soft lock-up duration of xx months. you will be subjected to an early redemption charge of x% by the fund house if you were to redeem the fund within the soft lock-up period.' and, if applicable, that 'the fund is currently still within the soft lock-up period. should you wish to proceed with the redemption, you will incur an early redemption charge of x% by the fund house.'</code> | <code>there's a bit of a soft lock on this one, you know, if you take the money out too soon there's a small charge, but it's no big deal.</code> | <code>0.0</code> |
| <code>the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.</code> | <code>i can't believe how expensive flights have become lately, it's just ridiculous. let's just go ahead with that stock buy, i'll put it through as we discussed earlier, it’s a simple execution for us.</code> | <code>0.0</code> |
| <code>for a client initiated (ci) wrapper where the order initiation is 'client initiated', the bank must confirm that 'this trade is based on your initiated interest in underlying and product type' or 'this trade is based on your initiated interest in underlying or product type'.</code> | <code>exactly, i-i see what you mean, and since you're the one who initiated this conversation about the emerging markets fund, i'll just log that as your interest. did you ever get that classic car fixed up?</code> | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
    "activation_fn": "torch.nn.modules.linear.Identity",
    "pos_weight": null
}
```
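In code, this loss configuration corresponds roughly to the following sketch (parameter names are taken from the serialized config above; `model` is assumed to be a loaded `CrossEncoder`):
```python
import torch
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Identity activation: the loss consumes raw logits directly.
# pos_weight=None: positive and negative pairs are weighted equally.
loss = BinaryCrossEntropyLoss(
    model=model,
    activation_fn=torch.nn.Identity(),
    pos_weight=None,
)
```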
### Evaluation Dataset
#### Unnamed Dataset
* Size: 165 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 165 samples:
| | sentence1 | sentence2 | label |
|:--------|:--------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 135 characters</li><li>mean: 302.44 characters</li><li>max: 725 characters</li></ul> | <ul><li>min: 97 characters</li><li>mean: 178.02 characters</li><li>max: 631 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.53</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.</code> | <code>so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades.</code> | <code>1.0</code> |
| <code>to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.</code> | <code>so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle.</code> | <code>1.0</code> |
| <code>the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.</code> | <code>so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed.</code> | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
    "activation_fn": "torch.nn.modules.linear.Identity",
    "pos_weight": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
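Putting the pieces together, a minimal training sketch using the non-default hyperparameters above (the dataset rows, output path, and evaluation wiring are illustrative assumptions, not the exact training script):
```python
from datasets import Dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# Placeholder rows; the real splits hold 1,485 training and 165 evaluation
# samples with the sentence1 / sentence2 / label columns described above.
train_dataset = Dataset.from_dict({
    "sentence1": ["the system must identify any risk profile that has expired ..."],
    "sentence2": ["so, like, your portfolio risk profile is out of date ..."],
    "label": [1.0],
})
eval_dataset = train_dataset  # substitute the real 165-sample evaluation split

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L12-v2")
loss = BinaryCrossEntropyLoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="output/compliance-cross-encoder",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    eval_strategy="steps",
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```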
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | compliance-eval_average_precision |
|:---------:|:-------:|:-------------:|:---------------:|:---------------------------------:|
| 0.1075 | 10 | 1.9119 | 1.1985 | 0.6783 |
| 0.2151 | 20 | 0.9675 | 1.0970 | 0.6914 |
| 0.3226 | 30 | 0.7458 | 0.4725 | 0.8480 |
| 0.4301 | 40 | 0.5308 | 0.4431 | 0.8849 |
| 0.5376 | 50 | 0.3888 | 0.4183 | 0.9097 |
| 0.6452 | 60 | 0.3477 | 0.3472 | 0.9325 |
| 0.7527 | 70 | 0.3082 | 0.3005 | 0.9524 |
| 0.8602 | 80 | 0.3364 | 0.2682 | 0.9647 |
| 0.9677 | 90 | 0.3069 | 0.2345 | 0.9804 |
| 1.0753 | 100 | 0.2636 | 0.1847 | 0.9886 |
| 1.1828 | 110 | 0.2577 | 0.1793 | 0.9847 |
| 1.2903 | 120 | 0.1793 | 0.1940 | 0.9826 |
| 1.3978 | 130 | 0.19 | 0.2333 | 0.9794 |
| 1.5054 | 140 | 0.1788 | 0.1615 | 0.9858 |
| 1.6129 | 150 | 0.1277 | 0.1576 | 0.9862 |
| 1.7204 | 160 | 0.1851 | 0.1399 | 0.9903 |
| 1.8280 | 170 | 0.1652 | 0.1056 | 0.9947 |
| 1.9355 | 180 | 0.085 | 0.1077 | 0.9949 |
| **2.043** | **190** | **0.1111** | **0.0943** | **0.9955** |
| 2.1505 | 200 | 0.09 | 0.1137 | 0.9955 |
| 2.2581 | 210 | 0.1136 | 0.1222 | 0.9934 |
| 2.3656 | 220 | 0.0703 | 0.1155 | 0.9937 |
| 2.4731 | 230 | 0.0866 | 0.1147 | 0.9935 |
| 2.5806 | 240 | 0.1104 | 0.1089 | 0.9943 |
| 2.6882 | 250 | 0.1523 | 0.1141 | 0.9940 |
| 2.7957 | 260 | 0.1189 | 0.1297 | 0.9943 |
| 2.9032 | 270 | 0.0479 | 0.1365 | 0.9940 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.2.0
- Transformers: 4.57.3
- PyTorch: 2.9.0+cu126
- Accelerate: 1.12.0
- Datasets: 4.0.0
- Tokenizers: 0.22.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |