CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2

This is a Cross Encoder model finetuned from cross-encoder/ms-marco-MiniLM-L12-v2 using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: cross-encoder/ms-marco-MiniLM-L12-v2
  • Number of Parameters: 33.4M (F32)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("taido/minilm")
# Get scores for pairs of texts
pairs = [
    ['the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.', "so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades."],
    ['to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.', "so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle."],
    ['the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.', "so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed."],
    ['the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.', "hey, um, checking the dashboard here and it says your prp is overdue, you know, we haven't updated it in a bit and it's flagged."],
    ['to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.', "don't worry about the specifics right now the main thing is getting the allocation because it's oversubscribed so can i confirm the trade"],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.',
    [
        "so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades.",
        "so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle.",
        "so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed.",
        "hey, um, checking the dashboard here and it says your prp is overdue, you know, we haven't updated it in a bit and it's flagged.",
        "don't worry about the specifics right now the main thing is getting the allocation because it's oversubscribed so can i confirm the trade",
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
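The dictionaries returned by `rank` hold the index of each candidate in the input list (`corpus_id`) and its raw score, sorted from most to least relevant. A minimal pure-Python sketch of that sorting step, using made-up scores in place of real model outputs:

```python
# Hypothetical raw scores (logits) for the five candidate texts above;
# real values would come from model.predict(pairs).
scores = [7.2, -3.1, -4.8, 6.5, -5.0]

# rank sorts candidates by score, descending, and reports each
# candidate's original index as 'corpus_id'.
ranks = sorted(
    ({"corpus_id": i, "score": s} for i, s in enumerate(scores)),
    key=lambda r: r["score"],
    reverse=True,
)
print(ranks[0])  # → {'corpus_id': 0, 'score': 7.2}
```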

Evaluation

Metrics

Cross Encoder Classification

Metric                Value
accuracy              0.9636
accuracy_threshold    -1.7519
f1                    0.9663
f1_threshold          -2.8692
precision             0.9556
recall                0.9773
average_precision     0.9940
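The *_threshold values apply to the model's raw logit scores, not to probabilities: a pair is predicted positive when its score exceeds the threshold, and the evaluator reports the threshold that maximizes each metric. A small illustration with made-up scores (not actual model outputs):

```python
# Illustrative raw logits and gold labels; accuracy_threshold above
# (-1.7519) is applied directly to the logits.
scores = [4.2, -0.9, -6.3, 3.1]
labels = [1, 1, 0, 0]
threshold = -1.7519

preds = [1 if s > threshold else 0 for s in scores]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(preds, accuracy)  # → [1, 1, 0, 1] 0.75
```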

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,485 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:

              sentence1                  sentence2                  label
    type      string                     string                     float
    details   min: 135 characters        min: 97 characters         min: 0.0
              mean: 302.95 characters    mean: 179.3 characters     mean: 0.49
              max: 725 characters        max: 463 characters        max: 1.0
  • Samples:
    sentence1: the rm must use the instrument_code to identify the soft lock disclosure and inform the client that 'this fund has a soft lock-up duration of xx months. you will be subjected to an early redemption charge of x% by the fund house if you were to redeem the fund within the soft lock-up period.' and, if applicable, that 'the fund is currently still within the soft lock-up period. should you wish to proceed with the redemption, you will incur an early redemption charge of x% by the fund house.'
    sentence2: there's a bit of a soft lock on this one, you know, if you take the money out too soon there's a small charge, but it's no big deal.
    label: 0.0

    sentence1: the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.
    sentence2: i can't believe how expensive flights have become lately, it's just ridiculous. let's just go ahead with that stock buy, i'll put it through as we discussed earlier, it's a simple execution for us.
    label: 0.0

    sentence1: for a client initiated (ci) wrapper where the order initiation is 'client initiated', the bank must confirm that 'this trade is based on your initiated interest in underlying and product type' or 'this trade is based on your initiated interest in underlying or product type'.
    sentence2: exactly, i-i see what you mean, and since you're the one who initiated this conversation about the emerging markets fund, i'll just log that as your interest. did you ever get that classic car fixed up?
    label: 1.0
  • Loss: BinaryCrossEntropyLoss with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
    
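An Identity activation means the model head outputs raw logits and the loss applies the sigmoid internally (as in torch.nn.BCEWithLogitsLoss); pos_weight: null weights positive and negative pairs equally. A worked pure-Python version of the per-pair computation:

```python
import math

def bce_with_logits(logit: float, label: float) -> float:
    """Binary cross-entropy on a raw logit; the sigmoid is applied inside."""
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

# A confident, correct positive prediction incurs a small loss ...
print(round(bce_with_logits(4.0, 1.0), 4))  # → 0.0181
# ... while the same logit on a negative pair is penalized heavily.
print(round(bce_with_logits(4.0, 0.0), 4))  # → 4.0181
```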

Evaluation Dataset

Unnamed Dataset

  • Size: 165 evaluation samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 165 samples:

              sentence1                  sentence2                  label
    type      string                     string                     float
    details   min: 135 characters        min: 97 characters         min: 0.0
              mean: 302.44 characters    mean: 178.02 characters    mean: 0.53
              max: 725 characters        max: 631 characters        max: 1.0
  • Samples:
    sentence1: the system must identify any risk profile that has expired and is currently marked as overdue to ensure ongoing suitability compliance.
    sentence2: so, like, your portfolio risk profile is out of date, and i've got a flag here saying it needs renewal before we can do any new trades.
    label: 1.0

    sentence1: to identify risk misalignment trades, the system must flag a risk mismatch whenever the product risk rating exceeds the client risk profile.
    sentence2: so, it's a solid choice, but i gotta mention, there's a bit of a risk mismatch between the fund's rating and your own suitability score, so it's a bit of a hurdle.
    label: 1.0

    sentence1: the system identifies an execution only wrapper when the order initiation confirms that this trade is performed on an execution only basis with no advice given.
    sentence2: so... uh... let's just do it, but it's execution only, you know? no advice was provided, so you're on your own with the strategy on this one, i'm so rushed.
    label: 1.0
  • Loss: BinaryCrossEntropyLoss with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • warmup_ratio: 0.1
  • load_best_model_at_end: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss Validation Loss compliance-eval_average_precision
0.1075 10 1.9119 1.1985 0.6783
0.2151 20 0.9675 1.0970 0.6914
0.3226 30 0.7458 0.4725 0.8480
0.4301 40 0.5308 0.4431 0.8849
0.5376 50 0.3888 0.4183 0.9097
0.6452 60 0.3477 0.3472 0.9325
0.7527 70 0.3082 0.3005 0.9524
0.8602 80 0.3364 0.2682 0.9647
0.9677 90 0.3069 0.2345 0.9804
1.0753 100 0.2636 0.1847 0.9886
1.1828 110 0.2577 0.1793 0.9847
1.2903 120 0.1793 0.1940 0.9826
1.3978 130 0.1900 0.2333 0.9794
1.5054 140 0.1788 0.1615 0.9858
1.6129 150 0.1277 0.1576 0.9862
1.7204 160 0.1851 0.1399 0.9903
1.8280 170 0.1652 0.1056 0.9947
1.9355 180 0.0850 0.1077 0.9949
2.0430 190 0.1111 0.0943 0.9955
2.1505 200 0.0900 0.1137 0.9955
2.2581 210 0.1136 0.1222 0.9934
2.3656 220 0.0703 0.1155 0.9937
2.4731 230 0.0866 0.1147 0.9935
2.5806 240 0.1104 0.1089 0.9943
2.6882 250 0.1523 0.1141 0.9940
2.7957 260 0.1189 0.1297 0.9943
2.9032 270 0.0479 0.1365 0.9940
  • The bold row denotes the saved checkpoint.
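Since load_best_model_at_end is set and no explicit metric_for_best_model appears in the hyperparameters, checkpoint selection defaults to the lowest validation loss, which points at step 190. A small sketch over a few rows transcribed from the log above:

```python
# (epoch, step, train_loss, val_loss, average_precision) rows
# transcribed from the training log above (a representative subset).
rows = [
    (0.1075, 10, 1.9119, 1.1985, 0.6783),
    (1.0753, 100, 0.2636, 0.1847, 0.9886),
    (1.8280, 170, 0.1652, 0.1056, 0.9947),
    (2.0430, 190, 0.1111, 0.0943, 0.9955),
    (2.9032, 270, 0.0479, 0.1365, 0.9940),
]

best = min(rows, key=lambda r: r[3])  # lowest validation loss
print(best[1], best[3])  # → 190 0.0943
```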

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.0+cu126
  • Accelerate: 1.12.0
  • Datasets: 4.0.0
  • Tokenizers: 0.22.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
Safetensors: 33.4M params, F32 tensors

Model tree for taido/minilm
