# CrossEncoder based on distilbert/distilroberta-base

This is a Cross Encoder model finetuned from distilbert/distilroberta-base on the all-nli dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text pair classification.
## Model Details

### Model Description

- Model Type: Cross Encoder
- Base model: distilbert/distilroberta-base
- Maximum Sequence Length: 514 tokens
- Number of Output Labels: 3 labels
- Training Dataset: all-nli
- Language: en
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import CrossEncoder

# Download the model from the Hugging Face Hub
model = CrossEncoder("hajimeni/reranker-distilroberta-base-nli")

# Each inner list is one (premise, hypothesis) text pair
pairs = [
    ['Two women are embracing while holding to go packages.', 'The sisters are hugging goodbye while holding to go packages after just eating lunch.'],
    ['Two women are embracing while holding to go packages.', 'Two woman are holding packages.'],
    ['Two women are embracing while holding to go packages.', 'The men are fighting outside a deli.'],
    ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids in numbered jerseys wash their hands.'],
    ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids at a ballgame wash their hands.'],
]
scores = model.predict(pairs)
print(scores.shape)
# => (5, 3): one logit per output label for each of the 5 pairs
```
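With three output labels, `predict` returns one logit per label for every pair. Continuing the snippet above, a minimal sketch for mapping the logits to NLI label names, assuming the label order (0 = entailment, 1 = neutral, 2 = contradiction) implied by the all-nli samples later in this card:

```python
# Assumption: label order inferred from the all-nli samples in this card
# (0 = entailment, 1 = neutral, 2 = contradiction)
label_names = ["entailment", "neutral", "contradiction"]
predictions = [label_names[i] for i in scores.argmax(axis=1)]
print(predictions)
```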
## Evaluation

### Metrics

#### Cross Encoder Classification

| Metric      | AllNLI-dev | AllNLI-test |
|:------------|:-----------|:------------|
| f1_macro    | 0.8472     | 0.7673      |
| f1_micro    | 0.848      | 0.7679      |
| f1_weighted | 0.8472     | 0.7682      |
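These figures can be recomputed outside the built-in evaluator. The sketch below assumes the evaluation data is the first 1,000 samples of the `pair-class` config of `sentence-transformers/all-nli` (the dataset id is inferred from this card, see Evaluation Dataset below) and uses scikit-learn's `f1_score`:

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from sentence_transformers import CrossEncoder

model = CrossEncoder("hajimeni/reranker-distilroberta-base-nli")

# Assumption: the AllNLI-dev metrics use the first 1,000 dev samples
dev = load_dataset("sentence-transformers/all-nli", "pair-class", split="dev[:1000]")

logits = model.predict(list(zip(dev["premise"], dev["hypothesis"])))
preds = np.argmax(logits, axis=1)
for average in ("macro", "micro", "weighted"):
    print(f"f1_{average}: {f1_score(dev['label'], preds, average=average):.4f}")
```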
## Training Details

### Training Dataset

#### all-nli

- Dataset: all-nli at d482672
- Size: 100,000 training samples
- Columns: premise, hypothesis, and label
- Approximate statistics based on the first 1000 samples:
|         | premise | hypothesis | label |
|:--------|:--------|:-----------|:------|
| type    | string  | string     | int   |
| details | min: 23 characters<br>mean: 69.54 characters<br>max: 227 characters | min: 11 characters<br>mean: 38.26 characters<br>max: 131 characters | 0: ~33.40%<br>1: ~33.30%<br>2: ~33.30% |
- Samples:

| premise | hypothesis | label |
|:--------|:-----------|:------|
| A person on a horse jumps over a broken down airplane. | A person is training his horse for a competition. | 1 |
| A person on a horse jumps over a broken down airplane. | A person is at a diner, ordering an omelette. | 2 |
| A person on a horse jumps over a broken down airplane. | A person is outdoors, on a horse. | 0 |
- Loss: CrossEntropyLoss
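To inspect the training data described above, it can be loaded with the datasets library; the Hub id `sentence-transformers/all-nli` and the `pair-class` config are assumptions inferred from the column names and labels in this card:

```python
from datasets import load_dataset

# Assumed dataset id/config; d482672 is the short commit hash from this card
train = load_dataset(
    "sentence-transformers/all-nli",
    "pair-class",
    split="train[:100000]",  # the card reports 100,000 training samples
    revision="d482672",
)
print(train[0])  # {'premise': ..., 'hypothesis': ..., 'label': ...}
```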
### Evaluation Dataset

#### all-nli

- Dataset: all-nli at d482672
- Size: 1,000 evaluation samples
- Columns: premise, hypothesis, and label
- Approximate statistics based on the first 1000 samples:
|         | premise | hypothesis | label |
|:--------|:--------|:-----------|:------|
| type    | string  | string     | int   |
| details | min: 16 characters<br>mean: 75.01 characters<br>max: 229 characters | min: 11 characters<br>mean: 37.66 characters<br>max: 116 characters | 0: ~33.10%<br>1: ~33.30%<br>2: ~33.60% |
- Samples:

| premise | hypothesis | label |
|:--------|:-----------|:------|
| Two women are embracing while holding to go packages. | The sisters are hugging goodbye while holding to go packages after just eating lunch. | 1 |
| Two women are embracing while holding to go packages. | Two woman are holding packages. | 0 |
| Two women are embracing while holding to go packages. | The men are fighting outside a deli. | 2 |
- Loss: CrossEntropyLoss
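The approximate statistics in the table above can be recomputed directly; a small sketch under the same dataset assumptions as before:

```python
from collections import Counter

import numpy as np
from datasets import load_dataset

dev = load_dataset("sentence-transformers/all-nli", "pair-class", split="dev[:1000]")

# Label distribution (expected: roughly one third each for labels 0/1/2)
print(Counter(dev["label"]))

# Character-length statistics for the premise column
lengths = np.array([len(p) for p in dev["premise"]])
print(lengths.min(), lengths.mean(), lengths.max())
```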
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
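Taken together with the datasets and loss above, the run roughly corresponds to the following sketch. It relies on the Sentence Transformers v4 cross-encoder training API (`CrossEncoderTrainer`, `CrossEncoderTrainingArguments`, and the `CrossEntropyLoss` named above); treat it as an approximation of the training script, not the exact one used.

```python
from datasets import load_dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import CrossEntropyLoss

# Base model with 3 output labels, as listed under Model Description
model = CrossEncoder("distilbert/distilroberta-base", num_labels=3)

# Dataset id/config assumed from the column names and labels in this card
train_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="train[:100000]")
eval_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="dev[:1000]")

loss = CrossEntropyLoss(model)

# Non-default hyperparameters from the list above; everything else stays at its default
args = CrossEncoderTrainingArguments(
    output_dir="reranker-distilroberta-base-nli",
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```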
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | AllNLI-dev_f1_macro | AllNLI-test_f1_macro |
|:------:|:----:|:-------------:|:---------------:|:-------------------:|:--------------------:|
| -1     | -1   | -             | -               | 0.1665              | -                    |
| 0.0640 | 100  | 1.0595        | -               | -                   | -                    |
| 0.1280 | 200  | 0.7           | -               | -                   | -                    |
| 0.1919 | 300  | 0.6039        | -               | -                   | -                    |
| 0.2559 | 400  | 0.5821        | -               | -                   | -                    |
| 0.3199 | 500  | 0.5521        | 0.4509          | 0.8186              | -                    |
| 0.3839 | 600  | 0.5148        | -               | -                   | -                    |
| 0.4479 | 700  | 0.5334        | -               | -                   | -                    |
| 0.5118 | 800  | 0.5125        | -               | -                   | -                    |
| 0.5758 | 900  | 0.4893        | -               | -                   | -                    |
| 0.6398 | 1000 | 0.503         | 0.3864          | 0.8554              | -                    |
| 0.7038 | 1100 | 0.4706        | -               | -                   | -                    |
| 0.7678 | 1200 | 0.4635        | -               | -                   | -                    |
| 0.8317 | 1300 | 0.44          | -               | -                   | -                    |
| 0.8957 | 1400 | 0.459         | -               | -                   | -                    |
| 0.9597 | 1500 | 0.4481        | 0.3537          | 0.8472              | -                    |
| -1     | -1   | -             | -               | -                   | 0.7673               |

The rows with Epoch/Step of -1 are evaluations outside the training loop: the first is the untrained baseline on AllNLI-dev, the last the final evaluation on AllNLI-test.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 4.0.1
- Transformers: 4.50.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```