SentenceTransformer based on microsoft/deberta-v3-small
This is a sentence-transformers model finetuned from microsoft/deberta-v3-small on the stanfordnlp/snli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: microsoft/deberta-v3-small
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: stanfordnlp/snli
- Language: en
Model Sources
- Documentation: https://sbert.net
- Repository: https://github.com/UKPLab/sentence-transformers
- Hugging Face: https://huggingface.co/models?library=sentence-transformers
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
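The Pooling module above uses mean pooling (`pooling_mode_mean_tokens: True`): token embeddings from the transformer are averaged, with padding positions excluded via the attention mask. A minimal sketch of that computation in plain PyTorch, assuming hidden states and an attention mask as a Hugging Face model would produce them:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (batch, seq_len, 768) hidden states from the transformer
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # token counts, guarded against zero
    return summed / counts                         # (batch, 768) sentence embeddings
```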
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("bobox/DeBERTaV3-small-ST-AdaptiveLayer-3L-ep2")

# Run inference
sentences = [
    'These girls are having a great time looking for seashells.',
    'The girls are happy.',
    'A girl is standing outside.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores between all pairs of embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
Evaluation
Metrics
Binary Classification
| Metric                       | Value    |
|:-----------------------------|:---------|
| cosine_accuracy              | 0.6653   |
| cosine_accuracy_threshold    | 0.6692   |
| cosine_f1                    | 0.7051   |
| cosine_f1_threshold          | 0.5758   |
| cosine_precision             | 0.5903   |
| cosine_recall                | 0.8753   |
| cosine_ap                    | 0.7024   |
| dot_accuracy                 | 0.6308   |
| dot_accuracy_threshold       | 127.0527 |
| dot_f1                       | 0.6984   |
| dot_f1_threshold             | 101.7725 |
| dot_precision                | 0.5773   |
| dot_recall                   | 0.8838   |
| dot_ap                       | 0.6558   |
| manhattan_accuracy           | 0.6675   |
| manhattan_accuracy_threshold | 210.9939 |
| manhattan_f1                 | 0.7108   |
| manhattan_f1_threshold       | 252.6531 |
| manhattan_precision          | 0.6061   |
| manhattan_recall             | 0.8592   |
| manhattan_ap                 | 0.7094   |
| euclidean_accuracy           | 0.6619   |
| euclidean_accuracy_threshold | 11.2276  |
| euclidean_f1                 | 0.7073   |
| euclidean_f1_threshold       | 12.8508  |
| euclidean_precision          | 0.5879   |
| euclidean_recall             | 0.8876   |
| euclidean_ap                 | 0.7038   |
| max_accuracy                 | 0.6675   |
| max_accuracy_threshold       | 210.9939 |
| max_f1                       | 0.7108   |
| max_f1_threshold             | 252.6531 |
| max_precision                | 0.6061   |
| max_recall                   | 0.8876   |
| max_ap                       | 0.7094   |
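These are binary pair-classification metrics: for each similarity or distance function, the evaluator selects the decision threshold that maximizes accuracy (or F1) on the evaluation pairs. As an illustration only, here is how the reported `cosine_accuracy_threshold` (0.6692, from the table above) would be applied at inference time; the sentence pair is invented for the example:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bobox/DeBERTaV3-small-ST-AdaptiveLayer-3L-ep2")

emb = model.encode(["A man is playing a guitar.", "Someone plays an instrument."])
cosine = model.similarity(emb[0:1], emb[1:2]).item()  # cosine similarity by default

# Classify the pair as positive if similarity clears the tuned threshold
is_positive_pair = cosine >= 0.6692
print(cosine, is_positive_pair)
```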
Training Details
Training Dataset
stanfordnlp/snli
- Dataset: stanfordnlp/snli at cdb5c3d
- Size: 67,190 training samples
- Columns: sentence1, sentence2, and label
- Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | label |
  |:--------|:----------|:----------|:------|
  | type    | string    | string    | int   |
  | details | min: 4 tokens, mean: 21.19 tokens, max: 133 tokens | min: 4 tokens, mean: 11.77 tokens, max: 49 tokens | |
- Samples:
  | sentence1 | sentence2 | label |
  |:----------|:----------|:------|
  | Without a placebo group, we still won't know if any of the treatments are better than nothing and therefore worth giving. | It is necessary to use a controlled method to ensure the treatments are worthwhile. | 0 |
  | It was conducted in silence. | It was done silently. | 0 |
  | oh Lewisville any decent food in your cafeteria up there | Is there any decent food in your cafeteria up there in Lewisville? | 0 |
- Loss: AdaptiveLayerLoss with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "n_layers_per_step": 3,
      "last_layer_weight": 1,
      "prior_layers_weight": 0.3,
      "kl_div_weight": 1,
      "kl_temperature": 1
  }
  ```
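For reference, these parameters correspond roughly to constructing the loss in Sentence Transformers as sketched below; this is not the exact training script, but the argument names follow the `sentence_transformers.losses` API:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("microsoft/deberta-v3-small")

inner_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.AdaptiveLayerLoss(
    model=model,
    loss=inner_loss,
    n_layers_per_step=3,      # also train 3 randomly sampled earlier layers per step
    last_layer_weight=1,      # weight of the final-layer loss
    prior_layers_weight=0.3,  # weight of the earlier-layer losses
    kl_div_weight=1,          # weight of the KL term between layer similarity distributions
    kl_temperature=1,         # softmax temperature for that KL term
)
```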
Evaluation Dataset
stanfordnlp/snli
- Dataset: stanfordnlp/snli at cdb5c3d
- Size: 6,626 evaluation samples
- Columns: premise, hypothesis, and label
- Approximate statistics based on the first 1000 samples:
  |         | premise | hypothesis | label |
  |:--------|:--------|:-----------|:------|
  | type    | string  | string     | int   |
  | details | min: 6 tokens, mean: 17.28 tokens, max: 59 tokens | min: 4 tokens, mean: 10.53 tokens, max: 32 tokens | |
- Samples:
  | premise | hypothesis | label |
  |:--------|:-----------|:------|
  | This church choir sings to the masses as they sing joyous songs from the book at a church. | The church has cracks in the ceiling. | 0 |
  | This church choir sings to the masses as they sing joyous songs from the book at a church. | The church is filled with song. | 1 |
  | A woman with a green headscarf, blue shirt and a very big grin. | The woman is young. | 0 |
- Loss: AdaptiveLayerLoss with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "n_layers_per_step": 3,
      "last_layer_weight": 1,
      "prior_layers_weight": 0.3,
      "kl_div_weight": 1,
      "kl_temperature": 1
  }
  ```
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 45
- per_device_eval_batch_size: 22
- learning_rate: 3e-06
- weight_decay: 1e-09
- num_train_epochs: 2
- lr_scheduler_type: cosine
- warmup_ratio: 0.5
- save_safetensors: False
- fp16: True
- push_to_hub: True
- hub_model_id: bobox/DeBERTaV3-small-ST-AdaptiveLayer-3L-ep2-n
- hub_strategy: checkpoint
- batch_sampler: no_duplicates
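A sketch of how these non-default values would be passed to the Sentence Transformers trainer; the argument names follow the 3.x `SentenceTransformerTrainingArguments` API, and `output_dir` is a placeholder, not taken from this card:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=45,
    per_device_eval_batch_size=22,
    learning_rate=3e-6,
    weight_decay=1e-9,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.5,
    save_safetensors=False,
    fp16=True,
    push_to_hub=True,
    hub_model_id="bobox/DeBERTaV3-small-ST-AdaptiveLayer-3L-ep2-n",
    hub_strategy="checkpoint",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives for MNRL
)
```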
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 45
- per_device_eval_batch_size: 22
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 3e-06
- weight_decay: 1e-09
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.5
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: False
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: True
- resume_from_checkpoint: None
- hub_model_id: bobox/DeBERTaV3-small-ST-AdaptiveLayer-3L-ep2-n
- hub_strategy: checkpoint
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch  | Step | Training Loss | loss   | max_ap |
|:------:|:----:|:-------------:|:------:|:------:|
| 0.1004 | 150  | 4.9809        | -      | -      |
| 0.2001 | 299  | -             | 3.8956 | 0.6130 |
| 0.2008 | 300  | 3.8459        | -      | -      |
| 0.3012 | 450  | 3.1941        | -      | -      |
| 0.4003 | 598  | -             | 3.2066 | 0.6526 |
| 0.4016 | 600  | 2.7939        | -      | -      |
| 0.5020 | 750  | 2.3082        | -      | -      |
| 0.6004 | 897  | -             | 2.4595 | 0.6884 |
| 0.6024 | 900  | 1.9658        | -      | -      |
| 0.7028 | 1050 | 1.6975        | -      | -      |
| 0.8005 | 1196 | -             | 2.0292 | 0.7010 |
| 0.8032 | 1200 | 1.528         | -      | -      |
| 0.9036 | 1350 | 1.3763        | -      | -      |
| 1.0007 | 1495 | -             | 1.8192 | 0.7071 |
| 1.0040 | 1500 | 1.262         | -      | -      |
| 1.1044 | 1650 | 1.2033        | -      | -      |
| 1.2008 | 1794 | -             | 1.6673 | 0.7082 |
| 1.2048 | 1800 | 1.1221        | -      | -      |
| 1.3052 | 1950 | 1.0963        | -      | -      |
| 1.4009 | 2093 | -             | 1.5816 | 0.7103 |
| 1.4056 | 2100 | 1.0742        | -      | -      |
| 1.5060 | 2250 | 1.0242        | -      | -      |
| 1.6011 | 2392 | -             | 1.5368 | 0.7094 |
| 1.6064 | 2400 | 1.0036        | -      | -      |
| 1.7068 | 2550 | 1.0143        | -      | -      |
| 1.8012 | 2691 | -             | 1.5158 | 0.7094 |
| 1.8072 | 2700 | 0.9799        | -      | -      |
| 1.9076 | 2850 | 0.9777        | -      | -      |
Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2
- Accelerate: 0.30.1
- Datasets: 2.19.2
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
AdaptiveLayerLoss
```bibtex
@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings},
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```