WordSenseTransformer based on BAAI/bge-base-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Full Model Architecture
```
WordSenseTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): WordPooling()
)
```
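`WordPooling()` is a custom module shipped with this model repository rather than a built-in sentence-transformers pooling layer. As rough intuition only, word-level pooling presumably averages the token embeddings belonging to the target word (the text before the first `[SEP]`). A hypothetical sketch, assuming a `word_mask` marking those tokens is available; the actual implementation may differ:

```python
import torch
from torch import nn

class WordPoolingSketch(nn.Module):
    """Hypothetical word-level pooling: mean-pool the token embeddings of
    the target word. The real WordPooling ships with the model repository
    and may differ from this sketch."""

    def forward(self, features: dict) -> dict:
        token_embeddings = features["token_embeddings"]     # (batch, seq, 768)
        # `word_mask` (1 for tokens of the target word, 0 elsewhere) is an
        # assumed input; it is not a standard sentence-transformers feature.
        mask = features["word_mask"].unsqueeze(-1).float()  # (batch, seq, 1)
        summed = (token_embeddings * mask).sum(dim=1)
        counts = mask.sum(dim=1).clamp(min=1e-9)
        features["sentence_embedding"] = summed / counts
        return features
```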
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("wordnet-sense-bge-small")

# Each input pairs a target word with a usage example or a sense gloss
sentences = [
    'mean [SEP] My ex-husband means nothing to me',
    'mean [SEP] have a specified degree of importance',
    'opalesce [SEP] reflect light or colors like an opal',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores between all pairs of embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
```
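Because every input pairs a target word with either a usage example or a gloss, word sense disambiguation reduces to ranking candidate glosses by similarity to the word in context. A small illustration (the candidate glosses here are ad-hoc examples, not the model's actual sense inventory):

```python
# Pick the gloss whose embedding is closest to the word in context.
context = "mean [SEP] My ex-husband means nothing to me"
glosses = [
    "mean [SEP] have a specified degree of importance",
    "mean [SEP] denote or connote",
    "mean [SEP] an average of n numbers",
]

context_emb = model.encode([context])
gloss_embs = model.encode(glosses)
scores = model.similarity(context_emb, gloss_embs)  # shape (1, 3)
print(glosses[scores.argmax().item()])
```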
Evaluation
Metrics
Information Retrieval
| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.966  |
| cosine_accuracy@5   | 0.999  |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.966  |
| cosine_precision@5  | 0.1998 |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.966  |
| cosine_recall@5     | 0.999  |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@1       | 0.966  |
| cosine_ndcg@5       | 0.986  |
| cosine_ndcg@10      | 0.9863 |
| cosine_mrr@1        | 0.966  |
| cosine_mrr@5        | 0.9815 |
| cosine_mrr@10       | 0.9816 |
| cosine_map@100      | 0.9816 |
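Each query (a word in context) appears to have exactly one relevant document (its correct gloss), which is why accuracy@k and recall@k coincide and precision@k is roughly accuracy@k divided by k. Metrics like these can be reproduced with sentence-transformers' InformationRetrievalEvaluator; a minimal sketch with placeholder data standing in for the real evaluation set described below:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: queries are words in context, the corpus holds sense
# glosses, and each query maps to its single correct gloss.
queries = {"q1": "mean [SEP] My ex-husband means nothing to me"}
corpus = {
    "d1": "mean [SEP] have a specified degree of importance",
    "d2": "opalesce [SEP] reflect light or colors like an opal",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="wordnet-validation",
)
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg, mrr, map
```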
Training Details
Training Dataset
Unnamed Dataset
- Size: 41,784 training samples
- Columns: `anchor`, `positive`, `negative_0`, `negative_1`, `negative_2`, and `negative_3`
- Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_0 | negative_1 | negative_2 | negative_3 |
|:--|:--|:--|:--|:--|:--|:--|
| type | string | string | string | string | string | string |
| details | min: 19, mean: 46.08, max: 416 characters | min: 18, mean: 60.28, max: 375 characters | min: 0, mean: 50.46, max: 155 characters | min: 0, mean: 49.93, max: 253 characters | min: 0, mean: 48.94, max: 253 characters | min: 0, mean: 42.58, max: 126 characters |
- Samples:
| anchor | positive | negative_0 | negative_1 | negative_2 | negative_3 |
|:--|:--|:--|:--|:--|:--|
| `avenged [SEP] an avenged injury` | `avenged [SEP] for which vengeance has been taken` | | | | |
| `unavenged [SEP] an unavenged murder` | `unavenged [SEP] for which vengeance has not been taken` | | | | |
| `beaten [SEP] beaten gold` | `beaten [SEP] formed or made thin by hammering` | `beaten [SEP] much trodden and worn smooth or bare` | | | |
- Loss: `main.InterWordNegativeLoss`
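`main.InterWordNegativeLoss` is a custom loss defined in the training script, and its implementation is not documented in this card. Judging from the column layout (one anchor, one positive, up to four negatives, some of them empty), it is presumably a contrastive loss in the spirit of MultipleNegativesRankingLoss with explicit, possibly padded, hard negatives. A hypothetical sketch of that idea:

```python
import torch
from torch import nn
import torch.nn.functional as F

class InterWordNegativeLossSketch(nn.Module):
    """Hypothetical InfoNCE-style loss over (anchor, positive, negatives).
    The real main.InterWordNegativeLoss may differ substantially."""

    def __init__(self, scale: float = 20.0):
        super().__init__()
        self.scale = scale

    def forward(self, anchor, positive, negatives, negative_mask):
        # anchor, positive: (batch, dim); negatives: (batch, n_neg, dim)
        # negative_mask: (batch, n_neg) bool, False where the negative slot
        # held an empty string (as in the samples above) and must be ignored.
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)

        pos_scores = (anchor * positive).sum(-1, keepdim=True)      # (batch, 1)
        neg_scores = torch.einsum("bd,bnd->bn", anchor, negatives)  # (batch, n_neg)
        neg_scores = neg_scores.masked_fill(~negative_mask, float("-inf"))

        logits = torch.cat([pos_scores, neg_scores], dim=1) * self.scale
        labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
        return F.cross_entropy(logits, labels)
```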
Evaluation Dataset
Unnamed Dataset
- Size: 1,000 evaluation samples
- Columns: `anchor` and `positive`
- Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--|:--|:--|
| type | string | string |
| details | min: 20, mean: 52.46, max: 416 characters | min: 16, mean: 61.81, max: 244 characters |
- Samples:
| anchor | positive |
|:--|:--|
| `light [SEP] a light lilting voice like a silver bell` | `light [SEP] (of sound or color) free from anything that dulls or dims` |
| `maximize [SEP] He maximized his role` | `maximize [SEP] make the most of` |
| `coastwise [SEP] coastwise winds contributed to the storm` | `coastwise [SEP] along or following a coast` |
- Loss: `main.InterWordNegativeLoss`
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 20
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `push_to_hub`: True
- `hub_model_id`: wordnet-sense-bge-small
- `hub_private_repo`: True
- `batch_sampler`: no_duplicates
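The non-default values above map directly onto `SentenceTransformerTrainingArguments`. A sketch of how the trainer could have been configured with them (the output directory is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="wordnet-sense-bge-small",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=20,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_model_id="wordnet-sense-bge-small",
    hub_private_repo=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```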
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `project`: huggingface
- `trackio_space_id`: trackio
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: wordnet-sense-bge-small
- `hub_strategy`: every_save
- `hub_private_repo`: True
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: no
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
Training Logs
| Epoch | Step | Training Loss | Validation Loss | wordnet-validation_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------:|
| 0 | 0 | - | - | 0.9829 |
| 0.1529 | 50 | 1.6901 | - | - |
| 0.3058 | 100 | 1.5425 | - | - |
| 0.4587 | 150 | 0.6709 | - | - |
| 0.6116 | 200 | 0.536 | - | - |
| 0.7645 | 250 | 0.3458 | 0.1146 | 0.9891 |
| 0.9174 | 300 | 0.5862 | - | - |
| 1.0703 | 350 | 0.9087 | - | - |
| 1.2232 | 400 | 1.2256 | - | - |
| 1.3761 | 450 | 0.9617 | - | - |
| 1.5291 | 500 | 0.4358 | 0.0562 | 0.9862 |
| 1.6820 | 550 | 0.3726 | - | - |
| 1.8349 | 600 | 0.5553 | - | - |
| 1.9878 | 650 | 0.3993 | - | - |
| 2.1407 | 700 | 1.0044 | - | - |
| 2.2936 | 750 | 0.9938 | 0.0310 | 0.9881 |
| 2.4465 | 800 | 0.6444 | - | - |
| 2.5994 | 850 | 0.3577 | - | - |
| 2.7523 | 900 | 0.4088 | - | - |
| 2.9052 | 950 | 0.4236 | - | - |
| **3.0581** | **1000** | **0.5856** | **0.0339** | **0.9863** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.17
- Sentence Transformers: 5.1.2
- Transformers: 4.57.3
- PyTorch: 2.9.1+cu128
- Accelerate: 1.12.0
- Datasets: 4.4.1
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```