# SentenceTransformer
This is a sentence-transformers model trained on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:** json
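Cosine similarity, the similarity function listed above, compares embedding vectors by angle rather than magnitude. A minimal dependency-free sketch of the computation (the function name is illustrative, not part of the library):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical directions score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # 1.0 (same direction)
```

In practice the library computes this for you via `model.similarity(...)`; the sketch only shows what the score means.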
### Model Sources
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
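The `Pooling` module above uses mean-token pooling (`pooling_mode_mean_tokens: True`): token embeddings are averaged over the non-padding positions to produce one sentence vector. A minimal pure-Python sketch of that step (the function name and list-based representation are illustrative):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors where attention_mask is 1 (non-padding positions).

    token_embeddings: list of per-token vectors, shape [seq_len][dim]
    attention_mask:   list of 0/1 flags, shape [seq_len]
    """
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, x in enumerate(vec):
                total[i] += x
    return [t / max(count, 1) for t in total]

# The padded third token (mask 0) is excluded from the average.
print(mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0]))  # [2.0, 3.0]
```

The real module does this with batched tensor operations, but the arithmetic is the same.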
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hub
model = SentenceTransformer("LequeuISIR/final-DPR-8e-05")

# Run inference
sentences = [
    "This incites social hatred, threatens economic and social stability, and undermines trust in the authorities.",
    "The conditions for a healthy entrepreneurship, where the most innovative and creative win and where the source of enrichment cannot be property speculation or guilds and networks.",
    "As a result, the profits of the oligarchs are more than 400 times what our entire country gets from the exploitation of natural resources.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)

# Get the similarity scores between all sentence pairs
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3)
```
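Once the pairwise similarity matrix is computed, a common follow-up for semantic search or paraphrase mining is to find each sentence's nearest neighbor. A minimal sketch over any precomputed similarity matrix (plain nested lists assumed here; the function name is illustrative):

```python
def most_similar(similarities):
    """For each row i of a square similarity matrix, return the index j != i
    with the highest similarity score."""
    best = []
    for i, row in enumerate(similarities):
        j = max((k for k in range(len(row)) if k != i), key=lambda k: row[k])
        best.append(j)
    return best

# Sentences 0 and 2 are each other's nearest neighbors in this toy matrix.
sim = [
    [1.0, 0.2, 0.9],
    [0.2, 1.0, 0.1],
    [0.9, 0.1, 1.0],
]
print(most_similar(sim))  # [2, 0, 0]
```

With the model above, `model.similarity(embeddings, embeddings)` returns a tensor you could pass to such a helper after `.tolist()`.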
## Training Details

### Training Dataset

#### json

### Evaluation Dataset

#### json

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 8e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
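The non-default values above map directly onto `SentenceTransformerTrainingArguments`. A sketch of how they might be set, assuming Sentence Transformers v3+ (the `output_dir` value is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",          # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=8e-05,
    num_train_epochs=5,
    warmup_ratio=0.05,
    bf16=True,                     # requires bf16-capable hardware
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```

`BatchSamplers.NO_DUPLICATES` ensures no in-batch duplicates, which matters for losses that treat other batch members as negatives.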
#### All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 8e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
### Training Logs

| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0837 | 500 | 0.7889 | 9.5828 |
| 0.1673 | 1000 | 1.2158 | 9.3274 |
| 0.2510 | 1500 | 1.8215 | 9.4274 |
| 0.3346 | 2000 | 2.3548 | 8.2583 |
| 0.4183 | 2500 | 2.7493 | 8.1446 |
| 0.5019 | 3000 | 2.8998 | 7.9046 |
| 0.5856 | 3500 | 2.9298 | 8.0640 |
| 0.6692 | 4000 | 2.9053 | 7.2746 |
| 0.7529 | 4500 | 3.0905 | 7.5099 |
| 0.8365 | 5000 | 3.1864 | 7.3883 |
| 0.9202 | 5500 | 3.2322 | 6.9968 |
| 1.0038 | 6000 | 3.1194 | 7.4682 |
| 1.0875 | 6500 | 3.0122 | 7.7295 |
| 1.1712 | 7000 | 3.0453 | 7.1696 |
| 1.2548 | 7500 | 2.9439 | 7.2775 |
| 1.3385 | 8000 | 3.1108 | 7.4838 |
| 1.4221 | 8500 | 2.8512 | 7.5204 |
| 1.5058 | 9000 | 2.9865 | 7.4528 |
| 1.5894 | 9500 | 2.9995 | 8.0682 |
| 1.6731 | 10000 | 3.1073 | 7.5344 |
| 1.7567 | 10500 | 3.0631 | 7.4572 |
| 1.8404 | 11000 | 2.9915 | 7.4961 |
| 1.9240 | 11500 | 3.0445 | 7.3575 |
| 2.0077 | 12000 | 2.9501 | 7.9786 |
| 2.0914 | 12500 | 2.3377 | 8.6208 |
| 2.1750 | 13000 | 2.2833 | 8.8356 |
| 2.2587 | 13500 | 2.2785 | 8.8709 |
| 2.3423 | 14000 | 2.3012 | 8.6250 |
| 2.4260 | 14500 | 2.3488 | 8.1099 |
| 2.5096 | 15000 | 2.095 | 9.2305 |
| 2.5933 | 15500 | 2.4123 | 8.6405 |
| 2.6769 | 16000 | 2.2236 | 8.7805 |
| 2.7606 | 16500 | 2.3367 | 8.7110 |
| 2.8442 | 17000 | 2.1159 | 8.6447 |
| 2.9279 | 17500 | 2.1622 | 8.7123 |
| 3.0115 | 18000 | 2.1916 | 9.0314 |
| 3.0952 | 18500 | 1.604 | 9.3373 |
| 3.1789 | 19000 | 1.4116 | 9.6509 |
| 3.2625 | 19500 | 1.4036 | 9.9127 |
| 3.3462 | 20000 | 1.5392 | 9.8093 |
| 3.4298 | 20500 | 1.5791 | 9.8325 |
| 3.5135 | 21000 | 1.5343 | 9.7822 |
| 3.5971 | 21500 | 1.3913 | 9.6243 |
| 3.6808 | 22000 | 1.5151 | 9.9644 |
| 3.7644 | 22500 | 1.3922 | 9.7816 |
| 3.8481 | 23000 | 1.3361 | 9.5338 |
| 3.9317 | 23500 | 1.3363 | 9.8282 |
| 4.0154 | 24000 | 1.2234 | 10.2117 |
| 4.0990 | 24500 | 0.5927 | 10.4107 |
| 4.1827 | 25000 | 0.6879 | 10.4405 |
| 4.2664 | 25500 | 0.6832 | 10.5138 |
| 4.3500 | 26000 | 0.6514 | 10.2798 |
| 4.4337 | 26500 | 0.7396 | 10.3250 |
| 4.5173 | 27000 | 0.6813 | 10.4115 |
| 4.6010 | 27500 | 0.765 | 10.1365 |
| 4.6846 | 28000 | 0.5915 | 10.2402 |
| 4.7683 | 28500 | 0.5028 | 10.3197 |
| 4.8519 | 29000 | 0.5306 | 10.3270 |
| 4.9356 | 29500 | 0.5886 | 10.3543 |
## Framework Versions
- Python: 3.9.21
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title = {CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author = {Su Jianlin},
    year = {2022},
    month = {Jan},
    url = {https://kexue.fm/archives/8847},
}
```