SentenceTransformer based on google/electra-large-discriminator
This is a sentence-transformers model finetuned from google/electra-large-discriminator on the PiC/phrase_similarity dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: google/electra-large-discriminator
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: PiC/phrase_similarity
- Language: en
Model Sources
- Documentation: https://sbert.net
- Repository: https://github.com/UKPLab/sentence-transformers
- Hugging Face: https://huggingface.co/models?library=sentence-transformers
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ElectraModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
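The Pooling module above mean-pools the token embeddings (respecting the attention mask) into a single 1024-dimensional vector. As a rough illustration of what that computes, here is a minimal sketch using plain transformers. It assumes the "Deehan1866/Electra" repo exposes the underlying ELECTRA weights in a way AutoModel can load, which is typical for Sentence Transformers checkpoints but not guaranteed.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the repo's transformer weights load directly via AutoModel.
tokenizer = AutoTokenizer.from_pretrained("Deehan1866/Electra")
encoder = AutoModel.from_pretrained("Deehan1866/Electra")

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average the token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

inputs = tokenizer(["An example phrase."], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    output = encoder(**inputs)
embedding = mean_pool(output.last_hidden_state, inputs["attention_mask"])
print(embedding.shape)  # torch.Size([1, 1024])
```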
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("Deehan1866/Electra")
# Run inference
sentences = [
    "She wants to write about Keima but suffers a major case of writer's block.",
    "She wants to write about Keima but suffers a huge occurrence of writer's block.",
    "specific medical status of movement and the general condition of movement both are conditions under which contradictions can move.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores between all pairs of embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# (3, 3)
```
Evaluation
Metrics
Binary Classification
| Metric                       | Value    |
|:-----------------------------|:---------|
| cosine_accuracy              | 0.748    |
| cosine_accuracy_threshold    | 0.9737   |
| cosine_f1                    | 0.7605   |
| cosine_f1_threshold          | 0.9575   |
| cosine_precision             | 0.712    |
| cosine_recall                | 0.816    |
| cosine_ap                    | 0.7869   |
| dot_accuracy                 | 0.667    |
| dot_accuracy_threshold       | 275.4552 |
| dot_f1                       | 0.7332   |
| dot_f1_threshold             | 266.1473 |
| dot_precision                | 0.601    |
| dot_recall                   | 0.94     |
| dot_ap                       | 0.5935   |
| manhattan_accuracy           | 0.746    |
| manhattan_accuracy_threshold | 87.7386  |
| manhattan_f1                 | 0.7615   |
| manhattan_f1_threshold       | 131.4337 |
| manhattan_precision          | 0.7034   |
| manhattan_recall             | 0.83     |
| manhattan_ap                 | 0.7905   |
| euclidean_accuracy           | 0.747    |
| euclidean_accuracy_threshold | 4.5834   |
| euclidean_f1                 | 0.761    |
| euclidean_f1_threshold       | 5.554    |
| euclidean_precision          | 0.716    |
| euclidean_recall             | 0.812    |
| euclidean_ap                 | 0.7898   |
| max_accuracy                 | 0.748    |
| max_accuracy_threshold       | 275.4552 |
| max_f1                       | 0.7615   |
| max_f1_threshold             | 266.1473 |
| max_precision                | 0.716    |
| max_recall                   | 0.94     |
| max_ap                       | 0.7905   |
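In practice, the cosine thresholds above can turn similarity scores into a paraphrase decision. The sketch below uses the reported cosine_f1_threshold (0.9575) as the cut-off; treat the exact value as specific to this evaluation set rather than universal.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Deehan1866/Electra")

def is_paraphrase(sentence1: str, sentence2: str, threshold: float = 0.9575) -> bool:
    # Threshold taken from the cosine_f1_threshold row above; tune it for your data.
    embeddings = model.encode([sentence1, sentence2])
    score = model.similarity(embeddings[0:1], embeddings[1:2]).item()
    return score >= threshold

print(is_paraphrase(
    "She wants to write about Keima but suffers a major case of writer's block.",
    "She wants to write about Keima but suffers a huge occurrence of writer's block.",
))
```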
Training Details
Training Dataset
PiC/phrase_similarity
- Dataset: PiC/phrase_similarity at fc67ce7
- Size: 7,004 training samples
- Columns: sentence1, sentence2, and label
- Approximate statistics based on the first 1000 samples:

|         | sentence1                                          | sentence2                                          | label |
|:--------|:---------------------------------------------------|:---------------------------------------------------|:------|
| type    | string                                             | string                                             | int   |
| details | min: 12 tokens, mean: 26.35 tokens, max: 57 tokens | min: 12 tokens, mean: 26.89 tokens, max: 58 tokens |       |

- Samples:

| sentence1 | sentence2 | label |
|:----------|:----------|:------|
| newly formed camp is released from the membrane and diffuses across the intracellular space where it serves to activate pka. | recently made encampment is released from the membrane and diffuses across the intracellular space where it serves to activate pka. | 0 |
| According to one data, in 1910, on others – in 1915, the mansion became Natalya Dmitriyevna Shchuchkina's property. | According to a particular statistic, in 1910, on others – in 1915, the mansion became Natalya Dmitriyevna Shchuchkina's property. | 1 |
| Note that Fact 1 does not assume any particular structure on the set formula_65. | Note that Fact 1 does not assume any specific edifice on the set formula_65. | 0 |

- Loss: SoftmaxLoss
Evaluation Dataset
PiC/phrase_similarity
- Dataset: PiC/phrase_similarity at fc67ce7
- Size: 1,000 evaluation samples
- Columns: sentence1, sentence2, and label
- Approximate statistics based on the first 1000 samples:

|         | sentence1                                         | sentence2                                         | label |
|:--------|:--------------------------------------------------|:--------------------------------------------------|:------|
| type    | string                                            | string                                            | int   |
| details | min: 9 tokens, mean: 26.21 tokens, max: 61 tokens | min: 10 tokens, mean: 26.8 tokens, max: 61 tokens |       |

- Samples:

| sentence1 | sentence2 | label |
|:----------|:----------|:------|
| after theo's apparent death, she decides to leave first colony and ends up traveling with the apostles. | after theo's apparent death, she decides to leave original settlement and ends up traveling with the apostles. | 0 |
| The guard assigned to Vivian leaves her to prevent the robbery, allowing her to connect to the bank's network. | The guard assigned to Vivian leaves her to prevent the robbery, allowing her to connect to the bank's locations. | 0 |
| Two days later Louis XVI banished Necker by a "lettre de cachet" for his very public exchange of pamphlets. | Two days later Louis XVI banished Necker by a "lettre de cachet" for his very free forum of pamphlets. | 0 |

- Loss: SoftmaxLoss
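To reproduce the binary-classification metrics reported above, one can run Sentence Transformers' BinaryClassificationEvaluator over this evaluation set. A minimal sketch, assuming the 1,000 evaluation samples correspond to the dataset's validation split:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("Deehan1866/Electra")

# Assumption: the evaluation samples come from the dataset's validation split.
eval_dataset = load_dataset("PiC/phrase_similarity", split="validation")

evaluator = BinaryClassificationEvaluator(
    sentences1=eval_dataset["sentence1"],
    sentences2=eval_dataset["sentence2"],
    labels=eval_dataset["label"],
    name="phrase-similarity-dev",
)
print(evaluator(model))  # accuracy/F1/AP per similarity function
```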
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 5
- warmup_ratio: 0.1
- load_best_model_at_end: True

A training run along the lines of the sketch below should approximate this setup.
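This is a sketch, not the author's exact script: the output path is hypothetical, and the column names and split handling are assumptions based on the dataset description above.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import SoftmaxLoss

# Start from the base checkpoint; Sentence Transformers wraps it with a
# mean-pooling head automatically, matching the architecture shown above.
model = SentenceTransformer("google/electra-large-discriminator")

# Assumption: the train/validation splits carry the sentence1/sentence2/label
# columns listed in the dataset description above.
dataset = load_dataset("PiC/phrase_similarity")
train_dataset = dataset["train"].select_columns(["sentence1", "sentence2", "label"])
eval_dataset = dataset["validation"].select_columns(["sentence1", "sentence2", "label"])

loss = SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),  # 1024
    num_labels=2,
)

args = SentenceTransformerTrainingArguments(
    output_dir="electra-phrase-similarity",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```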
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch      | Step    | Training Loss | Validation Loss | quora-duplicates-dev_max_ap |
|:-----------|:--------|:--------------|:----------------|:----------------------------|
| 0          | 0       | -             | -               | 0.6721                      |
| 0.2283     | 100     | -             | 0.6805          | 0.6847                      |
| **0.4566** | **200** | **-**         | **0.5313**      | **0.7905**                  |
| 0.6849     | 300     | -             | 0.5383          | 0.7838                      |
| 0.9132     | 400     | -             | 0.6442          | 0.7585                      |
| 1.1416     | 500     | 0.5761        | 0.5742          | 0.7843                      |
| 1.3699     | 600     | -             | 0.5606          | 0.7558                      |
| 1.5982     | 700     | -             | 0.5716          | 0.7772                      |
| 1.8265     | 800     | -             | 0.5573          | 0.7619                      |
| 2.0548     | 900     | -             | 0.6951          | 0.7760                      |
| 2.2831     | 1000    | 0.3712        | 0.7678          | 0.7753                      |
| 2.5114     | 1100    | -             | 0.7712          | 0.7915                      |
| 2.7397     | 1200    | -             | 0.8120          | 0.7914                      |
| 2.9680     | 1300    | -             | 0.8045          | 0.7789                      |
| 3.1963     | 1400    | -             | 0.9936          | 0.7821                      |
| 3.4247     | 1500    | 0.1942        | 1.0883          | 0.7679                      |
| 3.6530     | 1600    | -             | 0.9814          | 0.7566                      |
| 3.8813     | 1700    | -             | 1.0897          | 0.7830                      |
| 4.1096     | 1800    | -             | 1.0764          | 0.7729                      |
| 4.3379     | 1900    | -             | 1.1209          | 0.7802                      |
| 4.5662     | 2000    | 0.1175        | 1.1522          | 0.7804                      |
| 4.7945     | 2100    | -             | 1.1545          | 0.7807                      |
| 5.0        | 2190    | -             | -               | 0.7905                      |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.2.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```