|
|
--- |
|
|
tags: |
|
|
- sentence-transformers |
|
|
- sentence-similarity |
|
|
- feature-extraction |
|
|
base_model: insuperabile/rumodernbert-solyanka-QP |
|
|
datasets: |
|
|
- insuperabile/processed_ru_hnp |
|
|
pipeline_tag: sentence-similarity |
|
|
library_name: sentence-transformers |
|
|
--- |
|
|
|
|
|
# SentenceTransformer based on insuperabile/rumodernbert-solyanka-QP |
|
|
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [insuperabile/rumodernbert-solyanka-QP](https://huggingface.co/insuperabile/rumodernbert-solyanka-QP) on the [processed_ru_hnp](https://huggingface.co/datasets/insuperabile/processed_ru_hnp) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. |
|
|
|
|
|
## Model Details |
|
|
|
|
|
### Model Description |
|
|
- **Model Type:** Sentence Transformer |
|
|
- **Base model:** [insuperabile/rumodernbert-solyanka-QP](https://huggingface.co/insuperabile/rumodernbert-solyanka-QP) <!-- at revision 8a70982db6ef5414dc4b8fa750d4edc642635c8c --> |
|
|
- **Maximum Sequence Length:** 512 tokens |
|
|
- **Output Dimensionality:** 768 dimensions |
|
|
- **Similarity Function:** Cosine Similarity (see the quick check below)
|
|
- **Training Dataset:** |
|
|
- [processed_ru_hnp](https://huggingface.co/datasets/insuperabile/processed_ru_hnp) |
|
|
<!-- - **Language:** Unknown --> |
|
|
<!-- - **License:** Unknown --> |
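
A quick way to confirm the sequence length, dimensionality, and similarity function listed above, as a minimal sketch (the repository ID is a placeholder, as in the usage example below):

```python
from sentence_transformers import SentenceTransformer

# Placeholder ID: replace with this model's actual Hub repository ID.
model = SentenceTransformer("sentence_transformers_model_id")

print(model.get_max_seq_length())                # 512
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # "cosine"
```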
|
|
|
|
|
|
|
|
## Usage |
|
|
|
|
|
### Direct Usage (Sentence Transformers) |
|
|
|
|
|
First install the Sentence Transformers library: |
|
|
|
|
|
```bash |
|
|
pip install -U sentence-transformers |
|
|
``` |
|
|
|
|
|
Then you can load this model and run inference. |
|
|
```python |
|
|
from sentence_transformers import SentenceTransformer |
|
|
|
|
|
# Download from the 🤗 Hub |
|
|
model = SentenceTransformer("sentence_transformers_model_id")  # placeholder: replace with this model's Hub repository ID
|
|
# Run inference |
|
|
sentences = [ |
|
|
'query: В Ижевске участились случаи телефонного мошенничества', |
|
|
'passage: В Ижевске участились случаи мошенничества с помощью рассылки СМС, либо звонков по телефону, передает пресс-служба ГУ МВД по Удмуртской Республике. В этих случаях злоумышленник сообщает: «Ваша банковская карта заблокирована» и что с нее «пытаются снять деньги».\nЧтобы избежать потери денежных средств, собеседник убеждает потерпевших сообщить ему информацию о своей карте: номер счета, пин-код, либо просит перевести деньги со своей карты на указанный им счет. Для убедительности злоумышленник может представиться «работником банка» или «сотрудником полиции», но сами правоохранители советуют не доверять незнакомцам.\nПолицейские рекомендуют гражданам не перезванивать по указанным в сообщениях номерам, не переходить по неизвестным ссылкам в интернете и не перечислять деньги по просьбам неизвестных лиц. Только это может стать гарантией сохранности денежных средств.', |
|
|
'passage: Суди по своим потребностям и образу жизни. По цене новой PS4 можно купить очень хороший горный велосипед, но ты можешь просто поднакопить и купить и то и то. Только велик придётся брать дешёвый.', |
|
|
] |
|
|
embeddings = model.encode(sentences) |
|
|
print(embeddings.shape) |
|
|
# (3, 768)
|
|
|
|
|
# Get the similarity scores for the embeddings |
|
|
similarities = model.similarity(embeddings, embeddings) |
|
|
print(similarities.shape) |
|
|
# torch.Size([3, 3])
|
|
``` |
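
The example sentences above use E5-style `query: ` and `passage: ` prefixes, so the same prefixes should most likely be added at inference time for retrieval. A minimal semantic-search sketch under that assumption (the corpus, query, and repository ID are placeholders for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

# Illustrative corpus; the "passage: " prefix mirrors the training examples above.
corpus = [
    "passage: В Ижевске участились случаи мошенничества с помощью рассылки СМС и звонков по телефону.",
    "passage: По цене новой PS4 можно купить очень хороший горный велосипед.",
]
query = "query: телефонное мошенничество в Ижевске"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus passages by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])
```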
|
|
|
|
|
<!-- |
|
|
### Direct Usage (Transformers) |
|
|
|
|
|
<details><summary>Click to see the direct usage in Transformers</summary> |
|
|
|
|
|
</details> |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
### Downstream Usage (Sentence Transformers) |
|
|
|
|
|
You can finetune this model on your own dataset. |
|
|
|
|
|
<details><summary>Click to expand</summary> |
|
|
|
|
|
</details> |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
### Out-of-Scope Use |
|
|
|
|
|
*List how the model may foreseeably be misused and address what users ought not to do with the model.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
## Bias, Risks and Limitations |
|
|
|
|
|
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
### Recommendations |
|
|
|
|
|
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
|
|
--> |
|
|
|
|
|
## Encodechka

Scores on the Encodechka benchmark for Russian sentence encoders (higher is better):
|
|
| Model | Model Parameters | STS | PI | NLI | SA | TI | IC | ICX | NEI1 | NEI2 | AVG | |
|
|
|---------------------------------------|------------------|------|------|------|------|------|------|------|------|------|------| |
|
|
| insuperabile/rumodernbert-solyanka | 149M | 0.8 | 0.56 | 0.4 | 0.76 | 0.98 | 0.73 | 0.67 | 0.33 | 0.36 | 0.62 | |
|
|
| insuperabile/SimBERT_RU | 149M | 0.79 | 0.73 | 0.51 | 0.80 | 0.98 | 0.78 | 0.74 | 0.28 | 0.37 | 0.66 | |
|
|
| insuperabile/rumodernbert-solyanka-QP | 149M | 0.81 | 0.65 | 0.4 | 0.81 | 0.98 | 0.79 | 0.74 | 0.35 | 0.41 | 0.66 | |
|
|
| deepvk/USER-base | 124M | 0.85 | 0.74 | 0.48 | 0.81 | 0.99 | 0.8 | 0.7 | 0.29 | 0.41 | 0.68 | |
|
|
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 0.84 | 0.62 | 0.5 | 0.76 | 0.92 | 0.77 | 0.72 | - | - | - | |
|
|
| intfloat/multilingual-e5-small | 118M | 0.82 | 0.71 | 0.46 | 0.76 | 0.96 | 0.78 | 0.69 | 0.23 | 0.27 | 0.63 | |
|
|
|
|
|
|
|
|
|
|
|
## RuMTEB

Scores on ruMTEB (the Russian subset of the Massive Text Embedding Benchmark) tasks, higher is better:
|
|
| Model | Avg | CEDRClass | GeoreviewClassification | GeoreviewClustering | HeadlineClassif | InappClassif | Kinopoisk | RiaRetrieval | RuBQReranking | RuBQRetrieval | RuReviewsClass | RuSTSBench | RSBGClassif | RSBGCluster | RSBOClassif | RSBOCluster | SensitiveClassif | TERRa |
|
|
|:--------------------------------------|--------:|------------:|--------------------------:|----------------------:|------------------:|---------------:|------------:|---------------:|----------------:|----------------:|-----------------:|-------------:|--------------:|--------------:|--------------:|--------------:|-------------------:|--------:| |
|
|
| rumodernbert-solyanka | 53.2006 | 38.34 | 33.79 | 66.68 | 79.36 | 60.71 | 44.78 | 50.67 | 63.57 | 53.58 | 51.05 | 80.07 | 52.31 | 51.17 | 41.01 | 45.21 | 41.39 | 50.72 | |
|
|
| SimBERT_RU | 50.5552 | 45.58 | 42.63 | 51.52 | 55.80 | 58.28 | 53.08 | 68.08 | 61.40 | 53.58 | 42.78 | 79.79 | 46.35 | 44.06 | 35.21 | 38.76 | 22.58 | 59.96 | |
|
|
| rumodernbert-solyanka-qp | 56.5847 | 39.44 | 37.72 | 71.23 | 73.85 | 59.97 | 50.37 | 73.09 | 68.07 | 62.65 | 56.59 | 81.64 | 56.04 | 53.40 | 44.48 | 46.80 | 32.82 | 53.78 | |
|
|
| user-base | 57.6429 | 46.78 | 46.88 | 63.41 | 75 | 61.83 | 56.03 | 77.72 | 64.42 | 56.86 | 65.48 | 81.91 | 55.55 | 51.5 | 43.28 | 44.87 | 28.65 | 59.76 | |
|
|
| paraphrase-multilingual-MiniLM-L12-v2 | 48.8794 | 37.76 | 38.24 | 53.37 | 68.3 | 58.18 | 41.45 | 44.82 | 52.8 | 29.7 | 58.88 | 79.55 | 53.19 | 48.22 | 41.41 | 41.68 | 24.84 | 58.56 | |
|
|
| multilingual-e5-small | 55.3024 | 40.39 | 42.3 | 61.56 | 73.74 | 58.44 | 47.57 | 70 | 71.46 | 68.53 | 60.64 | 77.72 | 53.59 | 49.34 | 40.35 | 42.62 | 24.38 | 57.51 | |
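
The ruMTEB numbers above can be reproduced (up to task-version differences) with the `mteb` library. A rough sketch, assuming `mteb` is installed and using a few of the tasks from the table; the MTEB task names may differ slightly from the abbreviated column headers:

```python
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

# A small subset of the Russian tasks reported above.
tasks = mteb.get_tasks(tasks=["GeoreviewClassification", "RuBQRetrieval", "TERRa"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/rumteb")
```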
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Training Hyperparameters

### All Hyperparameters
|
|
<details><summary>Click to expand</summary> |
|
|
|
|
|
- `overwrite_output_dir`: False |
|
|
- `do_predict`: False |
|
|
- `eval_strategy`: no |
|
|
- `prediction_loss_only`: True |
|
|
- `per_device_train_batch_size`: 64 |
|
|
- `per_device_eval_batch_size`: 8 |
|
|
- `per_gpu_train_batch_size`: None |
|
|
- `per_gpu_eval_batch_size`: None |
|
|
- `gradient_accumulation_steps`: 1 |
|
|
- `eval_accumulation_steps`: None |
|
|
- `torch_empty_cache_steps`: None |
|
|
- `learning_rate`: 2e-06 |
|
|
- `weight_decay`: 0.0 |
|
|
- `adam_beta1`: 0.9 |
|
|
- `adam_beta2`: 0.999 |
|
|
- `adam_epsilon`: 1e-08 |
|
|
- `max_grad_norm`: 1.0 |
|
|
- `num_train_epochs`: 1 |
|
|
- `max_steps`: -1 |
|
|
- `lr_scheduler_type`: linear |
|
|
- `lr_scheduler_kwargs`: {} |
|
|
- `warmup_ratio`: 0.0 |
|
|
- `warmup_steps`: 0 |
|
|
- `log_level`: passive |
|
|
- `log_level_replica`: warning |
|
|
- `log_on_each_node`: True |
|
|
- `logging_nan_inf_filter`: True |
|
|
- `save_safetensors`: True |
|
|
- `save_on_each_node`: False |
|
|
- `save_only_model`: False |
|
|
- `restore_callback_states_from_checkpoint`: False |
|
|
- `no_cuda`: False |
|
|
- `use_cpu`: False |
|
|
- `use_mps_device`: False |
|
|
- `seed`: 42 |
|
|
- `data_seed`: None |
|
|
- `jit_mode_eval`: False |
|
|
- `use_ipex`: False |
|
|
- `bf16`: True |
|
|
- `fp16`: False |
|
|
- `fp16_opt_level`: O1 |
|
|
- `half_precision_backend`: auto |
|
|
- `bf16_full_eval`: False |
|
|
- `fp16_full_eval`: False |
|
|
- `tf32`: None |
|
|
- `local_rank`: 0 |
|
|
- `ddp_backend`: None |
|
|
- `tpu_num_cores`: None |
|
|
- `tpu_metrics_debug`: False |
|
|
- `debug`: [] |
|
|
- `dataloader_drop_last`: False |
|
|
- `dataloader_num_workers`: 0 |
|
|
- `dataloader_prefetch_factor`: None |
|
|
- `past_index`: -1 |
|
|
- `disable_tqdm`: False |
|
|
- `remove_unused_columns`: True |
|
|
- `label_names`: None |
|
|
- `load_best_model_at_end`: False |
|
|
- `ignore_data_skip`: False |
|
|
- `fsdp`: [] |
|
|
- `fsdp_min_num_params`: 0 |
|
|
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} |
|
|
- `fsdp_transformer_layer_cls_to_wrap`: None |
|
|
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} |
|
|
- `deepspeed`: None |
|
|
- `label_smoothing_factor`: 0.0 |
|
|
- `optim`: adamw_torch |
|
|
- `optim_args`: None |
|
|
- `adafactor`: False |
|
|
- `group_by_length`: False |
|
|
- `length_column_name`: length |
|
|
- `ddp_find_unused_parameters`: None |
|
|
- `ddp_bucket_cap_mb`: None |
|
|
- `ddp_broadcast_buffers`: False |
|
|
- `dataloader_pin_memory`: True |
|
|
- `dataloader_persistent_workers`: False |
|
|
- `skip_memory_metrics`: True |
|
|
- `use_legacy_prediction_loop`: False |
|
|
- `push_to_hub`: False |
|
|
- `resume_from_checkpoint`: None |
|
|
- `hub_model_id`: None |
|
|
- `hub_strategy`: every_save |
|
|
- `hub_private_repo`: None |
|
|
- `hub_always_push`: False |
|
|
- `gradient_checkpointing`: False |
|
|
- `gradient_checkpointing_kwargs`: None |
|
|
- `include_inputs_for_metrics`: False |
|
|
- `include_for_metrics`: [] |
|
|
- `eval_do_concat_batches`: True |
|
|
- `fp16_backend`: auto |
|
|
- `push_to_hub_model_id`: None |
|
|
- `push_to_hub_organization`: None |
|
|
- `mp_parameters`: |
|
|
- `auto_find_batch_size`: False |
|
|
- `full_determinism`: False |
|
|
- `torchdynamo`: None |
|
|
- `ray_scope`: last |
|
|
- `ddp_timeout`: 1800 |
|
|
- `torch_compile`: False |
|
|
- `torch_compile_backend`: None |
|
|
- `torch_compile_mode`: None |
|
|
- `include_tokens_per_second`: False |
|
|
- `include_num_input_tokens_seen`: False |
|
|
- `neftune_noise_alpha`: None |
|
|
- `optim_target_modules`: None |
|
|
- `batch_eval_metrics`: False |
|
|
- `eval_on_start`: False |
|
|
- `use_liger_kernel`: False |
|
|
- `eval_use_gather_object`: False |
|
|
- `average_tokens_across_devices`: False |
|
|
- `prompts`: None |
|
|
- `batch_sampler`: batch_sampler |
|
|
- `multi_dataset_batch_sampler`: proportional |
|
|
|
|
|
</details> |
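
For reference, here is a rough sketch of how a comparable fine-tuning run could be set up with the non-default values above (batch size 64, learning rate 2e-6, bf16, one epoch). The loss function and the dataset column layout are assumptions and are not stated in this card:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("insuperabile/rumodernbert-solyanka-QP")
train_dataset = load_dataset("insuperabile/processed_ru_hnp", split="train")

# Assumed loss for (anchor, positive, negative) style columns; adjust to the actual dataset schema.
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="rumodernbert-solyanka-hnp",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=64,
    learning_rate=2e-6,
    warmup_ratio=0.0,
    bf16=True,
    logging_steps=100,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```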
|
|
|
|
|
|
|
|
|
|
|
<!-- |
|
|
## Glossary |
|
|
|
|
|
*Clearly define terms in order to be accessible across audiences.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
## Model Card Authors |
|
|
|
|
|
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
|
|
--> |
|
|
|
|
|
<!-- |
|
|
## Model Card Contact |
|
|
|
|
|
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
|
|
--> |