MihaBEST committed
Commit d797a55 · 0 Parent(s)

Duplicate from cointegrated/rubert-tiny2

Co-authored-by: David Dale <cointegrated@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,28 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
+ .idea
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 312,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
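
This pooling config tells `sentence_transformers` to build the sentence vector from the `[CLS]` token alone (mean, max, and sqrt-length pooling are all disabled) over 312-dimensional hidden states. A minimal sketch of what that pooling step computes, assuming a `last_hidden_state` tensor from any BERT-style encoder:

```python
import torch

def cls_pool(last_hidden_state: torch.Tensor) -> torch.Tensor:
    # CLS pooling: the sentence embedding is the hidden state of the
    # first token ([CLS]); shape (batch, seq_len, 312) -> (batch, 312)
    return last_hidden_state[:, 0, :]
```

This is the same `last_hidden_state[:, 0, :]` step that appears in the README example below.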
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ language:
+ - ru
+ pipeline_tag: sentence-similarity
+ tags:
+ - russian
+ - fill-mask
+ - pretraining
+ - embeddings
+ - masked-lm
+ - tiny
+ - feature-extraction
+ - sentence-similarity
+ - sentence-transformers
+ - transformers
+ license: mit
+ widget:
+ - text: Миниатюрная модель для [MASK] разных задач.
+ ---
+ This is an updated version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny): a small Russian BERT-based encoder with high-quality sentence embeddings. This [post in Russian](https://habr.com/ru/post/669674/) gives more details.
+
+ The differences from the previous version include:
+ - a larger vocabulary: 83828 tokens instead of 29564;
+ - longer supported sequences: 2048 tokens instead of 512;
+ - sentence embeddings that approximate LaBSE more closely than before;
+ - meaningful segment embeddings (tuned on the NLI task);
+ - a focus on Russian only.
+
+ The model can be used as is to produce sentence embeddings (e.g. for KNN classification of short texts) or fine-tuned for a downstream task.
+
+ Sentence embeddings can be produced as follows:
+
+ ```python
+ # pip install transformers sentencepiece
+ import torch
+ from transformers import AutoTokenizer, AutoModel
+ tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny2")
+ model = AutoModel.from_pretrained("cointegrated/rubert-tiny2")
+ # model.cuda()  # uncomment if you have a GPU
+
+ def embed_bert_cls(text, model, tokenizer):
+     t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
+     with torch.no_grad():
+         model_output = model(**{k: v.to(model.device) for k, v in t.items()})
+     embeddings = model_output.last_hidden_state[:, 0, :]  # CLS pooling
+     embeddings = torch.nn.functional.normalize(embeddings)
+     return embeddings[0].cpu().numpy()
+
+ print(embed_bert_cls('привет мир', model, tokenizer).shape)
+ # (312,)
+ ```
+
+ Alternatively, you can use the model with `sentence_transformers`:
+ ```python
+ from sentence_transformers import SentenceTransformer
+ model = SentenceTransformer('cointegrated/rubert-tiny2')
+ sentences = ["привет мир", "hello world", "здравствуй вселенная"]
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
+
+ For those who want to run inference with [vLLM](https://docs.vllm.ai/en/latest/), there is a vLLM-optimized version of this model: [WpythonW/rubert-tiny2-vllm](https://huggingface.co/WpythonW/rubert-tiny2-vllm).
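
A minimal sketch of embedding with vLLM, assuming a recent vLLM release where the pooling API (`task="embed"` and `LLM.embed`) is available; consult the vLLM docs for the exact API of your version:

```python
from vllm import LLM

# assumption: a vLLM build with embedding-task support
llm = LLM(model='WpythonW/rubert-tiny2-vllm', task='embed')
outputs = llm.embed(['привет мир', 'здравствуй вселенная'])
vectors = [out.outputs.embedding for out in outputs]  # one embedding per input
```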
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "_name_or_path": "cointegrated/rubert-tiny2",
+   "architectures": [
+     "BertForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "emb_size": 312,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 312,
+   "initializer_range": 0.02,
+   "intermediate_size": 600,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 2048,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 3,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.12.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 83828
+ }
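
The config describes a very small BERT: 3 layers, hidden size 312, 12 attention heads, an 83828-token vocabulary, and a 2048-token position window. As a sketch, the implied parameter count can be checked without downloading the weights (`AutoModel.from_config` builds a randomly initialized model); the result is consistent with the ~117 MB float32 checkpoints below, most of it in the token embedding matrix:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained('cointegrated/rubert-tiny2')
model = AutoModel.from_config(config)  # random weights, no checkpoint download
n_params = sum(p.numel() for p in model.parameters())
print(f'{n_params:,}')  # roughly 29 million; ~117 MB at 4 bytes per parameter
```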
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26ebb6db2a68593c54c74902d7a74f332da66297693f965cc9f1b0af4abf3894
+ size 117529600
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
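
`modules.json` wires up the three-stage sentence-transformers pipeline: the Transformer encoder, the CLS pooling from `1_Pooling/config.json`, and a final L2 normalization. The one-line `SentenceTransformer('cointegrated/rubert-tiny2')` call in the README assembles this automatically; as a sketch, the equivalent explicit construction looks like:

```python
from sentence_transformers import SentenceTransformer, models

word = models.Transformer('cointegrated/rubert-tiny2', max_seq_length=2048)
pooling = models.Pooling(
    word.get_word_embedding_dimension(),  # 312
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
    pooling_mode_max_tokens=False,
)
normalize = models.Normalize()
model = SentenceTransformer(modules=[word, pooling, normalize])
```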
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:137fa2b1d944dae19c74456dfe8fac2f780d9acf34e037f5d1e37acba1157768
+ size 117546024
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 2048,
+   "do_lower_case": false
+ }
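
This sets the sentence-transformers truncation limit to the model's full 2048-token window and keeps the text cased. The limit can also be lowered at runtime if long inputs are not needed; a sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('cointegrated/rubert-tiny2')
print(model.max_seq_length)  # 2048, read from sentence_bert_config.json
model.max_seq_length = 512   # truncate earlier to save memory on long batches
```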
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tinybert-ru-labse-adapter-v2.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3322adeaf437ec8005bd042a64f501458abd5ac58a2eb13f09df5ed9ba59a9af
+ size 962983
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 2048, "special_tokens_map_file": null, "name_or_path": "/gd/MyDrive/models/rubert-tiny-mlm-nli-sentence", "tokenizer_class": "BertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff