yjoonjang committed
Commit 2e1540a (verified) · Parent(s): 1805bb7

Add new SparseqSentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
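
This pooling config enables only mean pooling (`pooling_mode_mean_tokens: true`), so a sentence embedding is the attention-mask-weighted average of the 1024-dimensional token embeddings. A minimal sketch of that operation, using random stand-in tensors rather than real transformer outputs:

```python
import torch

# Hypothetical token embeddings and attention mask standing in for a
# transformer's outputs: (batch, seq_len, hidden) and (batch, seq_len).
token_embeddings = torch.randn(2, 7, 1024)
attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0, 0],
                               [1, 1, 1, 1, 1, 1, 1]])

mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
summed = (token_embeddings * mask).sum(dim=1)  # zero out padding, sum tokens
counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per text
sentence_embeddings = summed / counts          # masked mean: (batch, 1024)
print(sentence_embeddings.shape)               # torch.Size([2, 1024])
```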
2_Dense/config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "in_features": 1024,
+   "out_features": 512,
+   "bias": true,
+   "activation_function": "torch.nn.modules.activation.GELU"
+ }
2_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:648edbd537dae24a35171c85a5b96b6908fb9d070e92c8b4bdb191c686ea1c5a
+ size 2099360
3_Dense/config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "in_features": 512,
+   "out_features": 1024,
+   "bias": true,
+   "activation_function": "torch.nn.modules.linear.Identity"
+ }
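
Together, 2_Dense and 3_Dense form a bottleneck on top of the pooled output: 1024 → 512 with GELU, then 512 → 1024 with no activation. A sketch of the implied computation in plain PyTorch, with randomly initialized weights rather than the checkpoint's:

```python
import torch
from torch import nn

# Random-weight stand-ins for the two Dense modules in this repo.
down = nn.Sequential(nn.Linear(1024, 512, bias=True), nn.GELU())  # 2_Dense
up = nn.Linear(512, 1024, bias=True)                              # 3_Dense (Identity activation)

pooled = torch.randn(2, 1024)  # hypothetical mean-pooled sentence embeddings
out = up(down(pooled))         # final 1024-dim embeddings
print(out.shape)               # torch.Size([2, 1024])
```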
3_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:131fc72025c5dd896fb6bcb5c1db387cf054d772da688bc006a0ed38161f9949
+ size 2101408
README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ tags:
+ - Sentence Transformers
+ - sentence-similarity
+ - sentence-transformers
+ language:
+ - en
+ license: mit
+ ---
+ # E5-large-unsupervised
+
+ **This model is similar to [e5-large](https://huggingface.co/intfloat/e5-large) but without supervised fine-tuning.**
+
+ [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
+ Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
+
+ This model has 24 layers and an embedding size of 1024.
+
+ ## Usage
+
+ Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.
+
+ ```python
+ import torch.nn.functional as F
+
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+
+
+ def average_pool(last_hidden_states: Tensor,
+                  attention_mask: Tensor) -> Tensor:
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
+
+
+ # Each input text should start with "query: " or "passage: ".
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
+ input_texts = ['query: how much protein should a female eat',
+                'query: summit define',
+                "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
+                "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
+
+ tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-unsupervised')
+ model = AutoModel.from_pretrained('intfloat/e5-large-unsupervised')
+
+ # Tokenize the input texts
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
+
+ outputs = model(**batch_dict)
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+
+ # Normalize embeddings so the dot products below are cosine similarities
+ embeddings = F.normalize(embeddings, p=2, dim=1)
+ scores = (embeddings[:2] @ embeddings[2:].T) * 100
+ print(scores.tolist())
+ ```
+
+ ## Training Details
+
+ Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
+
+ ## Benchmark Evaluation
+
+ Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
+ on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
+
+ ## Support for Sentence Transformers
+
+ Below is an example of usage with sentence_transformers.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ model = SentenceTransformer('intfloat/e5-large-unsupervised')
+ input_texts = [
+     'query: how much protein should a female eat',
+     'query: summit define',
+     "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
+     "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
+ ]
+ embeddings = model.encode(input_texts, normalize_embeddings=True)
+ ```
+
+ Package requirements:
+
+ `pip install sentence_transformers~=2.2.2`
+
+ Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
+
+ ## FAQ
+
+ **1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
+
+ Yes, this is how the model is trained; otherwise you will see a performance degradation.
+
+ Here are some rules of thumb:
+ - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
+
+ - Use the "query: " prefix for symmetric tasks such as semantic similarity and paraphrase retrieval.
+
+ - Use the "query: " prefix if you want to use embeddings as features, e.g. for linear probing classification or clustering.
+
+ **2. Why are my reproduced results slightly different from those reported in the model card?**
+
+ Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
+
+ ## Citation
+
+ If you find our paper or models helpful, please consider citing as follows:
+
+ ```
+ @article{wang2022text,
+   title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
+   author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
+   journal={arXiv preprint arXiv:2212.03533},
+   year={2022}
+ }
+ ```
+
+ ## Limitations
+
+ This model only works for English texts. Long texts will be truncated to at most 512 tokens.
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.56.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
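
This backbone config describes a standard BERT-large encoder (24 layers, 16 heads, hidden size 1024), matching the "24 layers, embedding size 1024" claim in the README. A sketch confirming the key dimensions by constructing the same config with `transformers`:

```python
from transformers import BertConfig

# Rebuild the salient fields of config.json; remaining fields take their
# BERT defaults, which coincide with the values in this file.
config = BertConfig(
    hidden_size=1024,
    num_hidden_layers=24,
    num_attention_heads=16,
    intermediate_size=4096,
    max_position_embeddings=512,
    vocab_size=30522,
)
print(config.hidden_size, config.num_hidden_layers)  # 1024 24
```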
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SentenceTransformer",
+   "__version__": {
+     "sentence_transformers": "5.1.0",
+     "transformers": "4.56.0",
+     "pytorch": "2.8.0+cu128"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
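
The `prompts` map registers named prompts that `SentenceTransformer.encode` can prepend to inputs via `prompt_name`; here both "query" and "document" are empty strings, so selecting them adds no prefix. A minimal sketch of the mechanism, with a placeholder model id standing in for this repository:

```python
from sentence_transformers import SentenceTransformer

# "user/model-id" is a hypothetical placeholder; substitute this repo's id.
model = SentenceTransformer("user/model-id")

# prompt_name looks up the "prompts" map in config_sentence_transformers.json.
# With an empty-string prompt this is a no-op, but a non-empty prompt would be
# prepended to every input text before tokenization.
emb = model.encode(["how much protein should a female eat"], prompt_name="query")
print(emb.shape)  # (1, 1024), given the 3_Dense output dimension
```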
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:067431516e94d3a77c205cc723f93d32c126cc08e87d7532a97632b014a77f68
+ size 1340612432
modules.json ADDED
@@ -0,0 +1,26 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Dense",
+     "type": "sentence_transformers.models.Dense"
+   },
+   {
+     "idx": 3,
+     "name": "3",
+     "path": "3_Dense",
+     "type": "sentence_transformers.models.Dense"
+   }
+ ]
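
modules.json defines the module pipeline that `SentenceTransformer` applies in order: Transformer → Pooling → Dense → Dense. A sketch of the equivalent structure built by hand with randomly initialized weights ("bert-large-uncased" is an assumed base checkpoint with matching dimensions, not necessarily the one used here); loading the repo id directly restores the same structure with the trained weights:

```python
from torch import nn
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("bert-large-uncased", max_seq_length=512)  # idx 0
pooling = models.Pooling(transformer.get_word_embedding_dimension(),
                         pooling_mode="mean")                               # idx 1 (1_Pooling)
dense_down = models.Dense(1024, 512, activation_function=nn.GELU())         # idx 2 (2_Dense)
dense_up = models.Dense(512, 1024, activation_function=nn.Identity())       # idx 3 (3_Dense)

model = SentenceTransformer(modules=[transformer, pooling, dense_down, dense_up])
print(model)  # shows the four-stage pipeline
```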
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render.