software-si committed
Commit e042b73 · verified · Parent: 62b7c3e

Add new CrossEncoder model
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ tags:
+ - sentence-transformers
+ - cross-encoder
+ - reranker
+ pipeline_tag: text-classification
+ library_name: sentence-transformers
+ ---
+
+ # CrossEncoder
+
+ This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model trained using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text pair classification.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Cross Encoder
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Output Labels:** 3 labels
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference:
+
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ # Download from the 🤗 Hub
+ model = CrossEncoder("software-si/kitchen-it-nli-deberta")
+ # Get scores for pairs of texts
+ pairs = [
+     ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
+     ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
+     ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
+ ]
+ scores = model.predict(pairs)
+ print(scores.shape)
+ # (3, 3)
+ ```
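+
+ The scores have shape `(num_pairs, 3)` and, since this repository's `config.json` stores an `Identity` activation, they are raw logits. A minimal sketch of mapping them to the label names from `config.json` (`id2label`), assuming the `scores` and `pairs` variables from the snippet above:
+
+ ```python
+ import torch
+
+ # Labels as defined in this repository's config.json (id2label)
+ id2label = {0: "contradiction", 1: "entailment", 2: "neutral"}
+
+ # Softmax over the 3 logits of each pair gives per-label probabilities
+ probs = torch.softmax(torch.tensor(scores), dim=-1)
+ for (query, passage), p in zip(pairs, probs):
+     label = id2label[int(p.argmax())]
+     print(f"{label:>13} ({float(p.max()):.2f}): {passage}")
+ ```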
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
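+
+ For reference, the checkpoint can also be loaded without `sentence-transformers`, since this repository's `config.json` declares a `DebertaV2ForSequenceClassification` head. A minimal sketch with plain 🤗 Transformers:
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "software-si/kitchen-it-nli-deberta"
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Tokenize premise/hypothesis pairs together, truncating to the
+ # 512-token maximum sequence length
+ features = tokenizer(
+     ["How many calories in an egg"],
+     ["Most of the calories in an egg come from the yellow yolk in the center."],
+     padding=True, truncation=True, return_tensors="pt",
+ )
+ with torch.no_grad():
+     logits = model(**features).logits  # shape: (1, 3)
+ print(logits)
+ ```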
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.12.3
+ - Sentence Transformers: 5.1.1
+ - Transformers: 4.56.2
+ - PyTorch: 2.8.0+cu128
+ - Accelerate: 1.10.1
+ - Datasets: 4.1.1
+ - Tokenizers: 0.22.1
+
+ ## Citation
+
+ ### BibTeX
+
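+ If you use this model, please consider citing the Sentence Transformers / Sentence-BERT paper, the standard citation for models trained with this library:
+
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+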
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "[MASK]": 128000
+ }
config.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "architectures": [
+     "DebertaV2ForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 1,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "contradiction",
+     "1": "entailment",
+     "2": "neutral"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "contradiction": 0,
+     "entailment": 1,
+     "neutral": 2
+   },
+   "layer_norm_eps": 1e-07,
+   "legacy": true,
+   "max_position_embeddings": 512,
+   "max_relative_positions": -1,
+   "model_type": "deberta-v2",
+   "norm_rel_ebd": "layer_norm",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_dropout": 0,
+   "pooler_hidden_act": "gelu",
+   "pooler_hidden_size": 768,
+   "pos_att_type": [
+     "p2c",
+     "c2p"
+   ],
+   "position_biased_input": false,
+   "position_buckets": 256,
+   "relative_attention": true,
+   "sentence_transformers": {
+     "activation_fn": "torch.nn.modules.linear.Identity",
+     "version": "5.1.1"
+   },
+   "share_att_key": true,
+   "transformers_version": "4.56.2",
+   "type_vocab_size": 0,
+   "vocab_size": 128100
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16028d35b81961705ee424e1da181699ac91d0490db359fda8c2ffb52e37f502
+ size 737722356
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c679fbf93643d19aab7ee10c0b99e460bdbc02fedf34b92b05af343b4af586fd
+ size 2464616
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "128000": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "sp_model_kwargs": {},
+   "split_by_punct": false,
+   "stride": 0,
+   "tokenizer_class": "DebertaV2Tokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]",
+   "vocab_type": "spm"
+ }