elderberry17 committed
Commit d35338d · verified · 1 Parent(s): 2fe0231

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +2 -2
  2. sentence_bert_config.json +1 -1
  3. tokenizer_config.json +1 -1
README.md CHANGED
@@ -17,7 +17,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [c
 ### Model Description
 - **Model Type:** Sentence Transformer
 - **Base model:** [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) <!-- at revision 5441c5ea8026d4f6d7505ec004845409f1259fb1 -->
-- **Maximum Sequence Length:** 256 tokens
+- **Maximum Sequence Length:** 512 tokens
 - **Output Dimensionality:** 312 tokens
 - **Similarity Function:** Cosine Similarity
 <!-- - **Training Dataset:** Unknown -->
@@ -34,7 +34,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [c
 
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
   (1): Pooling({'word_embedding_dimension': 312, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
 )
 ```
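The README change above reflects a raised truncation limit: inputs longer than `max_seq_length` are cut off before the transformer sees them. A minimal pure-Python sketch of that effect, using hypothetical token IDs rather than the real rubert-tiny tokenizer:

```python
# Why max_seq_length matters: token IDs beyond the limit are discarded
# before encoding. The IDs below are hypothetical, not real vocabulary.

def truncate_ids(token_ids, max_seq_length):
    """Keep at most max_seq_length token IDs, as truncation would."""
    return token_ids[:max_seq_length]

ids = list(range(600))          # a hypothetical 600-token input
old = truncate_ids(ids, 256)    # old limit: 344 tokens discarded
new = truncate_ids(ids, 512)    # new limit: only 88 tokens discarded
print(len(old), len(new))
```

With the new 512-token limit, longer texts contribute more of their content to the resulting embedding instead of being silently truncated at 256 tokens.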
sentence_bert_config.json CHANGED
@@ -1,4 +1,4 @@
 {
-  "max_seq_length": 256,
+  "max_seq_length": 512,
   "do_lower_case": false
 }
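A quick sketch of reading a `sentence_bert_config.json` like the one in this commit; the file contents are inlined here for the example:

```python
import json

# Parse the (post-change) config and check the updated value.
config_text = '{\n  "max_seq_length": 512,\n  "do_lower_case": false\n}'
config = json.loads(config_text)
print(config["max_seq_length"])   # 512
print(config["do_lower_case"])    # False
```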
tokenizer_config.json CHANGED
@@ -47,7 +47,7 @@
   "do_lower_case": false,
   "mask_token": "[MASK]",
   "max_length": 512,
-  "model_max_length": 256,
+  "model_max_length": 512,
   "never_split": null,
   "pad_to_multiple_of": null,
   "pad_token": "[PAD]",
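After this change, the tokenizer's `model_max_length` agrees with its `max_length` (both 512). A small sketch of the kind of consistency check this commit makes pass, using an inlined fragment of the config:

```python
# Fragment of the post-change tokenizer_config.json as a Python dict.
tokenizer_cfg = {
    "do_lower_case": False,
    "mask_token": "[MASK]",
    "max_length": 512,
    "model_max_length": 512,
    "pad_token": "[PAD]",
}

# Before the commit, model_max_length was 256 and this check would fail.
assert tokenizer_cfg["model_max_length"] == tokenizer_cfg["max_length"]
print("consistent at", tokenizer_cfg["model_max_length"], "tokens")
```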