Delete sentencet5-xxl
- sentencet5-xxl/.gitattributes +0 -28
- sentencet5-xxl/1_Pooling/config.json +0 -7
- sentencet5-xxl/2_Dense/config.json +0 -1
- sentencet5-xxl/2_Dense/model.safetensors +0 -3
- sentencet5-xxl/2_Dense/pytorch_model.bin +0 -3
- sentencet5-xxl/README.md +0 -45
- sentencet5-xxl/config.json +0 -57
- sentencet5-xxl/config_sentence_transformers.json +0 -7
- sentencet5-xxl/model.safetensors +0 -3
- sentencet5-xxl/modules.json +0 -26
- sentencet5-xxl/pytorch_model.bin +0 -3
- sentencet5-xxl/sentence_bert_config.json +0 -4
- sentencet5-xxl/special_tokens_map.json +0 -1
- sentencet5-xxl/spiece.model +0 -3
- sentencet5-xxl/tokenizer.json +0 -0
- sentencet5-xxl/tokenizer_config.json +0 -1
sentencet5-xxl/.gitattributes
DELETED
@@ -1,28 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bin.* filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zstandard filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
-model.safetensors filter=lfs diff=lfs merge=lfs -text
sentencet5-xxl/1_Pooling/config.json
DELETED
@@ -1,7 +0,0 @@
-{
-  "word_embedding_dimension": 768,
-  "pooling_mode_cls_token": false,
-  "pooling_mode_mean_tokens": true,
-  "pooling_mode_max_tokens": false,
-  "pooling_mode_mean_sqrt_len_tokens": false
-}
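The pooling config above enables only mean-token pooling (every other mode is false). A minimal sketch of what that setting does, assuming a batch of token embeddings and an attention mask in PyTorch (the names here are illustrative, not from the repository):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Mean pooling over non-padding tokens (pooling_mode_mean_tokens = true)."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum of real-token embeddings
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    return summed / counts                           # (batch, hidden_dim)
```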
sentencet5-xxl/2_Dense/config.json
DELETED
@@ -1 +0,0 @@
-{"in_features": 1024, "out_features": 768, "bias": false, "activation_function": "torch.nn.modules.linear.Identity"}
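This single-line config describes a bias-free linear projection from the 1024-dimensional T5-XXL encoder output down to 768 dimensions, followed by an identity (no-op) activation. A minimal PyTorch equivalent, offered only to illustrate the config fields:

```python
import torch.nn as nn

# Mirrors 2_Dense/config.json: in_features=1024, out_features=768,
# bias=false, activation_function=torch.nn.modules.linear.Identity.
dense = nn.Sequential(
    nn.Linear(1024, 768, bias=False),
    nn.Identity(),
)
```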
sentencet5-xxl/2_Dense/model.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e53b7fcbca02a1e966ed3eeb433b765d30af989b6b6aec44bc6809156456cee3
-size 3145848
sentencet5-xxl/2_Dense/pytorch_model.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fb330464adabfc4f532e691fe95c62752fb6237748b4617b83fb426b0d427c04
-size 3146680
sentencet5-xxl/README.md
DELETED
@@ -1,45 +0,0 @@
----
-language: en
-license: apache-2.0
-library_name: sentence-transformers
-tags:
-- sentence-transformers
-- feature-extraction
-- sentence-similarity
-pipeline_tag: sentence-similarity
----
-
-# sentence-transformers/sentence-t5-xxl
-
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model works well for sentence similarity tasks, but doesn't perform that well for semantic search tasks.
-
-This model was converted from the Tensorflow model [st5-11b-1](https://tfhub.dev/google/sentence-t5/st5-11b/1) to PyTorch. When using this model, have a look at the publication: [Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877). The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results.
-
-The model uses only the encoder from a T5-11B model. The weights are stored in FP16.
-
-
-## Usage (Sentence-Transformers)
-
-Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
-```
-pip install -U sentence-transformers
-```
-
-Then you can use the model like this:
-
-```python
-from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
-
-model = SentenceTransformer('sentence-transformers/sentence-t5-xxl')
-embeddings = model.encode(sentences)
-print(embeddings)
-```
-
-The model requires sentence-transformers version 2.2.0 or newer.
-
-## Citing & Authors
-
-If you find this model helpful, please cite the respective publication:
-[Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models](https://arxiv.org/abs/2108.08877)
sentencet5-xxl/config.json
DELETED
@@ -1,57 +0,0 @@
-{
-  "_name_or_path": "models/sentence-t5-11b",
-  "architectures": [
-    "T5EncoderModel"
-  ],
-  "d_ff": 65536,
-  "d_kv": 128,
-  "d_model": 1024,
-  "decoder_start_token_id": 0,
-  "dropout_rate": 0.1,
-  "eos_token_id": 1,
-  "feed_forward_proj": "relu",
-  "initializer_factor": 1.0,
-  "is_encoder_decoder": true,
-  "layer_norm_epsilon": 1e-06,
-  "model_type": "t5",
-  "n_positions": 512,
-  "num_decoder_layers": 24,
-  "num_heads": 128,
-  "num_layers": 24,
-  "output_past": true,
-  "pad_token_id": 0,
-  "relative_attention_num_buckets": 32,
-  "task_specific_params": {
-    "summarization": {
-      "early_stopping": true,
-      "length_penalty": 2.0,
-      "max_length": 200,
-      "min_length": 30,
-      "no_repeat_ngram_size": 3,
-      "num_beams": 4,
-      "prefix": "summarize: "
-    },
-    "translation_en_to_de": {
-      "early_stopping": true,
-      "max_length": 300,
-      "num_beams": 4,
-      "prefix": "translate English to German: "
-    },
-    "translation_en_to_fr": {
-      "early_stopping": true,
-      "max_length": 300,
-      "num_beams": 4,
-      "prefix": "translate English to French: "
-    },
-    "translation_en_to_ro": {
-      "early_stopping": true,
-      "max_length": 300,
-      "num_beams": 4,
-      "prefix": "translate English to Romanian: "
-    }
-  },
-  "torch_dtype": "float16",
-  "transformers_version": "4.11.3",
-  "use_cache": true,
-  "vocab_size": 32128
-}
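This is a standard Hugging Face T5 configuration for the encoder-only `T5EncoderModel` architecture (24 encoder layers, d_model 1024, 128 attention heads, FP16 weights). As a sketch, a checkpoint directory with these files could be loaded directly with Transformers; the local path below is only a placeholder for wherever the files actually live:

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

# Placeholder path; point it at the checkpoint directory being removed here.
encoder = T5EncoderModel.from_pretrained("sentencet5-xxl", torch_dtype=torch.float16)
tokenizer = T5Tokenizer.from_pretrained("sentencet5-xxl")
```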
sentencet5-xxl/config_sentence_transformers.json
DELETED
@@ -1,7 +0,0 @@
-{
-  "__version__": {
-    "sentence_transformers": "2.2.0",
-    "transformers": "4.7.0",
-    "pytorch": "1.9.0+cu102"
-  }
-}
sentencet5-xxl/model.safetensors
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:87dba079720f799b443b54508c7288a1e9b3d02d42cbee4f35df401914499294
-size 9729607168
sentencet5-xxl/modules.json
DELETED
@@ -1,26 +0,0 @@
-[
-  {
-    "idx": 0,
-    "name": "0",
-    "path": "",
-    "type": "sentence_transformers.models.Transformer"
-  },
-  {
-    "idx": 1,
-    "name": "1",
-    "path": "1_Pooling",
-    "type": "sentence_transformers.models.Pooling"
-  },
-  {
-    "idx": 2,
-    "name": "2",
-    "path": "2_Dense",
-    "type": "sentence_transformers.models.Dense"
-  },
-  {
-    "idx": 3,
-    "name": "3",
-    "path": "3_Normalize",
-    "type": "sentence_transformers.models.Normalize"
-  }
-]
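modules.json fixes the order of the sentence-transformers pipeline: Transformer encoder, mean Pooling, Dense projection, then L2 Normalize. A rough end-to-end sketch of that flow in plain PyTorch, reusing the illustrative `tokenizer`, `encoder`, and `dense` objects from the sketches above (this is an approximation for readability, not the library's implementation):

```python
import torch
import torch.nn.functional as F

def embed(sentences, tokenizer, encoder, dense):
    """Approximate the four-module pipeline described by modules.json."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")   # truncation per sentence_bert_config.json below
    with torch.no_grad():
        token_emb = encoder(**batch).last_hidden_state        # 0: Transformer
        mask = batch["attention_mask"].unsqueeze(-1).float()
        pooled = (token_emb * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # 1: Pooling (mean)
        projected = dense(pooled)                              # 2: Dense (1024 -> 768)
    return F.normalize(projected, p=2, dim=1)                  # 3: Normalize (unit L2 norm)
```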
sentencet5-xxl/pytorch_model.bin
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fc73ed70fbb704fa901846c34169c2e31d8cc4d9dbcfaa29f668df164a5d0e0a
-size 9729676422
sentencet5-xxl/sentence_bert_config.json
DELETED
@@ -1,4 +0,0 @@
-{
-  "max_seq_length": 256,
-  "do_lower_case": false
-}
sentencet5-xxl/special_tokens_map.json
DELETED
@@ -1 +0,0 @@
-{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"]}
sentencet5-xxl/spiece.model
DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:d60acb128cf7b7f2536e8f38a5b18a05535c9e14c7a355904270e15b0945ea86
-size 791656
sentencet5-xxl/tokenizer.json
DELETED
The diff for this file is too large to render; see the raw diff.
sentencet5-xxl/tokenizer_config.json
DELETED
@@ -1 +0,0 @@
-{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "extra_ids": 100, "additional_special_tokens": ["<extra_id_0>", "<extra_id_1>", "<extra_id_2>", "<extra_id_3>", "<extra_id_4>", "<extra_id_5>", "<extra_id_6>", "<extra_id_7>", "<extra_id_8>", "<extra_id_9>", "<extra_id_10>", "<extra_id_11>", "<extra_id_12>", "<extra_id_13>", "<extra_id_14>", "<extra_id_15>", "<extra_id_16>", "<extra_id_17>", "<extra_id_18>", "<extra_id_19>", "<extra_id_20>", "<extra_id_21>", "<extra_id_22>", "<extra_id_23>", "<extra_id_24>", "<extra_id_25>", "<extra_id_26>", "<extra_id_27>", "<extra_id_28>", "<extra_id_29>", "<extra_id_30>", "<extra_id_31>", "<extra_id_32>", "<extra_id_33>", "<extra_id_34>", "<extra_id_35>", "<extra_id_36>", "<extra_id_37>", "<extra_id_38>", "<extra_id_39>", "<extra_id_40>", "<extra_id_41>", "<extra_id_42>", "<extra_id_43>", "<extra_id_44>", "<extra_id_45>", "<extra_id_46>", "<extra_id_47>", "<extra_id_48>", "<extra_id_49>", "<extra_id_50>", "<extra_id_51>", "<extra_id_52>", "<extra_id_53>", "<extra_id_54>", "<extra_id_55>", "<extra_id_56>", "<extra_id_57>", "<extra_id_58>", "<extra_id_59>", "<extra_id_60>", "<extra_id_61>", "<extra_id_62>", "<extra_id_63>", "<extra_id_64>", "<extra_id_65>", "<extra_id_66>", "<extra_id_67>", "<extra_id_68>", "<extra_id_69>", "<extra_id_70>", "<extra_id_71>", "<extra_id_72>", "<extra_id_73>", "<extra_id_74>", "<extra_id_75>", "<extra_id_76>", "<extra_id_77>", "<extra_id_78>", "<extra_id_79>", "<extra_id_80>", "<extra_id_81>", "<extra_id_82>", "<extra_id_83>", "<extra_id_84>", "<extra_id_85>", "<extra_id_86>", "<extra_id_87>", "<extra_id_88>", "<extra_id_89>", "<extra_id_90>", "<extra_id_91>", "<extra_id_92>", "<extra_id_93>", "<extra_id_94>", "<extra_id_95>", "<extra_id_96>", "<extra_id_97>", "<extra_id_98>", "<extra_id_99>"], "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "t5-11b", "tokenizer_class": "T5Tokenizer"}