Add new SentenceTransformer model.
Files changed:
- 0_WordEmbeddings/pytorch_model.bin (+3 -0)
- 0_WordEmbeddings/whitespacetokenizer_config.json (+0 -0)
- 0_WordEmbeddings/wordembedding_config.json (+5 -0)
- 1_Pooling/config.json (+10 -0)
- README.md (+65 -0)
- config_sentence_transformers.json (+9 -0)
- modules.json (+14 -0)
0_WordEmbeddings/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3abbc03165d917294d7ccda9be32a5d5287aaf9939a728a8a94579be5d1dffa2
+size 710532090
0_WordEmbeddings/whitespacetokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff.
0_WordEmbeddings/wordembedding_config.json ADDED
@@ -0,0 +1,5 @@
+{
+  "tokenizer_class": "sentence_transformers.models.tokenizer.WhitespaceTokenizer.WhitespaceTokenizer",
+  "update_embeddings": false,
+  "max_seq_length": 1000000
+}
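For intuition, here is a toy Python sketch of what whitespace tokenization against a fixed vocabulary amounts to. The `vocab` dictionary and the function are invented for illustration only and are not the WhitespaceTokenizer implementation; the real vocabulary is stored next to whitespacetokenizer_config.json.

```python
# Illustrative sketch: split on whitespace and map each surface token to a
# vocabulary id, truncating at max_seq_length. The vocab below is made up.
vocab = {"esta": 0, "é": 1, "uma": 2, "frase": 3}

def whitespace_tokenize(text: str, max_seq_length: int = 1_000_000) -> list[int]:
    tokens = text.lower().split()[:max_seq_length]
    # Tokens missing from the vocabulary are simply skipped in this sketch.
    return [vocab[t] for t in tokens if t in vocab]

print(whitespace_tokenize("Esta é uma frase"))  # [0, 1, 2, 3]
```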
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+{
+  "word_embedding_dimension": 300,
+  "pooling_mode_cls_token": false,
+  "pooling_mode_mean_tokens": true,
+  "pooling_mode_max_tokens": false,
+  "pooling_mode_mean_sqrt_len_tokens": false,
+  "pooling_mode_weightedmean_tokens": false,
+  "pooling_mode_lasttoken": false,
+  "include_prompt": true
+}
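With only `pooling_mode_mean_tokens` enabled, the sentence embedding is the unweighted average of the 300-dimensional word vectors of the non-padding tokens. Below is a minimal illustrative sketch of that operation in PyTorch; the function name, tensor shapes, and toy inputs are invented for the example and are not the library's internal code.

```python
import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over the sequence dimension, ignoring padding.

    token_embeddings: (batch, seq_len, 300)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)   # (batch, 300)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # avoid division by zero
    return summed / counts                          # (batch, 300)

# Toy example: one sentence, 4 token positions (last one is padding), 300-dim vectors.
emb = torch.randn(1, 4, 300)
mask = torch.tensor([[1, 1, 1, 0]])
print(mean_pooling(emb, mask).shape)  # torch.Size([1, 300])
```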
README.md ADDED
@@ -0,0 +1,65 @@
+---
+library_name: sentence-transformers
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+language:
+- pt
+---
+
+# mteb-pt/average_fasttext_wiki.pt.300
+
+This is an adaptation of pre-trained Portuguese fastText word embeddings to a [sentence-transformers](https://www.SBERT.net) model.
+
+The original pre-trained word embeddings can be found at: [https://fasttext.cc/docs/en/pretrained-vectors.html](https://fasttext.cc/docs/en/pretrained-vectors.html).
+
+This model maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+## Usage (Sentence-Transformers)
+
+Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.300')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
+
+## Evaluation Results
+
+For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): WordEmbeddings(
+    (emb_layer): Embedding(592109, 300)
+  )
+  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+)
+```
+
+## Citing & Authors
+
+```bibtex
+@article{bojanowski2017enriching,
+    title={Enriching Word Vectors with Subword Information},
+    author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
+    journal={Transactions of the Association for Computational Linguistics},
+    volume={5},
+    year={2017},
+    issn={2307-387X},
+    pages={135--146}
+}
+```
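Since the README positions the model for clustering and semantic search, a short follow-up sketch of scoring sentence similarity with the produced embeddings may help. It uses `sentence_transformers.util.cos_sim`; the Portuguese query and corpus strings are invented for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative semantic-search sketch; the corpus and query are made up.
model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.300')

corpus = ["O gato dorme no sofá", "A economia cresceu neste trimestre"]
query = "Onde o gato está dormindo?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence.
scores = util.cos_sim(query_emb, corpus_emb)   # shape (1, len(corpus))
best = scores.argmax().item()
print(corpus[best], scores[0][best].item())
```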
config_sentence_transformers.json ADDED
@@ -0,0 +1,9 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.6.1",
+    "transformers": "4.39.0.dev0",
+    "pytorch": "2.2.2"
+  },
+  "prompts": {},
+  "default_prompt_name": null
+}
modules.json ADDED
@@ -0,0 +1,14 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "0_WordEmbeddings",
+    "type": "sentence_transformers.models.WordEmbeddings"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  }
+]
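modules.json chains the 0_WordEmbeddings module into 1_Pooling. For context, here is a hedged sketch of how an equivalent two-module pipeline can be assembled programmatically with sentence-transformers' `models.WordEmbeddings` and `models.Pooling`; the vectors file path is a placeholder, and this is a sketch of the general pattern, not the exact script used to build this repository.

```python
from sentence_transformers import SentenceTransformer, models

# Assumption: 'wiki.pt.vec' is a placeholder path to the pre-trained
# Portuguese fastText vectors in word2vec text format.
word_embeddings = models.WordEmbeddings.from_text_file('wiki.pt.vec')

# Mean pooling over the 300-dimensional word vectors, matching 1_Pooling/config.json.
pooling = models.Pooling(
    word_embeddings.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

# Module order mirrors modules.json: 0_WordEmbeddings -> 1_Pooling.
model = SentenceTransformer(modules=[word_embeddings, pooling])
print(model.encode(["uma frase de exemplo"]).shape)  # (1, 300)
```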