|
|
--- |
|
|
library_name: sentence-transformers |
|
|
pipeline_tag: sentence-similarity |
|
|
tags: |
|
|
- sentence-transformers |
|
|
- feature-extraction |
|
|
- sentence-similarity |
|
|
language: |
|
|
- pt |
|
|
--- |
|
|
|
|
|
# mteb-pt/average_fasttext_wiki.pt.align.300 |
|
|
|
|
|
This is an adaptation of the pre-trained Portuguese fastText word embeddings to a [sentence-transformers](https://www.SBERT.net) model.
|
|
|
|
|
The original pre-trained word embeddings can be found at: [https://fasttext.cc/docs/en/aligned-vectors.html](https://fasttext.cc/docs/en/aligned-vectors.html). |
|
|
|
|
|
This model maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search.
|
|
|
|
|
## Usage (Sentence-Transformers) |
|
|
|
|
|
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
|
|
|
|
|
``` |
|
|
pip install -U sentence-transformers |
|
|
``` |
|
|
|
|
|
Then you can use the model like this: |
|
|
|
|
|
```python |
|
|
from sentence_transformers import SentenceTransformer |
|
|
sentences = ["This is an example sentence", "Each sentence is converted"] |
|
|
|
|
|
model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.align.300') |
|
|
embeddings = model.encode(sentences) |
|
|
print(embeddings) |
|
|
``` |
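
The resulting embeddings can then be compared with cosine similarity, e.g. for semantic search. Below is a minimal sketch using `util.cos_sim` from sentence-transformers; the Portuguese query and corpus sentences are illustrative and not part of the original card:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.align.300')

# Illustrative Portuguese corpus and query (hypothetical examples)
corpus = [
    "O gato dorme no sofá.",
    "A economia brasileira cresceu no último trimestre.",
    "Ele gosta de jogar futebol aos domingos.",
]
query = "Um animal descansando em casa"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.4f}  {sentence}")
```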
|
|
|
|
|
## Evaluation Results |
|
|
|
|
|
For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard) |
|
|
|
|
|
## Full Model Architecture |
|
|
``` |
|
|
SentenceTransformer( |
|
|
(0): WordEmbeddings( |
|
|
(emb_layer): Embedding(592109, 300) |
|
|
) |
|
|
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
|
|
) |
|
|
``` |
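
The architecture amounts to a lookup of the 300-dimensional fastText vector for each token, followed by mean pooling (`pooling_mode_mean_tokens: True`): the sentence embedding is the average of its token embeddings. A minimal sketch of that pooling step, assuming hypothetical pre-computed per-token vectors:

```python
import torch

# Hypothetical per-token fastText vectors for a 4-token sentence (shape: 4 x 300)
token_embeddings = torch.randn(4, 300)

# Mean pooling, as configured above: average the token vectors over the sequence
sentence_embedding = token_embeddings.mean(dim=0)
print(sentence_embedding.shape)  # torch.Size([300])
```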
|
|
|
|
|
## Citing & Authors |
|
|
|
|
|
```bibtex |
|
|
@InProceedings{joulin2018loss, |
|
|
title={Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion}, |
|
|
author={Joulin, Armand and Bojanowski, Piotr and Mikolov, Tomas and J{\'e}gou, Herv{\'e} and Grave, Edouard},
|
|
year={2018}, |
|
|
booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}, |
|
|
} |
|
|
|
|
|
@article{bojanowski2017enriching, |
|
|
title={Enriching Word Vectors with Subword Information}, |
|
|
author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas}, |
|
|
journal={Transactions of the Association for Computational Linguistics}, |
|
|
volume={5}, |
|
|
year={2017}, |
|
|
issn={2307-387X}, |
|
|
pages={135--146} |
|
|
} |
|
|
``` |