Sentence Similarity
sentence-transformers
PyTorch
Transformers
roberta
feature-extraction
text-embeddings-inference
Instructions to use AnnaWegmann/Style-Embedding with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use AnnaWegmann/Style-Embedding with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AnnaWegmann/Style-Embedding")

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]

embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Transformers
How to use AnnaWegmann/Style-Embedding with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("AnnaWegmann/Style-Embedding")
model = AutoModel.from_pretrained("AnnaWegmann/Style-Embedding")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
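The raw-Transformers snippet above stops after loading the model; it does not show how to turn per-token hidden states into a single sentence-level style vector. A minimal sketch, assuming mean pooling over non-padding tokens (an assumption here, not confirmed by this page — check the model's bundled pooling configuration before relying on it):

```python
# Sketch: a sentence-level style embedding from the raw Transformers model.
# Assumption: mean pooling over non-padding token states; verify against the
# model's own pooling config (sentence-transformers stores it with the model).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("AnnaWegmann/Style-Embedding")
model = AutoModel.from_pretrained("AnnaWegmann/Style-Embedding")

sentences = ["That is a happy person", "Today is a sunny day"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_states = model(**inputs).last_hidden_state  # (batch, seq, hidden)

# Mean-pool over the sequence dimension, masking out padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (token_states * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```

The masked division matters: averaging over the full padded sequence would dilute short sentences with padding vectors.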
Update README.md
README.md
CHANGED
````diff
@@ -127,4 +127,19 @@ SentenceTransformer(
 
 ## Citing & Authors
 
-
+```
+@inproceedings{wegmann-etal-2022-author,
+    title = "Same Author or Just Same Topic? Towards Content-Independent Style Representations",
+    author = "Wegmann, Anna  and
+      Schraagen, Marijn  and
+      Nguyen, Dong",
+    booktitle = "Proceedings of the 7th Workshop on Representation Learning for NLP",
+    month = may,
+    year = "2022",
+    address = "Dublin, Ireland",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.repl4nlp-1.26",
+    pages = "249--268",
+    abstract = "Linguistic style is an integral component of language. Recent advances in the development of style representations have increasingly used training objectives from authorship verification (AV): Do two texts have the same author? The assumption underlying the AV training task (same author approximates same writing style) enables self-supervised and, thus, extensive training. However, a good performance on the AV task does not ensure good {``}general-purpose{''} style representations. For example, as the same author might typically write about certain topics, representations trained on AV might also encode content information instead of style alone. We introduce a variation of the AV training task that controls for content using conversation or domain labels. We evaluate whether known style dimensions are represented and preferred over content information through an original variation to the recently proposed STEL framework. We find that representations trained by controlling for conversation are better than representations trained with domain or no content control at representing style independent from content.",
+}
+```
````