Commit d2a579d (verified) · Parent(s): 6b9823b
Shawnlight committed: Initial upload
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
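These pooling settings select masked mean pooling over the encoder's final hidden states (CLS, max, weighted-mean, and last-token pooling are all disabled). A minimal sketch of that operation in plain PyTorch, for illustration only — this helper is not part of the repository; sentence-transformers applies the pooling automatically:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean pooling, as selected by pooling_mode_mean_tokens above."""
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq, 1); 1 marks real tokens
    summed = (token_embeddings * mask).sum(dim=1)     # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)          # tokens per sequence, avoids div-by-zero
    return summed / counts                            # (batch, 768) sentence embeddings
```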
README.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ base_model: FacebookAI/roberta-base
+ datasets:
+ - SynthSTEL/styledistance_training_triplets
+ - StyleDistance/synthstel
+ language:
+ - en
+ library_name: sentence-transformers
+ license: mit
+ pipeline_tag: sentence-similarity
+ tags:
+ - datadreamer
+ - datadreamer-0.35.0
+ - synthetic
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ widget:
+ - example_title: Example 1
+   source_sentence: Did you hear about the Wales wing? He'll h8 2 withdraw due 2 injuries from future competitions.
+   sentences:
+   - We're raising funds 2 improve our school's storage facilities and add new playground equipment!
+   - Did you hear about the Wales wing? He'll hate to withdraw due to injuries from future competitions.
+ - example_title: Example 2
+   source_sentence: You planned the DesignMeets Decades of Design event; you executed it perfectly.
+   sentences:
+   - We'll find it hard to prove the thief didn't face a real threat!
+   - You orchestrated the DesignMeets Decades of Design gathering; you actualized it flawlessly.
+ - example_title: Example 3
+   source_sentence: Did the William Barr maintain a commitment to allow Robert Mueller to finish the inquiry?\
+   sentences:
+   - Will the artist be compiling a music album, or will there be a different focus in the future?
+   - Did William Barr maintain commitment to allow Robert Mueller to finish inquiry?
+ ---
+ 
+ # Model Card
+ 
+ This repository contains the model introduced in [StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples](https://huggingface.co/papers/2410.12757).
+ 
+ StyleDistance is a **style embedding model** that aims to embed texts with similar writing styles close together and texts with different styles far apart, regardless of content. You may find it useful for stylistic analysis of text, clustering, authorship identification and verification, and automatic evaluation of style transfer.
+ 
+ ## Training Data and Variants of StyleDistance
+ 
+ StyleDistance was contrastively trained on [SynthSTEL](https://huggingface.co/datasets/StyleDistance/synthstel), a synthetically generated dataset of positive and negative examples of 40 style features used in text. This synthetic dataset allows StyleDistance to achieve stronger content-independence than other style embedding models currently available. This particular model was trained on a combination of the synthetic dataset and a [real dataset that makes use of authorship data from Reddit to train style embeddings](https://aclanthology.org/2022.repl4nlp-1.26/). For a version trained purely on synthetic data, see this other version of [StyleDistance](https://huggingface.co/StyleDistance/styledistance_synthetic_only).
+ 
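+ To inspect this training data, the triplet dataset can be loaded with the standard `datasets` API. A minimal sketch — the split and column layout is an assumption, not documented in this card:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # The dataset id comes from this card's metadata; the "train" split is an assumption.
+ triplets = load_dataset("SynthSTEL/styledistance_training_triplets")
+ print(triplets)               # show available splits and columns
+ print(triplets["train"][0])   # peek at one training example
+ ```
+ 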
+ ## Example Usage
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.util import cos_sim
+ 
+ model = SentenceTransformer('StyleDistance/styledistance')  # Load model
+ 
+ # Embed a source text and two candidates: the first candidate shares the source's
+ # texting-style abbreviations; the second shares its content in standard English.
+ source = model.encode("Did you hear about the Wales wing? He'll h8 2 withdraw due 2 injuries from future competitions.")
+ others = model.encode(["We're raising funds 2 improve our school's storage facilities and add new playground equipment!", "Did you hear about the Wales wing? He'll hate to withdraw due to injuries from future competitions."])
+ print(cos_sim(source, others))  # pairwise style similarities
+ ```
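+ 
+ Because StyleDistance embeds style rather than content, the first candidate (which shares the source's texting-style abbreviations) should receive the higher cosine similarity, even though the second candidate is a near-paraphrase of the source in standard English.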
+ 
+ ---
+ ## Citation
+ 
+ ```bibtex
+ @misc{patel2025styledistancestrongercontentindependentstyle,
+   title={StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples},
+   author={Ajay Patel and Jiacheng Zhu and Justin Qiu and Zachary Horvitz and Marianna Apidianaki and Kathleen McKeown and Chris Callison-Burch},
+   year={2025},
+   eprint={2410.12757},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2410.12757},
+ }
+ ```
+ 
+ ---
+ ## Trained with DataDreamer
+ 
+ This model was trained on a synthetic dataset with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card and model card can be found [here](datadreamer.json). The training arguments can be found [here](training_args.json).
+ 
+ ---
+ #### Funding Acknowledgements
+ 
+ <small> This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. </small>
classifier.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09bf2be7c3493f675f9d3c1f954eaae76d27d64d1bd5015c360c119df3ed40e4
+ size 11524266
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.51.3",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
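This is the standard roberta-base encoder configuration (12 layers, 12 heads, hidden size 768). If you need token-level hidden states rather than pooled sentence embeddings, the encoder and tokenizer can also be loaded through the plain transformers API; a minimal sketch (the per-token states still need the mean-pooling step shown earlier):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("StyleDistance/styledistance")
encoder = AutoModel.from_pretrained("StyleDistance/styledistance")

batch = tokenizer(["A quick test sentence."], return_tensors="pt")
hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, 768), before pooling
```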
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.51.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f5151b176d1cce6f0911e572bc56dfa1ef37e2aa00aa51355a5a4ce70a9a981
+ size 498604904
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
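modules.json chains a Transformer module (weights at the repository root) into the Pooling module configured in 1_Pooling/. Loading the repo id with SentenceTransformer assembles this pipeline automatically; a sketch of hand-building the equivalent two-module pipeline with the sentence-transformers API (this recreates the architecture from the base model, not the fine-tuned weights):

```python
from sentence_transformers import SentenceTransformer, models

# Base model taken from the card metadata; max_seq_length matches sentence_bert_config.json.
word = models.Transformer("FacebookAI/roberta-base", max_seq_length=512)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pool])
print(model)  # Transformer -> Pooling, mirroring the two entries in modules.json
```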
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff