ighina committed
Commit 9d3d5c4 · 1 Parent(s): 3a3108a

Upload 13 files

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "word_embedding_dimension": 768,
+     "pooling_mode_cls_token": true,
+     "pooling_mode_mean_tokens": false,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false
+ }
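These flags configure sentence-transformers' pooling layer: only `pooling_mode_cls_token` is enabled, so the 768-dimensional embedding of the first token becomes the sentence vector. As a minimal sketch (not part of the commit; file path as in this repo), the module can be rebuilt straight from this dict:

```python
import json
from sentence_transformers import models

# Read the pooling settings shipped in 1_Pooling/config.json
with open('1_Pooling/config.json') as f:
    cfg = json.load(f)

# Every key in the file is a keyword argument of models.Pooling
pooling = models.Pooling(**cfg)
print(pooling.get_pooling_mode_str())  # 'cls'
```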
README.md CHANGED
@@ -1,3 +1,122 @@
  ---
- license: gpl-3.0
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
  ---
+
+ # {MODEL_NAME}
+
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ <!--- Describe your model here -->
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+
+ # Encode each sentence into a 768-dimensional vector
+ model = SentenceTransformer('{MODEL_NAME}')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
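Because the card tags the model for sentence similarity, a natural next step is to score the two example sentences against each other. A minimal sketch (editorial, not part of the committed card) using sentence-transformers' `util.cos_sim`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)

# Cosine similarity of the two sentence vectors; closer to 1 = more similar
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```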
+
+ ## Usage (HuggingFace Transformers)
+
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+
+
+ # CLS pooling: use the embedding of the first token as the sentence vector.
+ # attention_mask is unused here but kept for symmetry with other pooling fns.
+ def cls_pooling(model_output, attention_mask):
+     return model_output[0][:, 0]
+
+
+ # Sentences we want sentence embeddings for
+ sentences = ['This is an example sentence', 'Each sentence is converted']
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
+ model = AutoModel.from_pretrained('{MODEL_NAME}')
+
+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+
+ # Perform pooling. In this case, CLS pooling.
+ sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
+
+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```
+
+ ## Evaluation Results
+
+ <!--- Describe how your model was evaluated -->
+
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+ ## Training
+
+ The model was trained with the following parameters:
+
+ **DataLoader**:
+
+ `torch.utils.data.dataloader.DataLoader` of length 11254 with parameters:
+ ```
+ {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+ ```
+
+ **Loss**:
+
+ `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
+
+ Parameters of the fit() method:
+ ```
+ {
+     "epochs": 10,
+     "evaluation_steps": 0,
+     "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
+     "max_grad_norm": 1,
+     "optimizer_class": "<class 'transformers.optimization.AdamW'>",
+     "optimizer_params": {
+         "lr": 2e-05
+     },
+     "scheduler": "WarmupLinear",
+     "steps_per_epoch": null,
+     "warmup_steps": 10000,
+     "weight_decay": 0.01
+ }
+ ```
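Read together, these settings translate almost one-to-one into a sentence-transformers training loop. A minimal sketch (editorial), with hypothetical labeled pairs in place of the real dataset and the BinaryClassificationEvaluator omitted for brevity:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical training pairs; the real DataLoader had 11254 batches of 64
train_examples = [
    InputExample(texts=['sentence a', 'sentence b'], label=1.0),
    InputExample(texts=['sentence c', 'sentence d'], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

model = SentenceTransformer('{MODEL_NAME}')
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler='WarmupLinear',
    warmup_steps=10000,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```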
+
+ ## Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+ )
+ ```
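Loading the model and printing it should reproduce this summary, since a `SentenceTransformer`'s repr lists its module stack (a sketch, assuming `{MODEL_NAME}` resolves to this repo):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('{MODEL_NAME}')
print(model)  # Transformer (RobertaModel, max_seq_length=128) + CLS Pooling
```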
+
+ ## Citing & Authors
+
+ <!--- Describe where people can find more information -->
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+     "_name_or_path": "roberta-base",
+     "architectures": [
+         "RobertaModel"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "bos_token_id": 0,
+     "classifier_dropout": null,
+     "eos_token_id": 2,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 768,
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "layer_norm_eps": 1e-05,
+     "max_position_embeddings": 514,
+     "model_type": "roberta",
+     "num_attention_heads": 12,
+     "num_hidden_layers": 12,
+     "pad_token_id": 1,
+     "position_embedding_type": "absolute",
+     "torch_dtype": "float32",
+     "transformers_version": "4.18.0",
+     "type_vocab_size": 1,
+     "use_cache": true,
+     "vocab_size": 50265
+ }
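This is a stock roberta-base encoder configuration (12 layers, 12 heads, hidden size 768). A quick sanity check (a sketch, with `{MODEL_NAME}` standing in for this repo's id as in the card above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('{MODEL_NAME}')

# These values mirror the config.json above
assert config.model_type == 'roberta'
assert config.hidden_size == 768
assert config.num_hidden_layers == 12
```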
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "__version__": {
+         "sentence_transformers": "2.2.0",
+         "transformers": "4.18.0",
+         "pytorch": "1.11.0"
+     }
+ }
eval/binary_classification_evaluation_Valid_Topic_Boundaries_results.csv ADDED
@@ -0,0 +1,11 @@
+ epoch,steps,cossim_accuracy,cossim_accuracy_threshold,cossim_f1,cossim_precision,cossim_recall,cossim_f1_threshold,cossim_ap,manhatten_accuracy,manhatten_accuracy_threshold,manhatten_f1,manhatten_precision,manhatten_recall,manhatten_f1_threshold,manhatten_ap,euclidean_accuracy,euclidean_accuracy_threshold,euclidean_f1,euclidean_precision,euclidean_recall,euclidean_f1_threshold,euclidean_ap,dot_accuracy,dot_accuracy_threshold,dot_f1,dot_precision,dot_recall,dot_f1_threshold,dot_ap
+ 0,-1,0.9351431798170055,0.03253564238548279,0.9642184656768374,0.9441936616808528,0.9851110608167092,0.0018059015274047852,0.9825107572102127,0.9300084223421768,426.22076416015625,0.9615344154682881,0.9390892932364817,0.9850787287012189,426.22076416015625,0.9743905779006213,0.9292140423414111,19.793060302734375,0.9610708092903809,0.9390770021853708,0.9841195426083396,19.841121673583984,0.9742268613318887,0.9350618276482524,0.2832818031311035,0.9642135687800968,0.944179320318149,0.9851164495026243,0.2832818031311035,0.979057264064191
+ 1,-1,0.9361385475288082,0.03975662589073181,0.9645548304501762,0.9510430906700607,0.978456033711619,0.03975662589073181,0.9846523652120203,0.9300132077638682,361.8736572265625,0.961585524023131,0.9358268861020304,0.9888023106685204,369.8770751953125,0.9761997037497853,0.9279315493281268,17.78215217590332,0.9606661164450109,0.9321142208391198,0.9910224492655221,17.78215217590332,0.9758988423759445,0.9358609930707094,2.2912721633911133,0.9644921115622148,0.947114710328241,0.9825191028915689,0.30298882722854614,0.9802788577068278
+ 2,-1,0.9346263542743386,-0.05199187994003296,0.9639316388253031,0.9449577852665145,0.9836830590492203,-0.05199187994003296,0.9839727722012924,0.9316785345124612,391.32452392578125,0.9623966691860205,0.9411340540707587,0.9846422451420996,392.30291748046875,0.9759730653032622,0.9315349718617204,18.330825805664062,0.9623455847908585,0.940480844409695,0.9852511666505006,18.35348892211914,0.9758775258856502,0.9349804754794993,-6.6156768798828125,0.9641292897932028,0.9450940202791882,0.9839471046590579,-6.619906425476074,0.9770134030881863
+ 3,-1,0.9338032617434249,-0.0330546498298645,0.9634424118581093,0.942085558468043,0.9857900352420059,-0.08770674467086792,0.9837350185443772,0.9295681635465717,425.2204895019531,0.9611940770739733,0.9410370783384444,0.9822335025380711,425.2204895019531,0.9762963113112479,0.9301663412579917,19.98107147216797,0.9615585154287339,0.9402694518262159,0.9838339422548418,20.068241119384766,0.9764071041956293,0.9342339496956472,-4.741467475891113,0.9636745880385438,0.9438734646845143,0.9843243126731115,-10.52090835571289,0.9744901321491117
+ 4,-1,0.9321905746334367,-0.06932997703552246,0.9625697807926306,0.9433548380421231,0.9825837671225495,-0.08262169361114502,0.9819873292936037,0.9257828949887064,448.29962158203125,0.9589935723369883,0.9414177070625795,0.9772381906948172,448.3228759765625,0.9765104069094425,0.9264384977604226,21.610397338867188,0.9595439915998452,0.9377169926807565,0.9824113291732678,21.64745330810547,0.9766631053546229,0.9326547605374985,-10.607145309448242,0.9627849045227725,0.9452744068667923,0.9809563839762035,-10.607145309448242,0.968841205976341
+ 5,-1,0.9304869645113127,-0.10840314626693726,0.9616936440879099,0.9416831928606841,0.9825729897507194,-0.10840314626693726,0.9805273889384319,0.9218109949848781,465.69500732421875,0.9567788851120255,0.9347823851777214,0.9798355373058726,482.79522705078125,0.976460857976747,0.9238400137820145,22.011795043945312,0.9579792443814367,0.9390368242631266,0.9777016176835117,22.273025512695312,0.9769150602665715,0.9311090693311894,-20.66373062133789,0.9620382649227662,0.9417898197998235,0.9831765225732053,-21.344547271728516,0.9650896668450528
+ 6,-1,0.9297595804142261,-0.11526083946228027,0.9613351306380783,0.9398570943198081,0.9838177761970965,-0.13755464553833008,0.9790599440737546,0.9192747214884576,488.8060302734375,0.9553003846978678,0.9390136629798308,0.9721620485628375,491.7649841308594,0.9774856396582972,0.9220981202863596,23.230091094970703,0.9569578134284017,0.9383068963910078,0.9763652235765786,23.55148696899414,0.978206139654004,0.930290762221967,-23.9328670501709,0.9616385514270754,0.9396459075739578,0.9846853546294201,-31.707096099853516,0.9614765968099016
+ 7,-1,0.9265054936641017,-0.1835535764694214,0.9596003087473156,0.935659430677862,0.9847985170336362,-0.20787841081619263,0.9778827096346218,0.9082203973814172,485.916259765625,0.9486163466407324,0.9433015233619463,0.9539913996572795,485.94525146484375,0.9781362005466554,0.9162168370276789,24.623783111572266,0.9540399345335515,0.9279728580088741,0.98161380365784,25.75216293334961,0.9792439970989311,0.9269505378813981,-34.71306610107422,0.9597177657688085,0.9402953433780415,0.9799594770819188,-34.979286193847656,0.9600584162141225
+ 8,-1,0.9271563110141265,-0.1436033844947815,0.9599189004160302,0.9385858315397052,0.9822442799099012,-0.16852247714996338,0.9764946890547797,0.9060238888250832,477.47314453125,0.9472032909522305,0.9185215934230043,0.977733949799002,540.2987060546875,0.9772292219322706,0.9165709582328395,25.156723022460938,0.9541444075211375,0.9315636005215766,0.9778471122032181,25.505550384521484,0.9784687360123002,0.9270223192067685,-35.64582061767578,0.9598441366963298,0.9384182943278111,0.9822712233394765,-40.78765106201172,0.9574834411448182
+ 9,-1,0.9263284330615214,-0.20765244960784912,0.9595969403151529,0.935210806553084,0.985288887451906,-0.21522092819213867,0.9757302294286048,0.9024443933999464,540.8711547851562,0.9466823931856401,0.9196930816331715,0.9753036524513132,541.1339111328125,0.9767848559763449,0.9157047969067034,25.84136962890625,0.9537284354083524,0.927667573432772,0.9812958711888519,26.455978393554688,0.9781532611800835,0.9264672102905708,-33.364227294921875,0.9594177955632516,0.9403348821806666,0.9792912800284522,-35.31359100341797,0.9564924835658134
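This file is the per-epoch output of the BinaryClassificationEvaluator named in the README's fit() parameters: accuracy, F1, precision, recall, thresholds and average precision for four score functions (cosine, Manhattan, Euclidean, dot; `steps = -1` means end of epoch). A sketch of picking the best epoch by cosine-similarity AP, assuming pandas is available:

```python
import pandas as pd

df = pd.read_csv(
    'eval/binary_classification_evaluation_Valid_Topic_Boundaries_results.csv'
)

# cossim_ap peaks at epoch 1 (~0.985) and decays slowly afterwards
best = df.loc[df['cossim_ap'].idxmax()]
print(int(best['epoch']), best['cossim_ap'])
```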
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     }
+ ]
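modules.json wires the two modules together: module 0 is the Transformer at the repo root, module 1 the pooling layer stored in `1_Pooling`. A hedged sketch of assembling the same stack by hand with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the RoBERTa encoder, truncating inputs at 128 tokens
word_embedding_model = models.Transformer('roberta-base', max_seq_length=128)

# Module 1: CLS pooling over the 768-dim token embeddings
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode='cls',
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```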
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:045debd05bcec8a094491b57027d14d34bc8d581e54c5b7ace90c4026f6afaac
+ size 498652017
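This is a Git LFS pointer, not the weights themselves; the real file is ~499 MB, identified by the sha256 and size above. One way to fetch just that file (a sketch, with `{MODEL_NAME}` standing in for this repo's id) via huggingface_hub:

```python
from huggingface_hub import hf_hub_download

# Downloads and caches the actual binary that the LFS pointer references
path = hf_hub_download(repo_id='{MODEL_NAME}', filename='pytorch_model.bin')
print(path)
```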
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 128,
+     "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "roberta-base", "tokenizer_class": "RobertaTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff