---
language:
- "en"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- 'macroeconomics'
- 'automated summary evaluation'
- 'content'
license: "apache-2.0"
metrics:
- 'mse'
---

# Content Model

This is a Longformer model with a regression head designed to predict the Content score of a summary.
By default, Longformer assigns global attention only to the classification token, with a sliding attention window that moves across the rest of the text.
This model, however, is trained to assign global attention to the entire summary with a reduced sliding window.
When performing inference with this model, you should assign a custom global attention mask as follows:

```python
import torch

def inference(summary, source, model, tokenizer):
    # Concatenate the summary and the source with the separator token
    combined = summary + tokenizer.sep_token + source
    context = tokenizer(combined)
    # Global attention on everything up to and including the first separator
    sep_index = context['input_ids'].index(tokenizer.sep_token_id)
    context['global_attention_mask'] = [1] * (sep_index + 1) + [0] * (len(context['input_ids']) - (sep_index + 1))
    # Convert to batched tensors for the model
    inputs = {key: torch.tensor([value]) for key, value in context.items()}
    return float(model(**inputs)['logits'][0][0])
```
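The mask construction above can be checked in isolation. A minimal sketch, assuming the Longformer default separator token id of 2; the function name and toy token ids are illustrative, not part of the model's API:

```python
def build_global_attention_mask(input_ids, sep_token_id=2):
    """Global attention (1) on the summary up to and including the first
    separator; local sliding-window attention (0) everywhere else."""
    sep_index = input_ids.index(sep_token_id)
    return [1] * (sep_index + 1) + [0] * (len(input_ids) - sep_index - 1)

# Toy sequence: <s> summary tokens </s> source tokens </s>
mask = build_global_attention_mask([0, 11, 12, 2, 21, 22, 2])
print(mask)  # → [1, 1, 1, 1, 0, 0, 0]
```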

## Corpus

The model was trained on a corpus of 4,233 summaries of 101 sources compiled by Botarleanu et al. (2022).
The summaries were graded by expert raters on six criteria: Details, Main Point, Cohesion, Paraphrasing, Objective Language, and Language Beyond the Text.
A principal component analysis was used to reduce the dimensionality of the outcome variables to two:
* **Content** includes Details, Main Point, Paraphrasing, and Cohesion
* **Wording** includes Objective Language and Language Beyond the Text

## Score

This model predicts the Content score. The model that predicts the Wording score can be found [here](https://huggingface.co/tiedaar/longformer-wording-global).
The following diagram illustrates the model architecture:

![model diagram](model_diagram.png)

When providing input to the model, the summary and the source should be concatenated with the separator token `</s>`.
This gives the model access to both the summary and the source, yielding more accurate scores. The model achieved an R² of 0.82 on the test set of summaries.

![content scatter](content_scatter.png)
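The reported R² is the standard coefficient of determination. A minimal sketch of how it is computed; the toy values are illustrative:

```python
import numpy as np

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(round(r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]), 2))  # → 0.97
```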
48
+
49
+ ## Contact
50
+ For questions or comments about this model, please contact [wesley.g.morris@vanderbilt.edu](wesley.g.morris@vanderbilt.edu).
51
  ---