Wesley Morris committed
Commit 4c0c633 · verified · 1 parent: 69e36dd

Update README.md

Files changed (1): README.md (+16 -6)
README.md CHANGED
@@ -8,28 +8,38 @@ model-index:
  results: []
  ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # grammar_checkpoints
 
- This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
+ This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on a dataset consisting of 4,620 summaries
+ scored on an analytic rubric by expert raters. This model predicts the raw score for Language Beyond the Source. The rubric is as follows:
+
+ LANGUAGE BEYOND THE SOURCE
+ - 1 Point: Summary shows a very basic understanding of lexical and syntactic structures.
+ - 2 Points: Summary shows an understanding of lexical and syntactic structures.
+ - 3 Points: Summary shows an appropriate range of lexical and syntactic structures.
+ - 4 Points: Summary shows an excellent range of lexical and syntactic structures.
+
  It achieves the following results on the evaluation set:
  - Loss: 0.1817
  - Mse: 0.1817
  - Rmse: 0.4263
 
+ On a set of summaries of sources that were withheld from the training set, the model achieved the following results:
+ - Rmse: 0.4220
+ - R2: 0.6236
+
  ## Model description
 
  More information needed
 
  ## Intended uses & limitations
 
- More information needed
+ This model is intended to provide feedback to users of iTELL, a framework for generating intelligent educational texts. More information about iTELL can be
+ found in this [iTELL video](https://www.youtube.com/watch?v=YZXVQjSDZtI).
 
  ## Training and evaluation data
 
- More information needed
+ Seventy summaries had Language Beyond the Source scores below 1, which is outside the rubric; these were removed from the training and test sets.
 
  ## Training procedure
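A quick arithmetic check on the metrics reported in this diff: Loss and Mse coincide, which is consistent with an MSE training objective, and the reported Rmse is simply the square root of the Mse. A minimal stdlib sketch, nothing model-specific:

```python
import math

# Reported evaluation metric from the card
mse = 0.1817

# RMSE is by definition the square root of MSE
rmse = math.sqrt(mse)

print(round(rmse, 4))  # 0.4263, matching the reported Rmse
```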
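Because the model regresses the raw Language Beyond the Source score, downstream feedback code presumably needs to map continuous predictions back onto the 1 to 4 rubric. A minimal sketch; the round-and-clamp scheme and the `to_rubric_band` helper are assumptions, not something the model card specifies:

```python
def to_rubric_band(raw_score: float) -> int:
    """Map a raw regression output onto the 1-4 rubric scale.

    Hypothetical helper: the model card does not say how continuous
    predictions are discretized, so this simply rounds and clamps.
    """
    return max(1, min(4, round(raw_score)))

# Example predictions straddling the rubric boundaries
print(to_rubric_band(2.61))  # 3: "appropriate range of lexical and syntactic structures"
print(to_rubric_band(0.35))  # 1: clamped up to the minimum rubric point
print(to_rubric_band(4.80))  # 4: clamped down to the maximum rubric point
```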