yting27 committed
Commit 1afb796 · 1 Parent(s): 5c66522

Update README.md

Files changed (1): README.md (+4 -4)
@@ -90,9 +90,9 @@ while negative examples are those that do not belong to that label.
 For example, positive examples for `joy` are all the inputs with `joy` as target, while negative examples are inputs from sadness, fear, love, anger, and surprise.
 
 `False Positive Rate` heatmap shows the proportion of negative samples that are incorrectly classified as positive.
-The `joy` class with the length between 60 and 75 is the worst (0.5) because we only have two test examples related to this group.
+The `joy` class with the length group between 60 and 75 is the worst (0.5) because we only have two test examples related to this group.
 One of the examples is classified wrongly as `joy`. The outcome makes sense because the training data with
-sequence tokens between 60 and 75 are the least. This explains why the model performs worst in this length range.
+sequence length between 60 and 75 are the least. This explains why the model performs worst in this length group.
 
 `False Negative Rate` heatmap displays the proportion of positive samples that are incorrectly classified as negative.
 You should notice darker colors (0.56) in the cells of the `surprise` class, especially for sequence length ranges 0 - 15 and 30 - 45.
@@ -102,8 +102,8 @@ For the `sadness` class with sequence length (60 - 75), the situation is the sam
 
 `False Discovery Rate` heatmap depicts the proportion of positive predictions that are incorrectly classified.
 The darker cells (0.25) belong to the `surprise` class with sequence length ranges (0 - 15) and (45 - 60).
-The other two cells of `surprise` class are better with 0.16 and 0 respectively. The same goes for the `fear` class with the darkest cell for (45 - 60) range group compared to other ranges.
-This could be explained by the number of training examples available for the concerned sequence length ranges.
+The other two cells of `surprise` class are better with 0.16 and 0 respectively. The same goes for the `fear` class with the darkest cell (0.22) for (45 - 60) range group compared to other ranges.
+This situation could be explained by the number of training examples available for the concerned sequence length ranges.
 For `surprise` class, most training examples are (15 - 30) in terms of sequence length.
 For `fear` class, most examples have sequence lengths of (0 - 15) and (15 - 30).
 Limited training examples from certain groups could produce poor results when the model is evaluated with the data from these groups.
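The heatmap metrics the README discusses follow directly from a per-class confusion count. A minimal sketch (not code from this repo; the `rates` helper and its inputs are illustrative assumptions) of how FPR, FNR, and FDR are computed for one class:

```python
def rates(y_true, y_pred, label):
    """Per-class FPR, FNR, FDR, treating `label` as the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != label and p != label)
    fpr = fp / (fp + tn) if fp + tn else 0.0  # negatives wrongly predicted as `label`
    fnr = fn / (fn + tp) if fn + tp else 0.0  # `label` samples predicted as something else
    fdr = fp / (fp + tp) if fp + tp else 0.0  # `label` predictions that are wrong
    return fpr, fnr, fdr

# Illustrative two-example group, as in the README's (60 - 75) `joy` bucket:
# with only two samples, one misclassification already yields a rate of 0.5.
y_true = ["joy", "joy", "sadness", "sadness"]
y_pred = ["joy", "sadness", "joy", "sadness"]
print(rates(y_true, y_pred, "joy"))  # (0.5, 0.5, 0.5)
```

In the heatmaps, these rates are computed separately for each (class, sequence-length range) cell, which is why sparsely populated ranges swing to extreme values.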