jenslemmens committed
Commit 23463e5 · 1 Parent(s): ffd5e25

Update README.md

Files changed (1): README.md (+10 −10)
# RePublic

### Model description
RePublic (<u>re</u>putation analyzer for <u>public</u> agencies) is a Dutch BERT model based on BERTje (De Vries, 2019). The model was designed to predict the sentiment in Dutch-language news article text about public agencies. RePublic was developed in co-operation with [Jan Boon](https://www.uantwerpen.be/en/staff/jan-boon/).

### How to use
The model can be loaded and used to make predictions as follows:
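A minimal usage sketch with the 🤗 `transformers` pipeline. The Hub identifier and the example sentence are illustrative assumptions; the integer label mapping (0 = neutral, 1 = positive, 2 = negative) is the one used by the classifier:

```python
from transformers import pipeline

# Placeholder Hub identifier -- replace with the actual path of the RePublic model.
MODEL_NAME = "jenslemmens/RePublic"

# Label ids used by the fine-tuned head: 0=neutral, 1=positive, 2=negative.
ID2SENTIMENT = {"0": "neutral", "1": "positive", "2": "negative"}

def predict_sentiment(text: str) -> str:
    """Classify one Dutch sentence and return a readable sentiment label."""
    classifier = pipeline("text-classification", model=MODEL_NAME)
    output = classifier(text)
    label = output[0]["label"]                        # e.g. "1" or "LABEL_1"
    return ID2SENTIMENT.get(label.split("_")[-1], label)

# Example call (Dutch: "The agency handled the complaint quickly and correctly."):
# prediction = predict_sentiment("Het agentschap handelde de klacht snel en correct af.")
```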
 
### Training and data procedure
RePublic was domain-adapted on 91,661 Flemish news articles from three popular Flemish news providers (“Het Laatste Nieuws”, “Het Nieuwsblad” and “De Morgen”) that mention public agencies. This was done by performing BERT’s language modeling tasks (masked language modeling & next sentence prediction).

The model was then fine-tuned on a sentiment classification task (“positive”, “negative”, “neutral”). The supervised data consisted of 4,404 annotated sentences mentioning Flemish public agencies, of which 1,257 were positive, 1,485 negative and 1,662 neutral. Fine-tuning was performed for 4 epochs using a batch size of 8 and a learning rate of 5e-5.
 
 
### Evaluation
The model was evaluated by performing 10-fold cross-validation on the annotated data described above. During cross-validation, the optimal number of epochs (4), batch size (8) and learning rate (5e-5) were determined. The standard deviation of the macro-averaged F1-scores across the cross-validation folds is 1.5%. The detailed per-class results of the cross-validation experiments are given below:

| **Class** | **Precision (%)** | **Recall (%)** | **F1-score (%)** |
|:---:|:---:|:---:|:---:|
| _Positive_ | 87.3 | 88.6 | 88.0 |
| _Negative_ | 86.4 | 86.5 | 86.5 |
| _Neutral_ | 85.3 | 84.2 | 84.7 |
| _Macro-averaged_ | 86.3 | 86.4 | 86.4 |
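The evaluation setup above can be sketched with scikit-learn. The seed and the use of `StratifiedKFold` are assumptions (the card only states 10-fold cross-validation); the class counts come from the training-data description, and the macro-averaged F1 in the table is simply the unweighted mean of the per-class F1-scores:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Class counts from the model card: 1,257 positive, 1,485 negative, 1,662 neutral.
labels = np.array([1] * 1257 + [2] * 1485 + [0] * 1662)  # 0=neutral, 1=positive, 2=negative

# Hypothetical 10-fold stratified split; the original fold assignment is unknown.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_sizes = [len(test_idx) for _, test_idx in skf.split(np.zeros(len(labels)), labels)]

# Every annotated sentence lands in exactly one test fold.
assert sum(fold_sizes) == 4404

# Macro-averaged F1 = unweighted mean of the class F1-scores in the table.
macro_f1 = round((88.0 + 86.5 + 84.7) / 3, 1)  # → 86.4
```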