moha committed on
Commit 4cc7541 · 1 Parent(s): db758e6

Update README.md

Files changed (1)
  1. README.md +12 -14
README.md CHANGED
@@ -10,21 +10,19 @@ The model can achieve better results for the tasks that deal with multi-dialect
  # Classification results for multiple fake-news detection tasks with and without using the arabert_c19:
  For more details refer to the paper (link)
 
- | | \multicolumn{5}{c}{Without Fine-tuning} | \multicolumn{5}{c}{With Fine-tuning} |
- |------------------------------------|-----------------------------------------|------------------------------------------------|
- | | \multicolumn{3}{c}{Baseline models} | \multicolumn{2}{c}{Pretrained Covid-19 models} | \multicolumn{3}{c}{Baseline models} | \multicolumn{2}{c}{Pretrained Covid-19 models} |
- | | arabert | mbert | distilbert-multi | \textbf{arabert Cov19} | \textbf{mbert Cov19} | arabert | mbert | distilbert-multi | \textbf{arabert Cov19} | \textbf{mbert Cov19} |
- | \textbf{Contains hate} | 0.8346 | 0.6675 | 0.7145 | \textbf{0.8649} | 0.8492 | 0.9809 | 0.97 | 0.9736 | \textbf{0.9858} | 0.9809 |
- | \textbf{Talk about a cure} | 0.8193 | 0.7406 | 0.7127 | 0.9055 | \textbf{0.9176} | 0.99 | 0.9854 | 0.9774 | \textbf{0.9930} | 0.9904 |
- | \textbf{Give advice} | 0.8287 | 0.6865 | 0.6974 | \textbf{0.9035} | 0.8948 | 0.9793 | 0.9664 | 0.9764 | 0.9824 | \textbf{0.9862} |
- | \textbf{Rise moral} | 0.8398 | 0.7075 | 0.7049 | \textbf{0.8903} | 0.8838 | 0.9618 | 0.9663 | 0.9618 | 0.97 | \textbf{0.9712} |
- | \textbf{News or opinion} | 0.8987 | 0.8332 | 0.8099 | \textbf{0.9163} | 0.9116 | 0.9552 | 0.9409 | 0.9529 | \textbf{0.9627} | 0.9594 |
- | \textbf{Dialect} | 0.7533 | 0.558 | 0.5433 | \textbf{0.8230} | 0.7682 | 0.9266 | 0.9137 | 0.9102 | 0.9281 | \textbf{0.9317} |
- | \textbf{Blame and negative speech} | 0.7426 | 0.597 | 0.6221 | \textbf{0.7997} | 0.7794 | 0.9607 | 0.9476 | 0.9587 | \textbf{0.9653} | 0.9633 |
- | \textbf{Factual} | 0.9217 | 0.8427 | 0.8383 | 0.9575 | \textbf{0.9608} | 0.9958 | 0.9917 | 0.9925 | 0.995 | \textbf{0.9967} |
- | \textbf{Worth fact-checking} | 0.7731 | 0.5298 | 0.5413 | 0.8265 | \textbf{0.8383} | 0.9885 | 0.9824 | 0.9763 | \textbf{0.9907} | 0.9891 |
- | \textbf{Contains fake information} | 0.6415 | 0.5428 | 0.4743 | \textbf{0.7739} | 0.7228 | 0.9417 | 0.9353 | 0.9288 | \textbf{0.9578} | 0.9491 |
+ | | arabert | mbert | distilbert multi | arabert Covid-19 | mbert Covid-19 |
+ |---------------------------|----------|----------|------------------|------------------|----------------|
+ | Contains hate | 0.8346 | 0.6675 | 0.7145 | 0.8649 | 0.8492 |
+ | Talk about a cure | 0.8193 | 0.7406 | 0.7127 | 0.9055 | 0.9176 |
+ | Give advice | 0.8287 | 0.6865 | 0.6974 | 0.9035 | 0.8948 |
+ | Rise moral | 0.8398 | 0.7075 | 0.7049 | 0.8903 | 0.8838 |
+ | News or opinion | 0.8987 | 0.8332 | 0.8099 | 0.9163 | 0.9116 |
+ | Dialect | 0.7533 | 0.558 | 0.5433 | 0.823 | 0.7682 |
+ | Blame and negative speech | 0.7426 | 0.597 | 0.6221 | 0.7997 | 0.7794 |
+ | Factual | 0.9217 | 0.8427 | 0.8383 | 0.9575 | 0.9608 |
+ | Worth fact-checking | 0.7731 | 0.5298 | 0.5413 | 0.8265 | 0.8383 |
+ | Contains fake information | 0.6415 | 0.5428 | 0.4743 | 0.7739 | 0.7228 |
 
  # Preprocessing
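
The commit replaces the LaTeX-style table (which dropped its fine-tuning columns when rendered as Markdown) with a plain Markdown table covering only the without-fine-tuning scores. As a minimal sketch of what those numbers say, the snippet below computes the per-task gain of the Covid-19 pretrained arabert over the baseline arabert, using the figures from the new table (the diff does not state which metric these scores are, so they are treated as opaque scores here):

```python
# Scores copied from the "without fine-tuning" Markdown table in this commit:
# (baseline arabert, arabert Covid-19) per task. The metric is not named in
# the diff, so the values are treated as opaque classification scores.
scores = {
    "Contains hate":             (0.8346, 0.8649),
    "Talk about a cure":         (0.8193, 0.9055),
    "Give advice":               (0.8287, 0.9035),
    "Rise moral":                (0.8398, 0.8903),
    "News or opinion":           (0.8987, 0.9163),
    "Dialect":                   (0.7533, 0.8230),
    "Blame and negative speech": (0.7426, 0.7997),
    "Factual":                   (0.9217, 0.9575),
    "Worth fact-checking":       (0.7731, 0.8265),
    "Contains fake information": (0.6415, 0.7739),
}

def gains(scores):
    """Per-task improvement of the Covid-19 pretrained model over the baseline."""
    return {task: round(c19 - base, 4) for task, (base, c19) in scores.items()}

g = gains(scores)
best_task = max(g, key=g.get)
print(best_task, g[best_task])  # largest gain: Contains fake information, 0.1324
```

The Covid-19 pretrained variant improves on the baseline for every task in the table, with the largest without-fine-tuning gain on "Contains fake information".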