Update README.md
README.md (CHANGED):
@@ -117,8 +117,8 @@ Below, we present the performance of **L-Lens: LlamaLens** , where *"Eng"* refe
 
 ## English
 
-| **Task** | **Dataset** | **Metric** | **SOTA** | **
-|:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:---------------------:|:---------------------:|:--------------------:|:------------------------:|
+| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens (Eng) - SOTA)** |
+|:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:---------------------:|:---------------------:|:--------------------:|:------------------------:|
 | Checkworthiness Detection | CT24_checkworthy | f1_pos | 0.753 | 0.404 | 0.942 | 0.942 | 0.189 |
 | Claim Detection | claim-detection | Mi-F1 | -- | 0.545 | 0.864 | 0.889 | -- |
 | Cyberbullying Detection | Cyberbullying | Acc | 0.907 | 0.175 | 0.836 | 0.855 | -0.071 |

@@ -142,7 +142,7 @@ Below, we present the performance of **L-Lens: LlamaLens** , where *"Eng"* refe
 
 ## Hindi
 
-| **Task** | **Dataset** | **Metric** | **SOTA** | **
+| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens (Eng) - SOTA)** |
 |:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:---------------------:|:---------------------:|:--------------------:|:------------------------:|
 | Factuality | fake-news | Mi-F1 | -- | 0.759 | 0.994 | 0.993 | -- |
 | Hate Speech Detection | hate-speech-detection | Mi-F1 | 0.639 | 0.750 | 0.963 | 0.963 | 0.324 |
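The column this commit restores, **Δ (L-Lens (Eng) - SOTA)**, is the L-Lens-Eng score minus the SOTA score for the same task; the table rows bear this out (e.g. 0.942 − 0.753 = 0.189). A minimal Python sketch reproducing the column from the rows shown above; the `rows` list is illustrative and not part of the repository:

```python
# Sanity check for the restored Δ column: Δ = L-Lens-Eng score − SOTA score.
# Task/score triples below are copied from the table rows in this diff.
rows = [
    # (task, SOTA, L-Lens-Eng)
    ("Checkworthiness Detection", 0.753, 0.942),
    ("Cyberbullying Detection", 0.907, 0.836),
    ("Hate Speech Detection", 0.639, 0.963),
]
for task, sota, eng in rows:
    print(f"{task}: delta = {eng - sota:+.3f}")
# Checkworthiness Detection: delta = +0.189
# Cyberbullying Detection: delta = -0.071
# Hate Speech Detection: delta = +0.324
```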