MiniLM is a distilled model from the paper "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers".

We fine-tuned this model to evaluate (regression) the clickbait level of news titles.
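
A minimal inference sketch with the `transformers` library, assuming the checkpoint is published on the Hugging Face Hub with a single-output regression head; the checkpoint id below is a placeholder, not the model's real name:

```python
# Hypothetical usage sketch: load a fine-tuned MiniLM regression head and
# score one headline. "user/minilm-clickbait" is a placeholder checkpoint id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "user/minilm-clickbait"  # placeholder: replace with the real model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=1 in the checkpoint config makes the head a regressor
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("You won't believe what happened next!", return_tensors="pt")
with torch.no_grad():
    # a single logit, interpreted as the predicted clickbait strength
    score = model(**inputs).logits.squeeze().item()
print(score)
```

Higher scores indicate stronger clickbait, following the regression formulation above.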

## Intended uses & limitations

The model was designed to work with Transformers (following the paper "Predicting Clickbait Strength in Online Social Media" by Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Gupta, and Vasudeva Varma).

The model was trained on English titles.

## Training and evaluation data