Dataset tags: Token Classification · Text · English · 100K–1M · abbreviation-detection
Commit 3ea7e16 · Parent(s): e31fb85

Update README.md
README.md
CHANGED
```diff
@@ -9,7 +9,7 @@ licenses:
 - cc-by-sa4.0
 multilinguality:
 - monolingual
-paperswithcode_id:
+paperswithcode_id: plod-filtered
 pretty_name: 'PLOD: An Abbreviation Detection Dataset'
 size_categories:
 - 100K<n<1M
@@ -156,10 +156,14 @@ Now, you can use the notebook to reproduce the experiments.
 
 ### Model(s)
 
-The working model(s) are present here at these links:<br/>
 
-
-
+Our best performing models are hosted on the HuggingFace models repository
+
+| Models | [`PLOD - Unfiltered`](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) | [`PLOD - Filtered`](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) | Description |
+| --- | :---: | :---: | --- |
+| [RoBERTa<sub>large</sub>](https://huggingface.co/roberta-large) | [RoBERTa<sub>large</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | -soon- | Fine-tuning on the RoBERTa<sub>large</sub> language model |
+| [RoBERTa<sub>base</sub>](https://huggingface.co/roberta-base) | -soon- | [RoBERTa<sub>base</sub>-finetuned-abbr](https://huggingface.co/surrey-nlp/roberta-large-finetuned-abbr) | Fine-tuning on the RoBERTa<sub>base</sub> language model |
+| [AlBERT<sub>large-v2</sub>](https://huggingface.co/albert-large-v2) | [AlBERT<sub>large-v2</sub>-finetuned-abbDet](https://huggingface.co/surrey-nlp/albert-large-v2-finetuned-abbDet) | -soon- | Fine-tuning on the AlBERT<sub>large-v2</sub> language model |
 
 On the link provided above, the model(s) can be used with the help of the Inference API via the web-browser itself. We have placed some examples with the API for testing.<br/>
 
```
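The updated section points readers to the browser-based Inference API widget; the same hosted models can also be queried programmatically. A minimal sketch, assuming a personal HF access token and the standard Inference API URL pattern for the `surrey-nlp/roberta-large-finetuned-abbr` model named in the table (the example sentence and token are illustrative):

```python
import json

# Standard HF Inference API URL pattern; the model name is taken from the
# table in the diff above.
API_URL = "https://api-inference.huggingface.co/models/surrey-nlp/roberta-large-finetuned-abbr"

def build_request(text: str, token: str) -> dict:
    """Assemble the URL, auth header, and JSON body for a
    token-classification query against the Inference API."""
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {token}"},
        "data": json.dumps({"inputs": text}),
    }

# Illustrative input; "hf_xxx" stands in for a real access token.
req = build_request(
    "Light dissolved inorganic carbon (DIC) resulting from the oxidation.",
    "hf_xxx",
)
# Sending the request would then be, e.g.:
#   requests.post(req["url"], headers=req["headers"], data=req["data"])
```

The response for a token-classification model is a list of detected spans with labels and scores, mirroring what the web widget displays.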