apmoore1 committed on
Commit 11812ed · verified · 1 Parent(s): faa342f

Added reference to paper

Files changed (1)
  1. README.md +18 -2
README.md CHANGED
@@ -19,7 +19,7 @@ configs:
 
 *Note* that this dataset cannot be loaded using the HuggingFace datasets library, as the `pos` column contains two data types within a list. However, you can download the dataset and load it within Python using the `json` and `gzip` core modules; an example loading script can be found at [./example_loading_script.py](./example_loading_script.py).
 
-This dataset contains the processed English Wikipedia pages of the [Mosaico](https://github.com/SapienzaNLP/mosaico/tree/main) dataset, of which this is a subset of the original English Wikipedia dataset whereby it contains only the pages with the tags `good` and `featured`.
+This dataset contains the processed English Wikipedia pages of the [Mosaico](https://github.com/SapienzaNLP/mosaico/tree/main) dataset; it is a subset of the original English Wikipedia dataset that contains only the pages tagged `good` and `featured`. This dataset was created as part of the following [paper](https://arxiv.org/abs/2601.09648), in which we used it to train neural network based semantic taggers.
 
 Each entry in the dataset is a processed Wikipedia page containing the following annotations/tags:
 * Sentence boundaries
@@ -36,7 +36,7 @@ These annotations have been added automatically using the C version of the [CLAW
 - **Repository:** [https://github.com/UCREL/mosaico-usas-processing](https://github.com/UCREL/mosaico-usas-processing)
 - **Number of Samples:** 10,779 samples, each representing a Wikipedia page.
 
-For more details about this dataset and how it was processed see the [GitHub repository.](https://github.com/UCREL/mosaico-usas-processing)
+For more details about this dataset and how it was processed, see the [GitHub repository](https://github.com/UCREL/mosaico-usas-processing) and the associated [paper](https://arxiv.org/abs/2601.09648).
 
 ## Uses
 
@@ -650,6 +650,22 @@ The only keys that are not explained in the schema below are the following:
 }
 ```
 
+## Citation
+
+Paper: [Creating a Hybrid Rule and Neural Network Based Semantic Tagger using Silver Standard Data: the PyMUSAS framework for Multilingual Semantic Annotation](https://arxiv.org/abs/2601.09648)
+
+```
+@misc{moore2026creatinghybridruleneural,
+  title={Creating a Hybrid Rule and Neural Network Based Semantic Tagger using Silver Standard Data: the PyMUSAS framework for Multilingual Semantic Annotation},
+  author={Andrew Moore and Paul Rayson and Dawn Archer and Tim Czerniak and Dawn Knight and Daisy Lal and Gearóid Ó Donnchadha and Mícheál Ó Meachair and Scott Piao and Elaine Uí Dhonnchadha and Johanna Vuorinen and Yan Yabo and Xiaobin Yang},
+  year={2026},
+  eprint={2601.09648},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2601.09648},
+}
+```
+
 ## Dataset Card Authors
 
 * UCREL (ucrel@lancaster.ac.uk)
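
The *Note* near the top of the card says the dataset must be loaded with the `json` and `gzip` core modules rather than the HuggingFace datasets library. A minimal loading sketch in that spirit follows; it assumes the download is a gzip-compressed JSON Lines file (one page per line) and uses a hypothetical filename, so the linked [./example_loading_script.py](./example_loading_script.py) remains the authoritative reference.

```python
import gzip
import json

# Hypothetical path: substitute the actual data file from the dataset repo.
path = "data.jsonl.gz"

# Assuming JSON Lines: one JSON object (one processed Wikipedia page) per line.
# If the file were instead a single JSON document, use json.load(fh) once.
pages = []
with gzip.open(path, mode="rt", encoding="utf-8") as fh:
    for line in fh:
        pages.append(json.loads(line))

print(len(pages))       # the card reports 10,779 pages in total
print(pages[0].keys())  # should include the `pos` column mentioned in the note
```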