AnnaWegmann committed 3fb44b9 · verified · 1 parent: 2aae9e6

Create README.md

Files changed (1): README.md (+17 −0)
README.md ADDED
@@ -0,0 +1,17 @@
+ ---
+ license: cc-by-4.0
+ language:
+ - en
+ ---
+
+ CORE task as adapted for the "Tokenization is Sensitive to Language Variation" paper, see [arXiv](https://arxiv.org/abs/2502.15343).
+ Originally downloaded from [https://github.com/TurkuNLP/CORE-corpus](https://github.com/TurkuNLP/CORE-corpus); see also [Register identification from the unrestricted open Web using the Corpus of Online Registers of English](https://link.springer.com/article/10.1007/s10579-022-09624-1). If you want to use the original CORE dataset, refer to these original sources.
+
+ ```bibtex
+ @article{wegmann2025tokenization,
+   title={Tokenization is Sensitive to Language Variation},
+   author={Wegmann, Anna and Nguyen, Dong and Jurgens, David},
+   journal={arXiv preprint arXiv:2502.15343},
+   year={2025}
+ }
+ ```
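
The card lists csv as the format and pandas as a supported library. A minimal sketch of the expected access pattern with pandas — the column names (`text`, `register`) and the inline sample are hypothetical placeholders, since the card does not document the schema:

```python
from io import StringIO

import pandas as pd

# Stand-in for one of the dataset's csv files; the real columns are not
# documented on this card, so `text` and `register` are hypothetical.
sample_csv = StringIO('text,register\n"An example document.",narrative\n')

# For the actual dataset, pass the downloaded csv path to read_csv instead.
df = pd.read_csv(sample_csv)
print(df.shape)
```

With the Hugging Face `datasets` library, a local csv file could equally be loaded via `load_dataset("csv", data_files=...)`.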