Update README.md

README.md (CHANGED)
pretty_name: WikiSplit++
size_categories:
- 10M<n<100M
---

Preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).

Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) is tokenized and contains some noise, we used the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) to detokenize the text and removed noisy text fragments.

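To illustrate what detokenization does here, the sketch below is a simplified, regex-based stand-in (the dataset itself was produced with the Moses detokenizer, which handles many more cases; the function name and rules below are illustrative assumptions, not the actual preprocessing code):

```python
import re

def simple_detokenize(tokens):
    """Rough illustration of detokenization: join tokens with spaces,
    then pull punctuation and English contractions back onto the
    preceding word. The real pipeline uses the Moses detokenizer."""
    text = " ".join(tokens)
    # Attach closing punctuation to the preceding word.
    text = re.sub(r" ([.,!?;:%)\]])", r"\1", text)
    # Remove the space after opening brackets.
    text = re.sub(r"([(\[]) ", r"\1", text)
    # Re-join contractions such as "is n't" and "it 's".
    text = re.sub(r" (n't|'s|'re|'ve|'ll|'d|'m)\b", r"\1", text)
    return text

tokens = "It 's tokenized , is n't it ?".split()
print(simple_detokenize(tokens))  # It's tokenized, isn't it?
```
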
For detailed information on the preprocessing steps, please see [here](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).

This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).