---
license: cc-by-sa-4.0
size_categories:
- 10B<n<100B
---
# XLM-R-BERTić dataset

## Composition and usage

This dataset contains 11.5 billion words and consists of the following splits:
* macocu_hbs
* hr_news
* bswac
* …
* riznica
* srwac

The dataset was deduplicated with `onion` on the basis of 5-tuples of words, with the duplicate threshold set to 90%.

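The `onion` tool itself implements a more elaborate algorithm; as a rough, hypothetical illustration of the idea only, a document can be dropped when the share of its word 5-tuples already seen in earlier documents exceeds the 90% threshold:

```python
# Simplified sketch of 5-tuple near-duplicate filtering (NOT the actual
# `onion` implementation): drop a document when more than 90% of its
# word 5-grams already occurred in previously kept input.

def word_ngrams(text, n=5):
    """Return the list of word n-tuples in a whitespace-tokenized text."""
    words = text.split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def deduplicate(docs, n=5, threshold=0.9):
    """Keep each document whose duplicate 5-gram ratio is at most `threshold`."""
    seen = set()
    kept = []
    for doc in docs:
        grams = word_ngrams(doc, n)
        if not grams:
            # Too short to measure; keep as-is.
            kept.append(doc)
            continue
        dup_ratio = sum(g in seen for g in grams) / len(grams)
        if dup_ratio <= threshold:
            kept.append(doc)
        seen.update(grams)
    return kept
```

An exact repeat of an earlier document has a duplicate ratio of 1.0 and is discarded, while a previously unseen document passes through unchanged.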
The entire dataset can be downloaded and used as follows:
```python
import datasets